The purpose of this paper is to examine hypothesized links between the Dana Center Mathematics Pathways’ (DCMP) Foundations of Mathematical Reasoning curriculum and the four hypothesized sources of self-efficacy. A sample of developmental mathematics students taught with a curriculum incorporating active and collaborative learning reported increased ratings on social persuasions from the beginning to the end of the semester.
The study examines changes in the four sources of self-efficacy. Students completed a pre- and post-survey, and non-parametric tests were used to measure changes in their ratings.
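A paired pre/post comparison of this kind is often run with a Wilcoxon signed-rank test, a common non-parametric choice for ordinal ratings. The sketch below uses invented ratings for illustration; the data and sample size are not from the study.

```python
from scipy.stats import wilcoxon

# Hypothetical pre- and post-semester self-efficacy ratings for
# eight students (illustrative values only, not the study's data).
pre = [3.0, 2.5, 4.0, 3.5, 2.0, 3.0, 4.5, 2.5]
post = [3.3, 3.0, 4.2, 4.2, 3.0, 3.4, 4.6, 3.1]

# Wilcoxon signed-rank test on the paired differences; a
# non-parametric alternative to the paired t-test.
stat, p = wilcoxon(pre, post)
print(f"W = {stat}, p = {p:.4f}")
```

With every post rating above its pre rating, the signed-rank statistic is zero and the two-sided p-value is small, consistent with an increase from pre to post.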
The paper provides empirical insights into changes in the four sources of self-efficacy with the implementation of a new curriculum in developmental mathematics classrooms. Students in the DCMP Foundations course increased their ratings on social persuasions and mastery experiences and decreased their ratings on physiological states. Of the four sources, course grade accounted for the largest proportion of variability in mastery experiences, followed by vicarious experiences, social persuasions and physiological states.
A control group was not included. Therefore, comparisons between students enrolled in the intervention course and a traditional course were not possible.
An implication of the study is that a curriculum emphasizing face-to-face communication and collaborative learning activities might be linked to more positive measures of the sources of self-efficacy.
This paper fulfils a need to study how the implementation of an alternative curriculum in developmental mathematics classrooms can be linked to students’ self-efficacy.
Among the gold standards in human resource development (HRD) research are studies that test theoretically developed hypotheses and use experimental designs. A typical experimental design would involve collecting pretest and posttest data on individuals assigned to a control or experimental group. Data from such a design that considered whether training made a difference in knowledge, skills or attitudes, for example, could help advance practice. Using simulated datasets, situated in the example of a scenario-planning intervention, this paper aims to show that choosing a data analysis path that does not consider the associated assumptions can misrepresent findings and the resulting conclusions. A review of HRD articles in a select set of journals indicated that some researchers reporting on pretest-posttest designs with two groups did not report the associated statistical assumptions and reported results from repeated-measures analysis of variance that are considered of minimal utility.
Using heuristic datasets situated in the example of a scenario-planning intervention, the paper shows that choosing a data analysis path that does not consider the associated assumptions can misrepresent findings and the resulting conclusions. Articles in HRD journals that reported pretest-posttest control group designs were coded.
The authors' illustrations provide evidence for the importance of testing assumptions and the need for researchers to consider alternate analyses when assumptions fail, particularly the homogeneity of regression slopes assumption.
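The assumption check the authors emphasize can be sketched as follows: before an ANCOVA on pretest-posttest control-group data, test the group-by-pretest interaction; only if slopes are homogeneous is the covariate-adjusted group comparison appropriate. This is a minimal sketch on simulated data using `statsmodels`; the variable names, effect sizes and the conventional 0.05 cutoff are illustrative assumptions, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated pretest-posttest control-group data (illustrative only):
# both groups share the same pretest-posttest slope by construction.
rng = np.random.default_rng(42)
n = 60
group = np.repeat(["control", "treatment"], n // 2)
pre = rng.normal(50, 10, n)
post = pre + np.where(group == "treatment", 5.0, 0.0) + rng.normal(0, 5, n)
df = pd.DataFrame({"pre": pre, "post": post, "group": group})

# Step 1: test the homogeneity-of-regression-slopes assumption via
# the group x pretest interaction term.
slopes = smf.ols("post ~ pre * C(group)", data=df).fit()
p_interaction = slopes.pvalues["pre:C(group)[T.treatment]"]

# Step 2: if slopes are homogeneous, ANCOVA with the pretest as a
# covariate is defensible; otherwise an alternate analysis is needed.
if p_interaction > 0.05:
    ancova = smf.ols("post ~ pre + C(group)", data=df).fit()
    print("Adjusted group effect:", ancova.params["C(group)[T.treatment]"])
else:
    print("Slopes differ; the equal-slopes assumption of ANCOVA fails.")
```

Reporting the interaction test alongside the main analysis is exactly the kind of assumption disclosure the review found missing in some published designs.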
This paper provides guidance to researchers faced with analyzing data from a pretest-posttest control group experimental design, so that they may select the most parsimonious solution that honors the ecological validity of the data.