Evaluation vs. Research

1. While evaluation and research overlap in many respects and are often conflated when loosely defined, it is important to understand that the two processes are quite different. By intent, evaluation concerns itself with informing particular stakeholders about programs or goals, while research seeks to inform those “well beyond the stakeholders” in order to advance new learning and knowledge (Boulmetis & Dutwin, 2011, p. 171). Looking at design and purpose priorities, it is often fairly obvious whether evaluation or research is being conducted. Research is typically more concerned with cause and effect, whereas evaluation is designed so that “programs can change” (Boulmetis & Dutwin, 2011, p. 173). In research, the sample is critical: it must be unbiased and appropriately collected. With evaluation, however, which by nature concerns a specific group or organization, the sample will by design be limited and not broadly generalizable.
Again, a major difference between the two processes is the audience. Evaluations are designed, implemented, and shared with particular stakeholders for specific purposes; in fact, the results are often never shared beyond this group. Research, by contrast, is vetted through peer review and frequently published so that others can learn from it.
2. The Wikipedia article discusses the reasons behind conducting an evaluation as efficiency and effectiveness, but it leaves out the possible purpose of impact. The entry identifies stakeholders as the reason many evaluations are initiated and also discusses the importance of engaging with them. I thought it was interesting that the entry also laid out the pros and cons of external versus internal evaluators, which we discussed in our coursework as well. One thing lacking from the entry was an outline of the major evaluation designs (e.g., the goal-free model). Both our coursework and the Wikipedia entry present the evaluation framework as stages beginning with 1) needs assessment and 2) program planning, after which the stages differ. I did not notice a specific Evaluator’s Program Description framework within the entry, which has been a major piece of our work so far this semester.
3. First off, I found it interesting that there is an American Evaluation Association; I definitely had not run into so specialized an organization before. I appreciate that the organization offers its members many avenues for connection through groups and social media. I took the opportunity to search for evaluators in the District of Columbia and was amazed by the number of names and organizations the search surfaced as possible connections if I were looking for an evaluator. I also took some time to review the association’s mission and vision and was pleased to see it has quite the set of governing policies, complete with a guidebook, end goals, and strategic plans. This reminded me of the course I took on Instructional Design and how fascinating it was to discover a whole new world of careers and job focus.

Reference

Boulmetis, J., & Dutwin, P. (2011). The ABCs of evaluation: Timeless techniques for program and project managers. San Francisco, CA: Jossey-Bass.

ST Math – Blended Learning in California

The MIND Research Institute contracted the Evaluation Research Program at WestEd to assess its blended learning program, Spatial-Temporal Math (ST Math), in elementary school settings in California. WestEd used data from the California Standards Test (CST) in grades 2-5 to compare the CST scores of students using ST Math to those of similar groups of students not using it. Specifically, WestEd compared schools in their first year of full implementation to schools not using ST Math, where full implementation was defined as 85% of students completing at least 50% of the blended learning curriculum during the course of the school year. Results indicated that grades using ST Math scored higher than comparable grades not provided with the program, and that the most significant gains occurred in 2nd, 3rd, and 5th grades.
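To make that full-implementation threshold concrete, here is a minimal sketch of how such a check might be computed. This is purely illustrative Python with hypothetical names and data; it is not WestEd's actual analysis code.

```python
# Hypothetical illustration of the "full implementation" rule described
# above (85% of students completing at least 50% of the curriculum).
# Function and variable names are assumptions for this sketch, not
# anything taken from the WestEd report itself.

def is_full_implementation(completion_rates, student_share=0.85, curriculum_share=0.50):
    """Return True if at least `student_share` of students completed
    at least `curriculum_share` of the curriculum."""
    if not completion_rates:
        return False
    completers = sum(1 for rate in completion_rates if rate >= curriculum_share)
    return completers / len(completion_rates) >= student_share

# Example: 9 of 10 students (90%) completed half or more of the
# curriculum, so this hypothetical grade counts as full implementation.
rates = [0.9, 0.7, 0.5, 0.6, 0.55, 0.8, 0.95, 0.5, 0.65, 0.2]
print(is_full_implementation(rates))  # True
```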

Interestingly enough, the research group made sure to clearly define that grades were made up of all the classes within participating schools. For example, if a school had six grade 2 classes, all six were included regardless of individual classroom implementation percentages. The report also noted that 212 schools were included in the full-implementation groupings. It was also interesting how detailed the evaluation was, breaking down the CST data not only overall but also by growth for students at the advanced and proficient performance levels. I would have been interested to see results for the far below basic and basic performance levels as well.

It was also intriguing how clearly WestEd wrote up the limitations of the evaluation. The evaluators suggested that the data may have been affected by self-selection: ST Math was implemented by choice within schools, so motivation surrounding math in general could have been high at schools that elected to adopt the new program. They also acknowledged a limitation in that it was impossible to verify that students had not used the program in previous years, even though the study was framed as a first-year usage evaluation.

Wendt, S., Rice, J., & Nakamoto, J. (2014). Evaluation of the MIND Research Institute's Spatial-Temporal Math (ST Math) program in California. WestEd. Retrieved from https://www.wested.org/wp-content/files_mf/1415393677Evaluation_STMath_Program_20141107.pdf