This is a personal narrative, but I trust not a self-regarding one. For more years than I care to remember I have been working in the field of curriculum (or ‘program’) evaluation. The field by any standards is dispersed and fragmented, with variously ascribed purposes, roles, implicit values, political contexts, and social research methods. Attempts to organize this territory into an ‘evaluation theory tree’ (e.g. Alkin, M., & Christie, C. (2003). An evaluation theory tree. In M. Alkin (Ed.), Evaluation roots: Tracing theorists’ views and influences (pp. 12–65). Thousand Oaks, CA: Sage) have identified broad types or ‘branches’, but the migration of specific characteristics (like ‘case study’) or individual practitioners across the boundaries has tended to undermine the analysis at the level of detail, and there is no suggestion that it represents a cladistic taxonomy. There is, however, general agreement that the roots of evaluation practice tap into a variety of cultural sources, being grounded bureaucratically in (potentially conflicting) doctrines of accountability and methodologically in discipline-based or pragmatically eclectic formats for systematic social enquiry.
In general, this diversity is not treated as problematic. The professional evaluation community has increasingly taken the view (‘let all the flowers grow’) that evaluation models can be deemed appropriate across a wide spectrum, with their appropriateness determined by the nature of the task and its context, including in relation to hybrid studies using mixed models or displaying what Geertz (Geertz, C. (1980). Blurred genres: The refiguration of social thought. The American Scholar, 49(2), 165–179) called ‘blurred genres’. However, from time to time historic tribal rivalries re-emerge, as particular practitioners feel the need to defend their modus operandi (and thereby their livelihood) against paradigm shifts, or as governments and other sponsors of program evaluation seek for ideological reasons to prioritize certain types of study at the expense of others. The latter possibility poses a potential threat that needs to be taken seriously by evaluators within the broad tradition showcased in this volume: interpretive qualitative case studies of educational programs that combine naturalistic description (often ‘thick’; Geertz, C. (1973). Thick description: Toward an interpretive theory of culture. In The interpretation of cultures (pp. 3–30). New York, NY: Basic Books) with a values-orientated analysis of their implications. Such studies are more likely to seek inspiration from anthropology or critical discourse analysis than from the randomized controlled trials familiar in medical research or laboratory practice in the physical sciences, despite the impressive rigour of the latter in appropriate contexts. It is the risk of ideological allegiance that I address in this chapter.
Jenkins, D. (2015), "‘Lead’ Standard Evaluation", Case Study Evaluation: Past, Present and Future Challenges (Advances in Program Evaluation, Vol. 15), Emerald Group Publishing Limited, Bingley, pp. 157-176. https://doi.org/10.1108/S1474-786320140000015017
Copyright © 2015 Emerald Group Publishing Limited