Table of contents (21 chapters)
This chapter gives one version of the recent history of case study in evaluation. It looks back over the emergence of case study as a sociological method, developed in the early years of the 20th century and celebrated and elaborated by the Chicago School of urban sociology at the University of Chicago throughout the 1920s and 1930s. Some of the basic methods, including constant comparison, were generated at that time. Only partly influenced by this methodological movement, an alliance between an Illinois-based team in the United States and a team at the University of East Anglia in the United Kingdom recast the case method as a key tool for the evaluation of social and educational programmes.
The full ‘storytelling’ paper was written in 1978 and was influential in its time. It is reprinted here, introduced by the author’s reflection on it in 2014. The chapter describes the author’s early disenchantment with traditional approaches to educational research.
He regards educational research as, at best, a misnomer, since little of it is preceded by a search. Those titled educational researchers often fancy themselves as scientists at work, but those whom they attempt to describe are often artists at work. Statistical methodologies enable educational researchers to measure something, but their measurements can neither capture nor explain splendid teaching.
Since such a tiny fraction of what is published in educational research journals influences school practitioners, professional researchers should risk trying alternative approaches to uncovering what is going on in schools.
Storytelling is posited as a possible key to producing insights that inform and ultimately improve educational practice. The chapter advocates openness to broad inquiry into the culture of the educational setting.
Much programme and policy evaluation yields to the pressure to report on the productivity of programmes and is perforce compliant with the conditions of contract. Too often the view of these evaluations is limited to a literal reading of the analytical challenge. If we are evaluating X we look critically at X1, X2 and X3. There might be cause for embracing adjoining data sources such as W1 and Y1. This ignores frequent realities that an evaluation specification is only an approximate starting point for an unpredictable journey into comprehensive understanding; that the specification represents only that which is wanted by the sponsor, and not all that may be needed; and that the contractual specification too often insists on privileging the questions and concerns of a few. Case study evaluation provides an alternative that allows for the less-than-literal in the form of analysis of contingencies – how people, phenomena and events may be related in dynamic ways, how context and action have only a blurred dividing line and how what defines the case as a case may only emerge late in the study.
What is our unit of analysis and by implication what are the boundaries of our cases? This is a question we grapple with at the start of every new project. We observe that case studies are often referred to in an unreflective manner and are often conflated with geographical location. Neat units of analysis and clearly bounded cases usually do not reflect the messiness encountered during qualitative fieldwork. Others have puzzled over these questions. We briefly discuss work to problematise the use of households as units of analysis in the context of apartheid South Africa and then consider work of other anthropologists engaged in multi-site ethnography. We have found the notion of ‘following’ chains, paths and threads across sites to be particularly insightful.
We present two examples from our work studying commissioning in the English National Health Service (NHS) to illustrate our struggles with case studies. The first is a study of Practice-based Commissioning groups and the second is a study of the early workings of Clinical Commissioning Groups. In both instances we show how ideas of what constituted our unit of analysis and the boundaries of our cases became less clear as our research progressed. We also discuss pressures we experienced to add more case studies to our projects. These examples illustrate the primacy for us of understanding interactions between place, local history and rapidly developing policy initiatives. Understanding cases in this way can be challenging in a context where research funders hold different views of what constitutes a case.
A case study methodology was applied as a major component of a mixed-methods approach to the evaluation of a mobile dementia education and support service in the Bega Valley Shire, New South Wales, Australia. In-depth interviews with people with dementia (PWD), their carers, programme staff, family members and service providers were used, together with document analysis, including analysis of client case notes and the client database.
The strengths of the case study approach included: (i) simultaneous evaluation of programme process and worth, (ii) eliciting the theory of change and addressing the problem of attribution, (iii) demonstrating the impact of the programme on earlier steps identified along the causal pathway, (iv) understanding the complexity of confounding factors, (v) eliciting the critical role of the social, cultural and political context, (vi) understanding the importance of influences contributing to differences in programme impact for different participants and (vii) providing insight into how programme participants experience the value of the programme, including unintended benefits.
The broader case, the collective experience of dementia in a predominantly rural area and, as part of this experience, the impact of a mobile programme of support and education, grew from the investigation of the programme experience of ‘individual cases’ of carers and PWD. Investigation of living conditions, relationships and service interactions through observation, together with more in-depth interviews with service providers and family members, would have provided valuable perspectives and a thicker description of the case, increasing understanding and strengthening the evaluation.
This chapter describes a case study of a social change project in medical education (primary care), in which the critical interpretive evaluation methodology I sought to use came up against the “positivist” approach preferred by senior figures in the medical school who commissioned the evaluation.
I describe the background to the study and justify the evaluation approach and methods employed in the case study – drawing on interviews, document analysis, survey research, participant observation, literature reviews, and critical incidents – one of which was the decision by the medical school hierarchy to restrict my contact with the lay community in my official evaluation duties. The use of critical ethnography also embraced wider questions about circuits of power and the social and political contexts within which the “social change” effort occurred.
Central to my analysis is John Gaventa’s theory of power as “the internalization of values that inhibit consciousness and participation while encouraging powerlessness and dependency.” Gaventa argued, essentially, that the evocation of power has as much to do with preventing decisions as with bringing them about. My chosen case illustrated all three dimensions of power that Gaventa originally uncovered in his portrait of self-interested Appalachian coal mine owners: (1) communities were largely excluded from decision making power; (2) issues were avoided or suppressed; and (3) the interests of the oppressed went largely unrecognized.
The account is auto-ethnographic, hence the study is limited by my abilities, biases, and subject positions. I reflect on these in the chapter.
The study not only illustrates the unique contribution of case study as a research methodology but also its low status in the positivist paradigm adhered to by many doctors. Indeed, the tension between the potential of case study to illuminate the complexities of community engagement through thick description and the rejection of this very method as inherently “flawed” suggests that medical education may be doomed to its neoliberal fate for some time to come.
This is a personal narrative, but I trust not a self-regarding one. For more years than I care to remember I have been working in the field of curriculum (or ‘program’) evaluation. The field by any standards is dispersed and fragmented, with variously ascribed purposes, roles, implicit values, political contexts, and social research methods. Attempts to organize this territory into an ‘evaluation theory tree’ (e.g. Alkin, M., & Christie, C. (2003). An evaluation theory tree. In M. Alkin (Ed.), Evaluation roots: Tracing theorists’ views and influences (pp. 12–65). Thousand Oaks, CA: Sage) have identified broad types or ‘branches’, but the migration of specific characteristics (like ‘case study’) or individual practitioners across the boundaries has tended to undermine the analysis at the level of detail, and there is no suggestion that it represents a cladistic taxonomy. There is, however, general agreement that the roots of evaluation practice tap into a variety of cultural sources, being grounded bureaucratically in (potentially conflicting) doctrines of accountability and methodologically in discipline-based or pragmatically eclectic formats for systematic social enquiry.
In general, this diversity is not treated as problematic. The professional evaluation community has increasingly taken the view (‘let all the flowers grow’) that evaluation models can be deemed appropriate across a wide spectrum, with their appropriateness determined by the nature of the task and its context, including in relation to hybrid studies using mixed models or displaying what Geertz (Geertz, C. (1980/1993). Blurred genres: The refiguration of social thought. The American Scholar, 49(2), 165–179) called ‘blurred genres’. However, from time to time historic tribal rivalries re-emerge as particular practitioners feel the need to defend their modus operandi (and thereby their livelihood) against paradigm shifts or governments and other sponsors of program evaluation seeking for ideological reasons to prioritize certain types of study at the expense of others. The latter possibility poses a potential threat that needs to be taken seriously by evaluators within the broad tradition showcased in this volume: interpretive qualitative case studies of educational programs that combine naturalistic (often ‘thick’; Geertz, C. (1973). Thick description: Towards an interpretive theory of culture. In The interpretation of culture (pp. 3–30). New York, NY: Basic Books) description with a values-orientated analysis of their implications. Such studies are more likely to seek inspiration from anthropology or critical discourse analysis than from the randomized controlled trials familiar in medical research or laboratory practice in the physical sciences, despite the impressive rigour of the latter in appropriate contexts. It is the risk of ideological allegiance that I address in this chapter.
This chapter considers the usefulness and validity of public inquiries as a source of data and preliminary interpretation for case study research. Using two contrasting examples – the Bristol Inquiry into excess deaths in a children’s cardiac surgery unit and the Woolf Inquiry into a breakdown of governance at the London School of Economics (LSE) – I show how academics can draw fruitfully on, and develop further analysis from, the raw datasets, published summaries and formal judgements of public inquiries.
Academic analysis of public inquiries can take two broad forms, corresponding to the two main approaches to individual case study defined by Stake: instrumental (selecting the public inquiry on the basis of pre-defined theoretical features and using the material to develop and test theoretical propositions) and intrinsic (selecting the public inquiry on the basis of the particular topic addressed and using the material to explore questions about what was going on and why).
The advantages of a public inquiry as a data source for case study research typically include a clear and uncontested focus of inquiry; the breadth and richness of the dataset collected; the exceptional level of support available for the tasks of transcribing, indexing, collating, summarising and so on; and the expert interpretations and insights of the inquiry’s chair (with which the researcher may or may not agree). A significant disadvantage is that whilst the dataset collected for a public inquiry is typically ‘rich’, it has usually been collected under far from ideal research conditions. Hence, while public inquiries provide a potentially rich resource for researchers, those who seek to use public inquiry data for research must justify their choice on both ethical and scientific grounds.
This chapter introduces the notion of the ‘Innovation Story’ as a methodological approach to public policy evaluation, which builds in greater opportunity for learning and reflexivity.
The Innovation Story is an adaptation of the case study approach and draws on participatory action research traditions. It is a structured narrative that describes a particular public policy innovation in the personalised contexts in which it is experienced by innovators. Its construction involves a discursive process through which involved actors tell their story, explain it to others, listen to their questions and co-construct knowledge of change together.
The approach was employed to elaborate five case studies of place-based leadership and public service innovation in the United Kingdom, The Netherlands and Mexico. The key findings are that spaces in which civic leaders come together from different ‘realms’ of leadership in a locality (community, business, professional managers and political leaders) can become innovation zones that foster inventive behaviour. Much depends on the quality of civic leadership, and its capacity to foster genuine dialogue and co-responsibility. This involves the evaluation seeking out influential ideas from below the level of strategic management, and documenting leadership activities of those who are skilled at ‘boundary crossing’ – for example, communicating between sectors.
The evaluator can be a key player in this process, as a convenor of safe spaces for actors to come together to discuss and deliberate before returning to practice. Our approach therefore argues for a particular awareness of the political nature of policy evaluation in terms of negotiating these spaces, and the need for politically engaged evaluators who are skilled in facilitating collective learning processes.
What are the boundaries of a case study, and what should new evaluators do when these boundaries are breached? How does a new evaluator interpret a breakdown of communication, and how can new evaluators protect themselves when an evaluation fails? This chapter discusses the journey of an evaluator new to the field of qualitative evaluative inquiry. Integrating the perspective of a senior evaluator, the authors reflect on three key experiences that informed the new evaluator. The authors hope to provide a rare insight into case study practice, as emotional issues turn out to be just as complex as the methodology used.