Search results

21 – 30 of over 130,000
Book part
Publication date: 9 August 2012

Matthew Birnbaum, Kim Okahara and Mallory Warner

Abstract

This chapter examines the challenges of developing and implementing a new national evaluation approach in a complex library funding program. The approach shifts from a prior outcome-based evaluation legacy using logic models to one relying on nonlinear logic mapping. The new approach is explored by studying the Measuring Success initiative, launched in March 2011 for the largest funded library services program in the United States, the Institute of Museum and Library Services formula-based Grants to States program. The chapter explores the relative benefits of nonlinear logic maps and emphasizes the importance of scaling evaluation from individual projects toward clusters of similar library services and activities. The introduction of this new evaluation approach required a new conceptual frame, drawing on diffusion, strategic planning, and other current evaluation theories. The new approach can be widely generalized to many library services, although its focus is on a uniform interorganizational social network embedded in service delivery. The chapter offers a new evaluation perspective for library service professionals by moving from narrow methodological concerns involving measurement to broader administrative issues, including diffusion of library use, effective integration of systematic data into program planning and management, and strengthening of multi-stakeholder communication.

Book part
Publication date: 3 January 2015

Samantha Abbato

Abstract

A case study methodology was applied as a major component of a mixed-methods approach to the evaluation of a mobile dementia education and support service in the Bega Valley Shire, New South Wales, Australia. In-depth interviews with people with dementia (PWD), their carers, programme staff, family members and service providers were used, together with document analysis, including analysis of client case notes and the client database.

The strengths of the case study approach included: (i) simultaneous evaluation of programme process and worth, (ii) eliciting the theory of change and addressing the problem of attribution, (iii) demonstrating the impact of the programme on earlier steps identified along the causal pathway, (iv) understanding the complexity of confounding factors, (v) eliciting the critical role of the social, cultural and political context, (vi) understanding the importance of influences contributing to differences in programme impact for different participants and (vii) providing insight into how programme participants experience the value of the programme, including unintended benefits.

The broader case, the collective experience of dementia and, within that experience, the impact of a mobile programme of support and education in a predominantly rural area, grew from the investigation of the programme experience of ‘individual cases’ of carers and PWD. Investigation of living conditions, relationships and service interactions through observation, together with more in-depth interviews with service providers and family members, would have provided valuable perspectives and a thicker description of the case, increasing both the understanding of the case and the strength of the evaluation.

Details

Case Study Evaluation: Past, Present and Future Challenges
Type: Book
ISBN: 978-1-78441-064-3

Open Access
Article
Publication date: 16 August 2022

Patricia Lannen and Lisa Jones

Abstract

Purpose

Calls for the development and dissemination of evidence-based programs to support children and families have been increasing for decades, but progress has been slow. This paper aims to argue that a singular focus on evaluation has limited the ways in which science and research are incorporated into program development, and to advocate instead for the use of a new concept, “scientific accompaniment,” to expand and guide program development and testing.

Design/methodology/approach

A heuristic is provided to guide research–practice teams in assessing the program’s developmental stage and level of evidence.

Findings

In an idealized pathway, scientific accompaniment begins early in program development, with ongoing input from both practitioners and researchers, resulting in programs that are both effective and scalable. The heuristic also provides guidance for how to “catch up” on evidence when program development and science utilization are out of sync.

Originality/value

While implementation models provide ideas on improving the use of evidence-based practices, social service programs suffer from a significant lack of research and evaluation. Evaluation resources are typically not used by social service program developers and collaboration with researchers happens late in program development, if at all. There are few resources or models that encourage and guide the use of science and evaluation across program development.

Details

Journal of Children's Services, vol. 17 no. 4
Type: Research Article
ISSN: 1746-6660

Article
Publication date: 28 October 2014

Carla Curado and Susana Martins Teixeira

Abstract

Purpose

This study’s purpose is to contribute to the literature on training evaluation by following Kirkpatrick’s four-level model and estimating each training program’s return on investment (ROI), using evidence from a small firm.

Design/methodology/approach

This case study uses data collected at a logistics company based upon training output indicators like training program evaluation data; individual performance evaluation reports; information on attained objectives; service and productivity levels; quality audit reports; and accounting data.

Findings

Results show that all the training programs examined report evaluation procedures at the four levels (reactions, learning, behavior and results). ROI for each training program was estimated based upon the costs and benefits associated with each program. The two training programs presenting above-average returns address work quality and work conditions. The program addressing corporate social responsibility issues produced below-average results.
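At Kirkpatrick's results level, the ROI estimate described above is plain arithmetic: net programme benefits divided by programme costs, expressed as a percentage. A minimal sketch in Python; the programme names and figures below are purely illustrative assumptions, not data from the study:

```python
# Training ROI at Kirkpatrick's level 4: (benefits - costs) / costs * 100.
# All figures below are hypothetical placeholders for illustration only.

def roi_percent(benefits: float, costs: float) -> float:
    """Return training ROI as a percentage of programme costs."""
    return (benefits - costs) / costs * 100

# Hypothetical programmes mapped to (costs, monetised benefits).
programs = {
    "work quality": (4000.0, 9000.0),
    "work conditions": (3000.0, 6600.0),
    "social responsibility": (2500.0, 2750.0),
}

for name, (costs, benefits) in programs.items():
    print(f"{name}: ROI = {roi_percent(benefits, costs):.0f}%")
```

A programme whose monetised benefits merely match its costs yields 0% ROI, which is why converting training outcomes into comparable monetary terms is the hard part of the exercise, not the formula itself.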

Research limitations/implications

Limitations of this study may result from collecting data at a single moment in time and from a single organization, which precludes generalization and extrapolation of the results.

Practical implications

This case study should inspire managers in small and medium enterprises (SMEs) to implement training evaluation practices and ROI estimation. Having the ROI estimation available allows better management of the training budget, as presenting ROI provides an argument for demonstrating value and progress.

Originality/value

The originality of this study lies in the way it reports training evaluation practices at the four levels established by Kirkpatrick’s framework (2005) and complements them with ROI estimation for five training courses run at a Portuguese SME logistics firm.

Details

European Journal of Training and Development, vol. 38 no. 9
Type: Research Article
ISSN: 2046-9012

Article
Publication date: 23 November 2021

Philip M. Reeves, Jennifer Claydon and Glen A. Davenport

Abstract

Purpose

Program evaluation stands as an evidence-based process that would allow institutions to document and improve the quality of graduate programs and determine how to respond to growing calls for aligning training models to economic realities. This paper aims to present the current state of evaluation in research-based doctoral programs in STEM fields.

Design/methodology/approach

To highlight the recent evaluative processes, the authors restricted the initial literature search to papers published in English between 2008 and 2019. As the authors were motivated by the shift at NIH, this review focuses on STEM programs, though papers on broader evaluation efforts were included as long as STEM-specific results could be identified. In total, 137 papers were included in the final review.

Findings

Only nine papers presented an evaluation of a full program. Instead, papers focused on evaluating individual components of a graduate program, testing small interventions or examining existing national data sets. The review did not find any documents that focused on the continual monitoring of training quality.

Originality/value

This review can serve as a resource, encourage transparency and provide motivation for faculty and administrators to gather and use assessment data to improve training models. By understanding how existing evaluations are conducted and implemented, administrators can apply evidence-based methodologies to ensure the highest quality training to best prepare students.

Details

Studies in Graduate and Postdoctoral Education, vol. 13 no. 2
Type: Research Article
ISSN: 2398-4686

Article
Publication date: 5 December 2017

William J. Ashton and Rajesh V. Manchanda

Abstract

Purpose

This paper aims to report a research approach that explores how to use evaluations of previous social marketing efforts to assess and guide a new shelterbelt program called Working Tree. By targeting farmers, this new program aims to gain benefits from enhancing and expanding on-farm tree shelterbelts on the Canadian prairies.

Design/methodology/approach

This paper uses a novel method that relies on secondary data from six completed social marketing cases as data for a comparative analysis with the new program. A conceptual framework is proposed and applied. This framework incorporates process and outcome indicators of evaluation, key dimensions of the rational choice theory and proven practices from experience.

Findings

Analysis suggests key parameters of the Working Tree program to be appropriate, with some modifications. However, limitations in the data also point to avenues for future research to deepen the authors’ understanding of assessing a new social marketing program in the prelaunch phase. More research is needed on what works, where and why.

Research limitations/implications

The seven indices are a modest set for comparison and are not exhaustive. The six selected cases are a small sample, unable to fully reflect the environmental nature of the new program; yet they contained critical data for the comparative analysis. Financial data are not in constant dollars, which would be needed when further analysis is undertaken.

Practical implications

This paper illustrates the importance of the evaluation stage of the social marketing process. It demonstrates the practicality of being able to effectively draw upon previous evaluations to inform new program investors and social marketers at the prelaunch stage.

Originality/value

The conceptual framework and method present a novel approach to use evaluation data to guide new program funding and initiatives. It is offered with the hope that others might draw upon the ideas presented here and advance them.

Article
Publication date: 2 March 2015

Elizabeth King and Paul Nesbit

Abstract

Purpose

The purpose of this paper is to investigate ways to gain a deeper understanding of the evaluation challenge by reporting on insights about the impact of a leadership development program. It focusses on participants’ reflective post-course analysis of their learning, comparing this to a traditional evaluative analysis. Recently there has been a greater focus on programs to develop leaders who have the requisite cognitive and behavioral complexity to lead in challenging environments. However, models for the evaluation of such programs often rely on methodologies that assume the learning of specific skills rather than assessing how well participants are able to adapt cognitively and behaviorally to uncertain and complex environments.

Design/methodology/approach

The leadership development program was evaluated in two stages and the findings compared. Stage 1 elicited responses to the program using a traditional evaluation approach. Stage 2 involved 30 semi-structured interviews with the participants exploring the connections made between their development experience, work environment and approach to challenge.

Findings

Evaluation approaches which focus on assessing reflection about personal learning provide greater detail on the learning experience than traditional approaches to evaluation and can increase our understanding of the broader impact of leadership development programs. Current evaluation practices are mostly traditional despite dissatisfaction with outcomes. There are functional and financial benefits flowing from this practice, suggesting collusion with denial between the suppliers and purchasers of leadership development and posing a question of causation.

Originality/value

This study supports the use of qualitative evaluation techniques and in particular a focus on post-learning reflection to increase understanding of the impact of leadership development programs. The increased understanding provided by this type of evaluation can play a significant role in both the design of leader development programs and the creation of strategic alignment between business strategy, the purpose of leadership development interventions, learning objectives, program design and program evaluation.

Details

Journal of Management Development, vol. 34 no. 2
Type: Research Article
ISSN: 0262-1711

Article
Publication date: 17 August 2015

Roberto Linzalone and Giovanni Schiuma

Abstract

Purpose

This paper aims to review program and project evaluation models (EMs). Assessment of the evaluation model (meta-evaluation) is a critical step in evaluation, as it underpins a successful program or project evaluation. A wide and effective review of EMs is a basic, as well as fundamental, support in meta-evaluation that positively affects overall evaluation efficacy and efficiency. Despite a large number of reviews of EMs and a numerous population of EMs developed in heterogeneous project and program settings, the literature lacks comprehensive collections and reviews of EMs, which this paper provides as a basis for the assessment of EMs.

Design/methodology/approach

Through a systematic literature review carried out via the Internet by querying search engines, several models addressing program or project evaluation were identified and analyzed. Following a process of normalization, the results gathered were analyzed and compared according to key descriptive issues, and finally summarized and rationalized in a comprehensive frame.

Findings

In recent years, evaluation studies have focused on explaining the mechanisms that underlie the transformation of project and program outputs into socio-economic effects, arguing that making these mechanisms explicit makes it possible to understand why a project or program is successful, as well as to evaluate the extent of that success. To assess and explain program and project effects, a basic, though fundamental, role in evaluation is played by the EM. A wide and heterogeneous set of 57 EMs has been identified, defined and framed into typologies through systematic review research.

Originality/value

The approach to the review of EMs, and the definition of a boundary of interest for management and economic researchers and practitioners, constitute the original contribution of this paper.

Details

Measuring Business Excellence, vol. 19 no. 3
Type: Research Article
ISSN: 1368-3047

Article
Publication date: 9 December 2014

Bernie Pauly, Bruce Wallace and Kathleen Perkin

Abstract

Purpose

The purpose of this paper is to provide rationale, methodological guidance and clarity in the use of case study designs and theory-driven approaches to the evaluation of interventions to end homelessness.

Design/methodology/approach

Using an evaluation of a transitional shelter program aiming to support permanent exits from homelessness as an example, the authors show how case study designs and theory-driven evaluation are well suited to studying the effectiveness of homelessness interventions within the broader socio-political and economic context in which they are implemented.

Findings

Taking account of the context as part of program evaluation and research on homelessness interventions moves away from blaming programs and individuals for systemic failures toward a better understanding of how the context influences successes and failures. Case study designs are particularly useful for studying implementation and the context that influences program outcomes. Theory-driven evaluations, and the use of realist evaluation as an approach, can provide a broader understanding of how homelessness interventions work, particularly for whom and under what conditions. These methodological and theoretical approaches provide a consistent strategy for evaluating programs aimed at ending homelessness.

Originality/value

There is a need for greater capacity in the homelessness sector to apply approaches to evaluation that take into account the broader socio-political and economic context in which programs are being implemented. Through the use of a case example, the authors provide guidance for applying case study design and theory-driven approaches as a strategy for evaluating programs aimed at ending homelessness.

Details

Housing, Care and Support, vol. 17 no. 4
Type: Research Article
ISSN: 1460-8790
