Search results

1 – 10 of over 110000
Book part
Publication date: 29 November 2014

Patricia Yee, Andrea Nee and Kamal Hamdan

Abstract

Through the perspectives of a project director/principal investigator and external evaluator, this chapter explores the methods, strategies, and processes used to design and conduct ongoing, comprehensive evaluation of the Math and Science Teacher Initiatives at California State University, Dominguez Hills. Initiatives include an undergraduate program for students interested in STEM teaching careers, multiple alternative route programs to teacher certification in math and science, and a fellowship program for master science teachers. Using a collaborative evaluation framework (O’Sullivan, 2004), the authors highlight the benefits of conducting multiprogram evaluation through a collaborative lens and describe the systematic processes used to engage stakeholders, from the design phase of the evaluation through data collection, analysis, and reporting of participant impact and outcomes. The strengths of the program and evaluation approach, along with specific strategies and methods utilized, are explored. The chapter concludes with challenges, lessons learned, and best practices, as well as implications for the field of teacher education and leadership within a STEM context.

Details

Pathways to Excellence: Developing and Cultivating Leaders for the Classroom and Beyond
Type: Book
ISBN: 978-1-78441-116-9

Article
Publication date: 1 June 1997

James Guthrie and Linda English

Downloads
3297

Abstract

Performance measurement and programme evaluation have been promoted as a central mechanism of recent Australian public sector (APS) reform. Outlines recent reforms in the APS and identifies links between evaluation and performance information. Identifies the major issue of credibility, when performance information is produced internally and not verified externally. A lack of performance systems and standards can create difficulties for both internal and external programme evaluations. Concludes that: reforms introduced to evaluate performance in the APS were promoted with high expectations which have only partially been fulfilled; the present system is internally focused with a narrow role for evaluation and a lack of credibility because of the independence issue; the present systems associated with the performance approach and its evaluation are not providing enough information to deal with the tough questions of the effectiveness of government programmes. Proposes that a middle ground between internal and external programme evaluation strategies be adopted. This allows the strengths of internal evaluation to be retained. At the same time, it allows the possibility of improving programme evaluation by adding external independent verification and an extended effectiveness role.

Details

International Journal of Public Sector Management, vol. 10 no. 3
Type: Research Article
ISSN: 0951-3558

Keywords

Article
Publication date: 1 March 1999

Kevin Brazil

Downloads
1316

Abstract

The purpose of this paper is to provide a framework for developing an effective evaluation practice within health care settings. Three features are reviewed: capacity building, the application of evaluation to program activities and the utilization of evaluation recommendations. First, the organizational elements required to establish effective evaluation practice are reviewed, emphasizing that an organization’s capacity for evaluation develops over time and in stages. Second, a comprehensive evaluation framework is presented which demonstrates how evaluation practice can be applied to all aspects of a program’s life cycle, thus extending the scope of evidence‐based decision making within an organization. Finally, factors which influence the adoption of evaluation recommendations by decision makers are reviewed, accompanied by strategies to promote the utilization of evaluation recommendations in organizational decision making.

Details

Leadership in Health Services, vol. 12 no. 1
Type: Research Article
ISSN: 1366-0756

Keywords

Article
Publication date: 17 June 2021

Alicja Pawluczuk, JeongHyun Lee and Attlee Munyaradzi Gamundani

Abstract

Purpose

The aim of this paper is to examine existing gender digital inclusion evaluation guidance and to propose future research recommendations for its evaluation. Despite recent progress in gender equality and women’s empowerment movements, women’s access to, use of and benefits from digital technologies remain limited owing to economic, social and cultural obstacles. Addressing the existing gender digital divide is critical to global efforts towards the United Nations’ Sustainable Development Goals (SDGs). In recent years, there has been a global increase in gender digital inclusion programmes for girls and women; these programmes serve as a mechanism to learn about gender-specific digital needs and inform future digital inclusion efforts. Evaluation reports of gender digital inclusion programmes can produce critical insights into girls’ and women’s learning needs and aspirations, including what works and what does not when engaging girls and women in information and communications technologies. While there are many accounts highlighting why gender digital inclusion programmes are important, there is limited knowledge on how to evaluate their impact.

Design/methodology/approach

The thematic analysis suggests three points to consider for gender digital inclusion programme evaluation: context-specific understanding of gender digital inclusion programmes; transparency and accountability of the evaluation process and its results; and tensions between evaluation targets and empowerment of evaluation participants.

Findings

The thematic analysis suggests three points of future focus for this evaluation process: context-specific understanding of gender digital inclusion programmes; transparency and accountability of the evaluation process and its results; and tensions between evaluation targets and empowerment of evaluation participants.

Originality/value

The authors propose recommendations for gender digital inclusion evaluation practice and areas for future research.

Details

Digital Policy, Regulation and Governance, vol. 23 no. 3
Type: Research Article
ISSN: 2398-5038

Keywords

Book part
Publication date: 21 May 2012

Brad Astbury

Abstract

This chapter examines the nature and role of theory in criminal justice evaluation. A distinction between theories of and theories for evaluation is offered to clarify what is meant by ‘theory’ in the context of contemporary evaluation practice. Theories of evaluation provide a set of prescriptions and principles that can be used to guide the design, conduct and use of evaluation. Theories for evaluation include programme theory and the application of social science theory to understand how and why criminal justice interventions work to generate desired outcomes. The fundamental features of these three types of theory are discussed in detail, with a particular focus on demonstrating their combined value and utility for informing and improving the practice of criminal justice evaluation.

Details

Perspectives on Evaluating Criminal Justice and Corrections
Type: Book
ISBN: 978-1-78052-645-4

Article
Publication date: 30 October 2020

Yanina Kowszyk and Frank Vanclay

Abstract

Purpose

Improvement in the evaluation methodologies used in the public policy and development fields has increased the amount of evidence-based information available to decision makers. This helps firms evaluate the impacts of their social investments. However, it is not clear whether the business sector is interested in using these methods. This paper aims to describe the level of interest in, knowledge of and preferences relating to the impact evaluation of corporate social responsibility (CSR) programs by managers in Latin American companies and foundations.

Design/methodology/approach

A survey of 115 companies and foundations in 15 countries in Latin America was conducted in 2019.

Findings

The results indicated that most respondents believed that quantitative impact evaluation could address concerns about CSR program outcomes. However, monitoring and evaluation were primarily seen to be for tracking program objectives rather than for making strategic decisions about innovations to enhance the achievement of outcomes. Decision-making tended to respond to community demands. The main challenges to increasing the use of impact evaluation were the lack of skills and knowledge of management staff and the methodological complexity of evaluation designs. We conclude that there needs to be increased awareness about: the appropriate understanding of social outcomes; the benefits of evaluation; when impact evaluation is useful; how to prepare an evaluation budget; and the effective use of rigorous evidence to inform program design.

Originality/value

Acceptance by the business sector of quantitative measurement of the social impact of CSR programs will lead to improved outcomes from social investment programs.

Details

Corporate Governance: The International Journal of Business in Society, vol. 21 no. 2
Type: Research Article
ISSN: 1472-0701

Keywords

Article
Publication date: 1 January 2006

Fatma Mizikaci

Downloads
9586

Abstract

Purpose

To propose an evaluation model for the quality implementations in higher education through an analysis of quality systems and program evaluation using a systems approach.

Design/methodology/approach

Theoretical background, research and practice of the quality systems in higher education and program evaluation are analysed in conjunction with the concepts of the systems approach. The analysis leads to a systems approach‐based program evaluation model for quality implementation in higher education.

Findings

The three concepts, quality systems in higher education, program evaluation and systems approach, are found to be consistent and compatible with one another with regard to the goals and organizational structure of the higher education institutions. The proposed evaluation model provides a new perspective for higher education management for the effective and efficient implementation of the quality systems and program improvement.

Research limitations/implications

The implementation of the model in a real university setting is necessary for the clarification of the processes.

Practical implications

The study provides a constructive analysis of higher‐education‐related concepts, and a new dimension of quality systems and program evaluation is developed in the model. The approach comprises three subsystems: a “social system”, a “technical system” and a “managerial system”. The evaluation of quality in higher education requires inquiry into the components of these systems.

Originality/value

This paper proposes an innovative evaluation model integrating the systems approach into quality tools. The model is claimed to be the first in integrating the three approaches.

Details

Quality Assurance in Education, vol. 14 no. 1
Type: Research Article
ISSN: 0968-4883

Keywords

Article
Publication date: 1 August 1990

Alison J. Smith and John A. Piper

Abstract

Management training and development is currently in vogue. There appears to be a growing belief in the benefits of investment in training and development. A buoyant market is the time to consider and anticipate the consequences of a future downturn in demand. Such a downturn may bring increasing pressure to “justify” investment in training and development. There is a long-established academic body of knowledge on the evaluation of training and development. From research evidence and the authors’ experience, the sponsors and providers of training and development pay scant attention to systematic evaluation of these activities and investments. It is the authors’ contention that when the market’s critical assessment of the value of training and development increases, so will interest in evaluation. An overview of the history of evaluation traditions is provided and the current state of play is discussed; a shortfall between theory and practice is noted. It is argued that evaluation is a worthwhile and important activity, and routes through the evaluation literature maze and the underpinnings of the activity are demonstrated, especially to management. The literature on evaluation techniques is similarly reviewed. Tables are provided which demonstrate areas of major activity and identify relatively uncharted waters. This monograph provides a resource whereby practitioners can choose techniques appropriate to the activity on which they are engaged. It highlights the process which should be undertaken to make that choice so that the needs of the major stakeholders in the exercise are fully met.

Details

Journal of European Industrial Training, vol. 14 no. 8
Type: Research Article
ISSN: 0309-0590

Keywords

Article
Publication date: 10 May 2011

Marco Guerci and Marco Vinante

Downloads
6492

Abstract

Purpose

In recent years, the literature on program evaluation has examined multi‐stakeholder evaluation, but training evaluation models and practices have not generally taken this problem into account. The aim of this paper is to fill this gap.

Design/methodology/approach

This study identifies intersections between methodologies and approaches of participatory evaluation, and techniques and evaluation tools typically used for training. The study focuses on understanding the evaluation needs of the stakeholder groups typically involved in training programs. A training program financed by the European Social Fund in Italy is studied, using both qualitative and quantitative methodologies (in‐depth interviews and survey research).

Findings

The findings are as follows: first, identification of evaluation dimensions that are not taken into account in the return‐on‐investment model of training evaluation but are important for satisfying stakeholders’ evaluation needs; second, identification of convergences/divergences between stakeholder groups’ evaluation needs; and third, identification of latent variables and of convergences/divergences in the importance stakeholder groups attribute to them.

Research limitations/implications

The main limitations of the research are the following: first, the analysis was based on a single training program; second, the study focused only on the pre‐conditions for designing a stakeholder‐based evaluation plan; and third, the analysis considered the attribution of importance by the stakeholders without considering the development of consistent and reliable indicators.

Practical implications

These results suggest that different stakeholder groups have different evaluation needs and, in operational terms, are aware of the convergences and divergences between those needs.

Originality/value

The results of the research are useful in identifying: first, the evaluation elements that all stakeholder groups consider important; second, evaluation elements considered important by one or more stakeholder groups, but not by all of them; and third, latent variables which orient stakeholders groups in training evaluation.

Details

Journal of European Industrial Training, vol. 35 no. 4
Type: Research Article
ISSN: 0309-0590

Keywords

Article
Publication date: 24 July 2009

Eduardo Tomé

Downloads
4000

Abstract

Purpose

The purpose of this paper is to analyze critically the most important methods that are used in the evaluation of human resource development (HRD).

Design/methodology/approach

The approach is to ask two questions: What are the methods available to define the impact of HRD in the economy? How can we evaluate the evaluations that have been made?

Findings

There are two main perspectives for evaluating any program: by results (counting occurrences) and by impacts (calculating the difference the investment made in society). The first type of method does not capture the impact of the program; the second type does.

Research limitations/implications

The analysis is limited by the existing studies on HRD. The implications are that the conditions that underlie the existence of HRD programs define the type of evaluation that is used.

Originality/value

The results of this paper put the evaluation problem in a new perspective. It explains the difference between methodologies (results and impacts) and scientific fields used (public administration, social policy, HRD, KM, IC, microeconomics, HR economics) by the type of person responsible: public administrator, private manager, HRD expert, knowledge manager, IC expert, microeconomist. The differences between the applications of those methodologies based on the type of funding – private, public, external – are also explained.

Details

Journal of European Industrial Training, vol. 33 no. 6
Type: Research Article
ISSN: 0309-0590

Keywords
