Search results

1–10 of over 130,000
Book part
Publication date: 29 November 2014

Patricia Yee, Andrea Nee and Kamal Hamdan

Abstract

Through the perspectives of a project director/principal investigator and external evaluator, this chapter explores the methods, strategies, and processes used to design and conduct ongoing, comprehensive evaluation of the Math and Science Teacher Initiatives at California State University, Dominguez Hills. Initiatives include an undergraduate program for students interested in STEM teaching careers, multiple alternative route programs to teacher certification in math and science, and a fellowship program for master science teachers. Using a collaborative evaluation framework (O’Sullivan, 2004), the authors highlight the benefits of conducting multiprogram evaluation from a collaborative lens and describe the systematic processes used to engage stakeholders, from the design phase of the evaluation through data collection, analysis, and reporting of participant impact and outcomes. The strengths of the program and evaluation approach, along with the specific strategies and methods utilized, are explored. The chapter concludes with challenges, lessons learned, and best practices, as well as implications for the field of teacher education and leadership within a STEM context.

Details

Pathways to Excellence: Developing and Cultivating Leaders for the Classroom and Beyond
Type: Book
ISBN: 978-1-78441-116-9

Article
Publication date: 1 June 1997

James Guthrie and Linda English

Abstract

Performance measurement and programme evaluation have been promoted as a central mechanism of recent Australian public sector (APS) reform. Outlines recent reforms in the APS and identifies links between evaluation and performance information. Identifies the major issue of credibility, when performance information is produced internally and not verified externally. A lack of performance systems and standards can create difficulties for both internal and external programme evaluations. Concludes that: reforms introduced to evaluate performance in the APS were promoted with high expectations which have only partially been fulfilled; the present system is internally focused with a narrow role for evaluation and a lack of credibility because of the independence issue; the present systems associated with the performance approach and its evaluation are not providing enough information to deal with the tough questions of the effectiveness of government programmes. Proposes that a middle ground between internal and external programme evaluation strategies be adopted. This allows the strengths of internal evaluation to be retained. At the same time, it allows the possibility of improving programme evaluation by adding external independent verification and an extended effectiveness role.

Details

International Journal of Public Sector Management, vol. 10 no. 3
Type: Research Article
ISSN: 0951-3558

Article
Publication date: 1 March 1999

Kevin Brazil

Abstract

The purpose of this paper is to provide a framework for developing an effective evaluation practice within health care settings. Three features are reviewed: capacity building, the application of evaluation to program activities, and the utilization of evaluation recommendations. First, the organizational elements required to establish effective evaluation practice are reviewed, emphasizing that an organization’s capacity for evaluation develops over time and in stages. Second, a comprehensive evaluation framework is presented which demonstrates how evaluation practice can be applied to all aspects of a program’s life cycle, thus promoting the scope of evidence‐based decision making within an organization. Finally, factors which influence the adoption of evaluation recommendations by decision makers are reviewed, accompanied by strategies to promote the utilization of evaluation recommendations in organizational decision making.

Details

Leadership in Health Services, vol. 12 no. 1
Type: Research Article
ISSN: 1366-0756

Book part
Publication date: 21 May 2012

Brad Astbury

Abstract

This chapter examines the nature and role of theory in criminal justice evaluation. A distinction between theories of and theories for evaluation is offered to clarify what is meant by ‘theory’ in the context of contemporary evaluation practice. Theories of evaluation provide a set of prescriptions and principles that can be used to guide the design, conduct and use of evaluation. Theories for evaluation include programme theory and the application of social science theory to understand how and why criminal justice interventions work to generate desired outcomes. The fundamental features of these three types of theory are discussed in detail, with a particular focus on demonstrating their combined value and utility for informing and improving the practice of criminal justice evaluation.

Details

Perspectives on Evaluating Criminal Justice and Corrections
Type: Book
ISBN: 978-1-78052-645-4

Article
Publication date: 1 January 2006

Fatma Mizikaci

Abstract

Purpose

To propose an evaluation model for the quality implementations in higher education through an analysis of quality systems and program evaluation using a systems approach.

Design/methodology/approach

Theoretical background, research and practice of the quality systems in higher education and program evaluation are analysed in conjunction with the concepts of systems approach. The analysis leads to a systems approach‐based program evaluation model for quality implementation in higher education.

Findings

The three concepts, quality systems in higher education, program evaluation and systems approach, are found to be consistent and compatible with one another with regard to the goals and organizational structure of the higher education institutions. The proposed evaluation model provides a new perspective for higher education management for the effective and efficient implementation of the quality systems and program improvement.

Research limitations/implications

The implementation of the model in a real university setting is necessary for the clarification of the processes.

Practical implications

The study provides a constructive analysis of higher‐education‐related concepts, and a new dimension of quality systems and program evaluation is developed in the model. The approach comprises three subsystems: the “social system”, the “technical system” and the “managerial system”. The evaluation of quality in higher education requires inquiry into the components of these systems.

Originality/value

This paper proposes an innovative evaluation model integrating the systems approach into quality tools. The model is claimed to be the first to integrate the three approaches.

Details

Quality Assurance in Education, vol. 14 no. 1
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 1 August 1990

Alison J. Smith and John A. Piper

Abstract

Management training and development is currently in vogue. There appears to be a growing belief in the benefits of investment in training and development. When a market is buoyant, that is the time to consider and anticipate the consequences of a future downturn in demand. Such a downturn may bring increasing pressure to “justify” investment in training and development. There is a long-established academic body of knowledge on the subject of evaluating training and development. From research evidence and the authors’ experience, the sponsors and the providers of training and development pay scant attention to systematic evaluation of these activities and investments. It is the authors’ contention that when the market’s critical assessment of the value of training and development increases, there will be an increasing interest in evaluation. An overview of the history of evaluation traditions is provided and the current state of play is commented upon. It is noted that there is a shortfall between theory and practice. It is argued that evaluation is a worthwhile and important activity, and ways through the evaluation literature maze and the underpinnings of the activity are demonstrated, especially to management. Similarly, the literature on evaluation techniques is reviewed. Tables are provided which demonstrate areas of major activity and identify relatively uncharted waters. This monograph provides a resource whereby practitioners can choose techniques appropriate to the activity on which they are engaged. It highlights the process which should be undertaken to make that choice so that the needs of the major stakeholders in the exercise are fully met.

Details

Journal of European Industrial Training, vol. 14 no. 8
Type: Research Article
ISSN: 0309-0590

Article
Publication date: 10 May 2011

Marco Guerci and Marco Vinante

Abstract

Purpose

In recent years, the literature on program evaluation has examined multi‐stakeholder evaluation, but training evaluation models and practices have not generally taken this problem into account. The aim of this paper is to fill this gap.

Design/methodology/approach

This study identifies intersections between methodologies and approaches of participatory evaluation, and techniques and evaluation tools typically used for training. The study focuses on understanding the evaluation needs of the stakeholder groups typically involved in training programs. A training program financed by the European Social Fund in Italy is studied, using both qualitative and quantitative methodologies (in‐depth interviews and survey research).

Findings

The findings are as follows: first, the identification of evaluation dimensions not taken into account in the return-on-investment (ROI) model of training evaluation, but which are important for satisfying stakeholders’ evaluation needs; second, the identification of convergences/divergences between stakeholder groups’ evaluation needs; and third, the identification of latent variables and of convergences/divergences in the attribution of importance to them among stakeholder groups.

Research limitations/implications

The main limitations of the research are the following: first, the analysis was based on a single training program; second, the study focused only on the pre‐conditions for designing a stakeholder‐based evaluation plan; and third, the analysis considered the attribution of importance by the stakeholders without considering the development of consistent and reliable indicators.

Practical implications

These results suggest that different stakeholder groups have different evaluation needs and, in operational terms, are aware of the convergence and divergence between those needs.

Originality/value

The results of the research are useful in identifying: first, the evaluation elements that all stakeholder groups consider important; second, the evaluation elements considered important by one or more stakeholder groups, but not by all of them; and third, the latent variables which orient stakeholder groups in training evaluation.

Details

Journal of European Industrial Training, vol. 35 no. 4
Type: Research Article
ISSN: 0309-0590

Article
Publication date: 24 July 2009

Eduardo Tomé

Abstract

Purpose

The purpose of this paper is to analyze critically the most important methods that are used in the evaluation of human resource development (HRD).

Design/methodology/approach

The approach is to ask two questions: What are the methods available to define the impact of HRD in the economy? How can we evaluate the evaluations that have been made?

Findings

There are two main perspectives for evaluating any program: by results (counting occurrences) and by impacts (calculating the difference the investment made in society). The first type of method does not find the impact of the program; the second type does.

Research limitations/implications

The analysis is limited by the existing studies on HRD. The implication is that the conditions that underlie the existence of HRD programs define the type of evaluation that is used.

Originality/value

The results of this paper put the evaluation problem in a new perspective. The paper explains the difference between the methodologies (results and impacts) and the scientific fields used (public administration, social policy, HRD, KM, IC, microeconomics, HR economics) according to the type of person responsible: public administrator, private manager, HRD expert, knowledge manager, IC expert or microeconomist. The differences between the applications of those methodologies based on the type of funding – private, public or external – are also explained.

Details

Journal of European Industrial Training, vol. 33 no. 6
Type: Research Article
ISSN: 0309-0590

Article
Publication date: 22 June 2022

Nil Paşaoğluları Şahin and Özlem Olgaç Türker

Abstract

Purpose

This study aims to explore ways of developing, implementing and validating a new framework and criteria for self-evaluation of programme curricula, with specific reference to the quality assurance certification process in a particular case.

Design/methodology/approach

The framework is developed using a case study research methodology and implemented based on criteria extraction through the triangulation of indicators achieved from internal goals and external directives.

Findings

The findings reveal that this is an improvement-led framework that can be adapted to other contexts during the quality assurance processes to facilitate periodical programme evaluations with a focus on the curriculum.

Research limitations/implications

The proposed practical tool is developed for the programme evaluation process, with a curricular focus, during the quality assurance certification process of an interior architecture programme, while also illuminating the processes for the periodical self-evaluations of other institutions. The framework depends both on institution-specific internal and external directives and on fulfilling the curriculum-related criteria of the European Standards and Guidelines for Quality Assurance.

Originality/value

Within quality assurance processes, despite the existence of external quality assurance mechanisms, there is a shortage of self-evaluation tools for internal quality assurance procedures that allow programme-specific qualities to be identified. The proposed framework is developed as an example of a self-tailored internal quality assurance tool and process for educational institutions to reveal their unique qualities.

Details

Quality Assurance in Education, vol. 30 no. 4
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 17 June 2021

Alicja Pawluczuk, JeongHyun Lee and Attlee Munyaradzi Gamundani

Abstract

Purpose

The aim of this paper is to examine existing gender digital inclusion evaluation guidance and to propose recommendations for future research on its evaluation. Despite recent progress towards gender equality and women’s empowerment, women’s access to, use of and benefits from digital technologies remain limited owing to economic, social and cultural obstacles. Addressing the existing gender digital divide is critical to global efforts towards the United Nations’ Sustainable Development Goals (SDGs). In recent years, there has been a global increase in gender digital inclusion programmes for girls and women; these programmes serve as a mechanism to learn about gender-specific digital needs and inform future digital inclusion efforts. Evaluation reports of gender digital inclusion programmes can produce critical insights into girls’ and women’s learning needs and aspirations, including what works and what does not when engaging girls and women in information and communications technologies. While many accounts highlight why gender digital inclusion programmes are important, there is limited knowledge on how to evaluate their impact.

Design/methodology/approach

The study is based on a thematic analysis of existing gender digital inclusion evaluation guidance.

Findings

The thematic analysis suggests three points of future focus for this evaluation process: context-specific understanding of gender digital inclusion programmes; transparency and accountability of the evaluation process and its results; and tensions between evaluation targets and empowerment of evaluation participants.

Originality/value

The authors propose recommendations for gender digital inclusion evaluation practice and areas for future research.

Details

Digital Policy, Regulation and Governance, vol. 23 no. 3
Type: Research Article
ISSN: 2398-5038
