Search results

1 – 10 of 129
Book part
Publication date: 4 September 2003

Arch G. Woodside and Marcia Y. Sakai

Abstract

A meta-evaluation is an assessment of evaluation practices. Meta-evaluations include assessments of validity and usefulness of two or more studies that focus on the same issues. Every performance audit is grounded explicitly or implicitly in one or more theories of program evaluation. A deep understanding of alternative theories of program evaluation is helpful to gain clarity about sound auditing practices. We present a review of several theories of program evaluation.

This study includes a meta-evaluation of seven government audits on the efficiency and effectiveness of tourism departments and programs. The seven tourism-marketing performance audits are program evaluations for Missouri, North Carolina, Tennessee, Minnesota, and Australia, plus two for Hawaii. Most of these audits are negative performance assessments. Although these audits are more useful than none at all, the central conclusion of the meta-evaluation is that most of the audit reports are inadequate assessments: they are too limited in the issues examined, insufficiently grounded in relevant evaluation theory and practice, and fail to include recommendations that, if implemented, would result in substantial increases in performance.

Details

Evaluating Marketing Actions and Outcomes
Type: Book
ISBN: 978-0-76231-046-3

Book part
Publication date: 12 October 2016

Arch G. Woodside, Xin Xia, John C. Crotts and Jeremy C. Clement

Abstract

The study here helps to fill the gap between the current practices of management performance audits for firms and government agencies. The study advances recent theories of program evaluation and marketing management auditing. While the application in this chapter refers to government agencies managing destination marketing programs (tourism agencies), the algorithmic model construction is applicable to all management audits. The study applies the perspectives from two streams of theory to describe five relevant activities for managing destination marketing programs: scanning, planning, implementation, assessing, and administering. The analysis proposes impact assessments to improve the management performance of destination marketing organizations (DMOs) via checklists for assessing the quality of information in tourism-management performance audits. Checklists can serve as a management tool for performance auditors and for DMO executives to enhance quality in executing destination marketing programs. A meta-evaluation of 10 tourism management audit reports identifies good and bad practices. The findings indicate that substantial improvements are possible in the practice of DMOs' management performance auditing, and the proposed checklist may ensure both high-quality performance audit reports and improved performance in DMO practices.

Details

Making Tough Decisions Well and Badly: Framing, Deciding, Implementing, Assessing
Type: Book
ISBN: 978-1-78635-120-3

Keywords

Details

A Developmental and Negotiated Approach to School Self-Evaluation
Type: Book
ISBN: 978-1-78190-704-7

Article
Publication date: 4 November 2014

Leonor Gaspar Pinto

Abstract

Purpose

The purpose of this paper is to characterize the performance evaluation dynamics developed in the Lisbon Municipal Libraries Network (LMLN) over a two-decade period (1989-2009), using a specific model and tools of (meta-)analysis.

Design/methodology/approach

Based on an eminently qualitative methodology supported by a combination of research methods (literature review, construction and application of conceptual models for analysis, and case study), the author examined the LMLN's performance evaluation progress, focusing on the diachronic study of evaluative processes – the performance evaluation dynamics.

Findings

Between 1989 and 2009, the LMLN developed four main performance evaluation dynamics. The most significant results that emerged from the examination of these dynamics relate to the following elements of the model of analysis Dynamics and Impacts of Library Performance Evaluation (DILPE): evaluation objects and methods; organization; and dissemination of results. The study also emphasized the importance of several factors for the sustainability of these dynamics: the presence of a permanent coordination team with the right competences, reporting directly to the head of library services; the existence of a culture of assessment; and the commitment of leadership to performance evaluation.

Originality/value

The meta-evaluative approach, and particularly the focus on the long-term development of evaluative theories and practices, contributes to the enlargement of the international corpus on library performance evaluation. In addition, the analytical model and conceptual tools created might be useful to other researchers and practitioners willing to meta-evaluate library performance evaluation dynamics.

Article
Publication date: 1 February 2000

Pia Borlund

Abstract

This paper presents a set of basic components which constitutes the experimental setting intended for the evaluation of interactive information retrieval (IIR) systems, the aim of which is to facilitate evaluation of IIR systems in a way which is as close as possible to realistic IR processes. The experimental setting consists of three components: (1) the involvement of potential users as test persons; (2) the application of dynamic and individual information needs; and (3) the use of multidimensional and dynamic relevance judgements. Hidden under the information need component is the essential central sub-component, the simulated work task situation, the tool that triggers the (simulated) dynamic information needs. This paper also reports on the empirical findings of the meta-evaluation of the application of this sub-component, the purpose of which is to discover whether the application of simulated work task situations to future evaluation of IIR systems can be recommended. Investigations are carried out to determine whether any search behavioural differences exist between test persons' treatment of their own real information needs versus simulated information needs. The hypothesis is that, if no difference exists, one can correctly substitute real information needs with simulated information needs through the application of simulated work task situations. The empirical results of the meta-evaluation provide positive evidence for the application of simulated work task situations to the evaluation of IIR systems. The results also indicate that tailoring work task situations to the group of test persons is important in motivating them. Furthermore, the results of the evaluation show that different versions of semantic openness of the simulated situations make no difference to the test persons' search treatment.

Details

Journal of Documentation, vol. 56 no. 1
Type: Research Article
ISSN: 0022-0418

Keywords

Content available
Article
Publication date: 31 May 2021

Jennifer L. Thoegersen and Pia Borlund

Abstract

Purpose

The purpose of this paper is to report a study of how research literature addresses researchers' attitudes toward data repository use. In particular, the authors are interested in how the term data sharing is defined, how data repository use is reported and whether there is need for greater clarity and specificity of terminology.

Design/methodology/approach

To study how the literature addresses researcher data repository use, relevant studies were identified by searching Library Information Science and Technology Abstracts, Library and Information Science Source, Thomson Reuters' Web of Science Core Collection and Scopus. A total of 62 studies were identified for inclusion in this meta-evaluation.

Findings

The study shows a need for greater clarity and consistency in the use of the term data sharing in future studies to better understand the phenomenon and allow for cross-study comparisons. Furthermore, most studies did not address data repository use specifically. In most analyzed studies, it was not possible to segregate results relating to sharing via public data repositories from other types of sharing. When sharing in public repositories was mentioned, the prevalence of repository use varied significantly.

Originality/value

Researchers' data sharing is of great interest to library and information science research and practice, informing academic libraries that are implementing data services to support these researchers. This study explores how the literature approaches this issue, especially the use of data repositories, which is strongly encouraged. This paper identifies the potential for additional study focused on this area.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 9 May 2016

Pia Borlund

Abstract

Purpose

The purpose of this paper is to report a study of how the test instrument of a simulated work task situation is used in empirical evaluations of interactive information retrieval (IIR) and reported in the research literature. In particular, the author is interested to learn whether the requirements of how to employ simulated work task situations are followed, and whether these requirements call for further highlighting and refinement.

Design/methodology/approach

In order to study how simulated work task situations are used, the research literature in question is identified. This is done partly via citation analysis using Web of Science®, and partly by systematic search of online repositories. On this basis, 67 individual publications were identified, and they constitute the sample of analysis.

Findings

The analysis reveals a need to clarify how simulated work task situations should be used in IIR evaluations, in particular with respect to the design and creation of realistic simulated work task situations. There is a lack of tailoring of the simulated work task situations to the test participants. Likewise, the requirement to include the test participants' personal information needs is neglected. Further, there is a need to add and emphasise a requirement to report the simulated work task situations used when reporting IIR studies.

Research limitations/implications

Insight about the use of simulated work task situations has implications for test design of IIR studies and hence the knowledge base generated on the basis of such studies.

Originality/value

Simulated work task situations are widely used in IIR studies, and the present study is the first comprehensive study of the intended and unintended use of this test instrument since its introduction in the late 1990s. The paper addresses the need to carefully design and tailor simulated work task situations to suit the test participants in order to obtain the intended authentic and realistic IIR under study.

Details

Journal of Documentation, vol. 72 no. 3
Type: Research Article
ISSN: 0022-0418

Keywords

Book part
Publication date: 25 July 2008

Arch G. Woodside and Marcia Y. Sakai

Abstract

The present chapter includes a case study that describes and analyzes three performance audit reports over a three-decade period for one U.S. state government's destination management organization's (DMO) actions and outcomes. This report extends prior studies (Woodside & Sakai, 2001, 2003) that support two conclusions: (1) the available independent performance audits of DMOs' actions and outcomes indicate that DMOs frequently perform poorly and fail to meaningfully assess the impacts of their own actions and (2) the audits themselves are shallow and often fail to provide information on DMOs' actions and outcomes relating to these organizations' largest marketing expenditures. The chapter calls for embracing a strategy shift in designing program evaluations by both government departments responsible for managing destinations' tourism marketing programs and all government auditing agencies conducting future management performance audits. The chapter offers a "tourism performance audit template" as a tool both for strategic planning by destination management organizations and for evaluating DMOs' planning and implementation strategies. The chapter includes an appendix – a training exercise in using the audit template – and invites the reader to download a tourism performance audit report of a destination marketing organization and to apply the template after reading the report.

Details

Advances in Culture, Tourism and Hospitality Research
Type: Book
ISBN: 978-1-84950-522-2

Article
Publication date: 1 April 1991

Vanja Orlans

Abstract

Problems arise in attempting to evaluate Employee Assistance Programmes (EAPs) in the widest sense. Terms such as "evaluation" and "benefits" are regarded as potentially complex, as is the adequate definition of what constitutes "employee assistance". Studies concerned both with alcohol programmes and with stress management are reviewed, and specific problems are highlighted. Methodology and the appropriateness of the traditional scientific method are much discussed. "Meta-evaluation" is proposed to run concurrently with the unravelling of methodological questions, in order to address the interfacing of programmes with other sections of the organisation and the extent to which environmental and organisational factors, rather than individuals, are targeted for change.

Details

Employee Counselling Today, vol. 3 no. 4
Type: Research Article
ISSN: 0955-8217

Keywords

Article
Publication date: 17 August 2015

Roberto Linzalone and Giovanni Schiuma

Abstract

Purpose

This paper aims to review program and project evaluation models (EMs). The assessment of the evaluation model (meta-evaluation) is a critical step in evaluation, as it is the basis of a successful program/project evaluation. A wide and effective review of EMs is a basic, as well as fundamental, support in meta-evaluation that positively affects overall evaluation efficacy and efficiency. Despite the large number of reviews of EMs, and the numerous EMs developed in heterogeneous project and program settings, the literature lacks comprehensive collections and reviews of EMs; this paper addresses that gap to provide a basis for the assessment of EMs.

Design/methodology/approach

Through a systematic literature review carried out via the Internet by querying search engines, several models addressing program or project evaluation have been identified and analyzed. Following a process of normalization, the results gathered have been analyzed and compared according to key descriptive issues, and finally summarized and rationalized in a comprehensive frame.

Findings

In recent years, evaluation studies have focused on explaining the mechanisms that underlie the transformation of projects' and programs' outputs into socio-economic effects, arguing that making these mechanisms explicit makes it possible to understand why a project or program is successful, as well as to evaluate its extent. To assess and explain programs' and projects' effects, a basic, although fundamental, role in evaluation is played by the EM. Following a systematic review, a wide and heterogeneous set of 57 EMs has been identified, defined and framed in typologies.

Originality/value

The approach to the review of EMs, and the definition of a boundary of interest for management and economic researchers and practitioners, represent the original contribution of this paper.

Details

Measuring Business Excellence, vol. 19 no. 3
Type: Research Article
ISSN: 1368-3047

Keywords
