Search results

1–10 of over 184,000
Article
Publication date: 3 August 2015

Lili-Anne Kihn and Eeva-Mari Ihantola

Abstract

Purpose

This paper aims to address the reporting of validation and evaluation criteria in qualitative management accounting studies, which is a topic of critical debate in qualitative social science research. The objective of this study is to investigate the ways researchers have reported the use of evaluation criteria in qualitative management accounting studies and whether they are associated with certain paradigmatic affiliations.

Design/methodology/approach

Building on the work of Eriksson and Kovalainen (2008, Qualitative Methods in Business Research, London: Sage), the following three approaches are examined: the adoption of the classic concepts of validity and reliability, the use of alternative concepts, and the abandonment of general evaluation criteria. A content analysis of 212 case and field studies published from 2006 to February 2015 was conducted to capture the most recent frontiers of knowledge.

Findings

The key empirical results of this study provide partial support for the theoretical expectations. They specify and refine Eriksson and Kovalainen’s (2008) classification system, first, by identifying a new approach to evaluation and validation and, second, by showing mixed results on the paradigmatic consistency in the use of evaluation criteria.

Research limitations/implications

This paper is not necessarily exhaustive or representative of all the evaluation criteria developed; the authors focused on the explicit reporting of criteria only and the findings cannot be generalized. Somewhat different results might have been obtained if other journals, other fields of research or a longer period were considered.

Practical implications

The findings of this study enhance the knowledge of alternative approaches and criteria to validation and evaluation. The findings can aid both in the evaluation of management accounting research and in the selection of appropriate evaluation approaches and criteria.

Originality/value

This paper presents a synthesis of the literature (Table I) and new empirical findings that are potentially useful for both academic scholars and practitioners.

Details

Qualitative Research in Accounting & Management, vol. 12 no. 3
Type: Research Article
ISSN: 1176-6093

Article
Publication date: 27 April 2020

Xiaojuan Liu, Yu Wei and Zhuojing Zhao

Abstract

Purpose

The purpose of this study is to explore informetrics researchers' use of social media for academic activities, their attitudes to the applicability of altmetrics in research evaluation, the factors influencing their attitudes, and the main opportunities and weaknesses of using altmetrics.

Design/methodology/approach

A questionnaire survey was conducted with researchers who participated in the 16th International Conference on Scientometrics and Informetrics (ISSI 2017), yielding a sample of 125 respondents.

Findings

Progressively more researchers are using social media for different types of academic activities. Many factors affect informetrics researchers' attitudes toward research evaluation in different application scenarios. Researchers who have studied altmetrics, and those who began using social media platforms recently or use them frequently, hold more positive attitudes. Academic users and social users show significantly different attitudes toward altmetrics across disciplines and application scenarios.

Research limitations/implications

This study focused only on the 125 informetrics researchers who participated in ISSI 2017. We relied mainly on a questionnaire and did not conduct in-depth interviews to probe the researchers' views.

Originality/value

Informetrics researchers are both participants in social media and among the main researchers of altmetrics. Previous research has examined their use of social media; this study combines that use with their attitudes toward altmetrics to explore the value of altmetrics from a particular perspective. The paper also offers suggestions for applying altmetrics in research evaluation.

Details

Aslib Journal of Information Management, vol. 72 no. 3
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 1 September 1999

John Ovretveit

Abstract

High or low quality is as much a result of how care systems are organised as of individual clinicians' performance. Failing to introduce new care organisation or quality methods which research shows to be effective is as serious an omission as failing to act on poor clinical performance. Managers make many decisions about policies and organisation which affect the quality of care, but they rarely use evaluation research in making these decisions. Such research is difficult to find, is produced using many different methods that are difficult for non-experts to assess, is often of poor quality, and is difficult to translate to the local setting. However, managers can develop an evaluation-informed practice and make greater use of evaluation research in decisions with high cost or risk implications. The paper explains why the model of evidence-based medicine is not appropriate, proposes instead a practical four-step approach, and shows how managers can use evaluation in everyday practice.

Details

British Journal of Clinical Governance, vol. 4 no. 3
Type: Research Article
ISSN: 1466-4100

Book part
Publication date: 29 September 2015

Murray Saunders, Cristina Sin and Steven Dempster

Abstract

This chapter will focus on the use of evaluative research in higher education policy analysis. The approach will be illustrated by reference to Scottish higher education policy, with particular reference to the longitudinal evaluative research on support for teaching and learning (T&L) under the Quality Enhancement Framework (QEF). The chapter will discuss the features of the research process which are shaped by evaluation theory. We adopt a theoretical position on policy research which foregrounds the situated experience of policy as a core research focus. Policy is depicted as being underscored by an implicit theory of change, which is used to structure and orientate the research focus. The design of the research is characterised by the involvement of potential users of the research output, with implications for the way in which findings are articulated, presented and ultimately used, along with aspects of the evaluative research design. The case study of the QEF will be contextualised, and the intersection between the design features and theoretical approaches, and the use and usability of research outputs, will be established.

Details

Theory and Method in Higher Education Research
Type: Book
ISBN: 978-1-78560-287-0

Book part
Publication date: 4 September 2003

Arch G. Woodside and Marcia Y. Sakai

Abstract

A meta-evaluation is an assessment of evaluation practices. Meta-evaluations include assessments of validity and usefulness of two or more studies that focus on the same issues. Every performance audit is grounded explicitly or implicitly in one or more theories of program evaluation. A deep understanding of alternative theories of program evaluation is helpful to gain clarity about sound auditing practices. We present a review of several theories of program evaluation.

This study includes a meta-evaluation of seven government audits of the efficiency and effectiveness of tourism departments and programs. The seven tourism-marketing performance audits are program evaluations for Missouri, North Carolina, Tennessee, Minnesota, Australia, and two for Hawaii. The majority of these audits are negative performance assessments. However, although these audits are more useful than none at all, the central conclusion of the meta-evaluation is that most of the audit reports are inadequate assessments: they are too limited in the issues examined, are not sufficiently grounded in relevant evaluation theory and practice, and fail to include recommendations that, if implemented, would result in substantial increases in performance.

Details

Evaluating Marketing Actions and Outcomes
Type: Book
ISBN: 978-0-76231-046-3

Article
Publication date: 1 August 1999

Erica Wimbush

Abstract

Training in research and evaluation skills is a frequently expressed need among health promotion practitioners. Research conducted in Scotland among health promotion specialists and their managers showed that training in research on its own would be an insufficient response. In this paper, it is argued that there is a need to develop a broader strategy which seeks to strengthen research capacity within health promotion practice settings, rather than simply offering training to improve practitioners’ research skills. This will help to improve the quality of research conducted in practice settings and contribute to building an evidence base for health promotion. A broader professional development strategy for health promotion research in Scotland is proposed which utilizes a range of learning routes and delivery mechanisms. This will be backed up by the establishment of a broad strategic research partnership which brings together practitioners, researchers and policy‐makers so as to develop a better understanding of what evaluation evidence is needed and who is contributing what.

Details

Health Education, vol. 99 no. 4
Type: Research Article
ISSN: 0965-4283

Open Access
Article
Publication date: 16 August 2022

Patricia Lannen and Lisa Jones

Abstract

Purpose

Calls for the development and dissemination of evidence-based programs to support children and families have been increasing for decades, but progress has been slow. This paper aims to argue that a singular focus on evaluation has limited the ways in which science and research is incorporated into program development, and advocate instead for the use of a new concept, “scientific accompaniment,” to expand and guide program development and testing.

Design/methodology/approach

A heuristic is provided to guide research–practice teams in assessing the program’s developmental stage and level of evidence.

Findings

In an idealized pathway, scientific accompaniment begins early in program development, with ongoing input from both practitioners and researchers, resulting in programs that are both effective and scalable. The heuristic also provides guidance for how to “catch up” on evidence when program development and science utilization are out of sync.

Originality/value

While implementation models provide ideas for improving the use of evidence-based practices, social service programs suffer from a significant lack of research and evaluation. Evaluation resources are typically not used by social service program developers, and collaboration with researchers happens late in program development, if at all. There are few resources or models that encourage and guide the use of science and evaluation across program development.

Details

Journal of Children's Services, vol. 17 no. 4
Type: Research Article
ISSN: 1746-6660

Article
Publication date: 1 March 1984

Dan Gowler and Karen Legge

Abstract

Acts of evaluation—the assessment, against implicit or explicit criteria, of the value of individuals, objects, situations and outcomes—form the core of any high-discretion job, where choices have to be made and decisions taken in a world of scarce resources. On a day-by-day basis, informal evaluations pervade the job of any manager or administrator, but these are often supplemented by formal evaluation research studies—whether technology assessment, investment appraisal, the evaluation of markets and competitors or, in the case of personnel managers, the evaluation of training and development and of organisational change programmes generally. These formal studies include the evaluations conducted by “professional” evaluation researchers, such as those engaged in evaluating federally funded US social change programmes, those drawn from commercial consultancy agencies or occupying an internal consultant's role within a large company, and those applied social scientists, working in university departments and research institutions, interested in issues of work-system and organisational design. Many articles published in Personnel Review attest to this concern with evaluation research and, indeed, expertise in the conduct of formal evaluation studies has been identified as a major weapon in the armoury of personnel managers who adopt a “conformist innovator” approach to developing their power and influence.

Details

Personnel Review, vol. 13 no. 3
Type: Research Article
ISSN: 0048-3486

Article
Publication date: 23 May 2022

Nedra Ibrahim, Anja Habacha Chaibi and Henda Ben Ghézala

Abstract

Purpose

Given the magnitude of the literature, a researcher must be selective about research papers and publications in general. In other words, only papers that meet strict standards of academic integrity and draw on reliable and credible sources should be referenced. The purpose of this paper is to approach this issue through the prism of scientometrics, guided by the following research questions: Is it necessary to judge the quality of scientific production? How do we evaluate scientific production? What tools should be used in evaluation?

Design/methodology/approach

This paper presents a comparative study of scientometric evaluation practices and tools. A systematic literature review was conducted based on articles published in the field of scientometrics between 1951 and 2022. To analyze the data, the authors performed three types of analysis: a usage analysis, classifying and comparing the different scientific evaluation practices; a type and level analysis, classifying the different scientometric indicators according to their types and application levels; and a similarity analysis, studying the correlation between different quantitative metrics to identify overlap between them.

Findings

This comparative study leads to classify different scientific evaluation practices into externalist and internalist approaches. The authors categorized the different quantitative metrics according to their types (impact, production and composite indicators), their levels of application (micro, meso and macro) and their use (internalist and externalist). Moreover, the similarity analysis has revealed a high correlation between several scientometric indicators such as author h-index, author publications, citations and journal citations.
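The similarity analysis reported above can be illustrated with a small sketch. The indicator values below are invented for illustration, and the use of Spearman's rank correlation is an assumption: the abstract does not specify which correlation coefficient the authors applied.

```python
# Illustrative sketch of a similarity analysis between scientometric
# indicators. The per-author values are hypothetical, and Spearman's
# rank correlation is an assumed choice of coefficient.

def rank(values):
    """Assign 1-based average ranks to a list of values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for a tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-author indicator values.
h_index = [3, 12, 7, 25, 9, 15]
citations = [40, 280, 150, 1100, 210, 260]

rho = spearman(h_index, citations)
print(f"Spearman correlation (h-index vs citations): {rho:.2f}")  # 0.94
```

A high rank correlation like this is what the authors describe: two indicators that order authors almost identically carry largely redundant information, which supports their point about limiting the overuse of similar measures.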

Originality/value

The interest in this study lies deeply in identifying the strengths and weaknesses of research groups and guides their actions. This evaluation contributes to the advancement of scientific research and to the motivation of researchers. Moreover, this paper can be applied as a complete in-depth guide to help new researchers select appropriate measurements to evaluate scientific production. The selection of evaluation measures is made according to their types, usage and levels of application. Furthermore, our analysis shows the similarity between the different indicators which can limit the overuse of similar measures.

Details

VINE Journal of Information and Knowledge Management Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2059-5891

Article
Publication date: 1 April 1999

Heike Puchan, Magda Pieczka and Jacquie L’Etang

Abstract

In the 1990s, evaluation has been at the centre of a continuing debate in public relations. With the new millennium in sight, public relations practitioners, academics and professional bodies are not only calling for an intensified discussion but are also looking for best-practice guidelines, which are seen as a necessary step in the further development towards professionalism.

Details

Journal of Communication Management, vol. 4 no. 2
Type: Research Article
ISSN: 1363-254X
