Search results

1 – 10 of over 14000
Open Access
Article
Publication date: 16 August 2022

Patricia Lannen and Lisa Jones

Calls for the development and dissemination of evidence-based programs to support children and families have been increasing for decades, but progress has been slow. This paper…

Abstract

Purpose

Calls for the development and dissemination of evidence-based programs to support children and families have been increasing for decades, but progress has been slow. This paper aims to argue that a singular focus on evaluation has limited the ways in which science and research are incorporated into program development, and to advocate instead for the use of a new concept, “scientific accompaniment,” to expand and guide program development and testing.

Design/methodology/approach

A heuristic is provided to guide research–practice teams in assessing the program’s developmental stage and level of evidence.

Findings

In an idealized pathway, scientific accompaniment begins early in program development, with ongoing input from both practitioners and researchers, resulting in programs that are both effective and scalable. The heuristic also provides guidance for how to “catch up” on evidence when program development and science utilization are out of sync.

Originality/value

While implementation models provide ideas on improving the use of evidence-based practices, social service programs suffer from a significant lack of research and evaluation. Evaluation resources are typically not used by social service program developers and collaboration with researchers happens late in program development, if at all. There are few resources or models that encourage and guide the use of science and evaluation across program development.

Details

Journal of Children's Services, vol. 17 no. 4
Type: Research Article
ISSN: 1746-6660

Keywords

Open Access
Article
Publication date: 18 September 2017

Sharon McCulloch

The purpose of this paper is to examine the influence of research evaluation policies and their interpretation on academics’ writing practices in three different higher education…


Abstract

Purpose

The purpose of this paper is to examine the influence of research evaluation policies and their interpretation on academics’ writing practices in three different higher education institutions and across three different disciplines. Specifically, the paper discusses how England’s national research excellence framework (REF) and institutional responses to it shape the decisions academics make about their writing.

Design/methodology/approach

In total, 49 academics at three English universities were interviewed. The academics were from one Science, Technology, Engineering and Mathematics discipline (mathematics), one humanities discipline (history) and one applied discipline (marketing). Repeated semi-structured interviews focussed on different aspects of academics’ writing practices. Heads of departments and administrative staff were also interviewed. Data were coded using the qualitative data analysis software, ATLAS.ti.

Findings

Academics’ ability to succeed in their careers was closely tied to their ability to meet quantitative and qualitative targets driven by research evaluation systems, but these were predicated on an unrealistic understanding of knowledge creation. Research evaluation systems limited the epistemic choices available to academics, partly because they pushed academics’ writing towards genres and publication venues that conflicted with disciplinary traditions and partly because they were unevenly distributed across institutions and age groups.

Originality/value

This work fills a gap in the literature by offering empirical and qualitative findings on the effects of research evaluation systems in context. It is also one of the only papers to focus on the ways in which individuals’ academic writing practices in particular are shaped by such systems.

Details

Aslib Journal of Information Management, vol. 69 no. 5
Type: Research Article
ISSN: 2050-3806

Keywords

Open Access
Article
Publication date: 27 October 2021

Claartje J. Vinkenburg, Carolin Ossenkop and Helene Schiffbaenker

In this contribution to EDI's professional insights, the authors develop practical and evidence-based recommendations for bias mitigation, discretion…


Abstract

Purpose

In this contribution to EDI's professional insights, the authors develop practical and evidence-based recommendations for bias mitigation, discretion elimination and process optimization in panel evaluations and decisions in research funding. An analysis is made of how the expectation of “selling science” adds layers of complexity to the evaluation and decision process. The insights are relevant for the optimization of similar processes, including publication, recruitment and selection, tenure and promotion.

Design/methodology/approach

The recommendations are informed by experiences and evidence from commissioned projects with European research funding organizations. The authors distinguish between three aspects of the evaluation process: written applications, enacted performance and group dynamics. Vignettes are provided to set the stage for the analysis of how bias and (lack of) fit to an ideal image make it easier for some than for others to be funded.

Findings

In research funding decisions, (over)selling science is expected but creates shifting standards for evaluation, resulting in a narrow band of acceptable behavior for applicants. In the authors' recommendations, research funding organizations, evaluators and panel chairs will find practical ideas and levers for process optimization, standardization and customization, in terms of awareness, accountability, biased language, criteria, structure and time.

Originality/value

Showing how “selling science” in research funding adds to the cumulative disadvantage of bias, the authors offer design specifications for interventions to mitigate the negative effects of bias on evaluations and decisions, improve selection habits, eliminate discretion and create a more inclusive process.

Details

Equality, Diversity and Inclusion: An International Journal, vol. 41 no. 9
Type: Research Article
ISSN: 2040-7149

Keywords

Open Access
Article
Publication date: 17 December 2019

Yin Kedong, Shiwei Zhou and Tongtong Xu

To construct a scientific and reasonable indicator system, it is necessary to design a standardized process for the initial selection and optimization testing of indicators. The…


Abstract

Purpose

To construct a scientific and reasonable indicator system, it is necessary to design a standardized process for the initial selection and optimization testing of indicators. The purpose of this paper is to provide theoretical guidance and reference standards for the indicator system design process, laying a solid foundation for the application of the indicator system, by systematically exploring expert evaluation methods to optimize the index system: to enhance its credibility and reliability, to improve its resolution and accuracy, and to reduce its subjectivity and randomness.

Design/methodology/approach

The paper is based on system theory and statistics, and it follows the main line of “relevant theoretical analysis – identification of indicators – expert assignment and quality inspection” to achieve the design and optimization of the indicator system. First, theoretical basis analysis, relevant factor analysis and physical process description are used to clarify the comprehensive evaluation problem and the correlation mechanism. Second, system structure analysis, hierarchical decomposition and indicator set identification are used to complete the initial establishment of the indicator system. Third, based on expert assignment methods such as the Delphi method, statistical analysis, t-tests and non-parametric tests are used to diagnose the quality of expert assignments for a single index; reliability and validity tests are used to correct single-index assignments; and Kendall's coefficient of concordance together with an F-test is used to diagnose the quality of multi-indicator expert assignments.
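The multi-indicator concordance check mentioned above can be illustrated with a short sketch. This is not the authors' code: it is a minimal illustration of Kendall's coefficient of concordance (W), computed under the simplifying assumption that no expert assigns tied scores; the expert scores below are hypothetical.

```python
import numpy as np

def kendalls_w(scores):
    """Kendall's coefficient of concordance (W) for m experts scoring n indicators.

    scores: (m, n) array; each row holds one expert's scores for the n indicators.
    Assumes no tied scores within a row (ties would need a correction term).
    Returns W in [0, 1]; W = 1 means all experts rank the indicators identically.
    """
    scores = np.asarray(scores, dtype=float)
    m, n = scores.shape
    # Rank within each expert: smallest score -> rank 1, largest -> rank n.
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1
    rank_sums = ranks.sum(axis=0)                    # per-indicator rank sums
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # spread of rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical panel: 3 experts score 4 candidate indicators.
experts = [[9, 7, 5, 3],
           [8, 7, 4, 2],
           [9, 6, 5, 1]]
print(kendalls_w(experts))  # → 1.0 (identical rankings despite different raw scores)
```

For significance, W is commonly tested via chi-square = m(n − 1)W with n − 1 degrees of freedom, or via an F-test for small panels, which corresponds to the quality-diagnosis step the abstract describes.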

Findings

Compared with the traditional method of constructing an index system, the optimization process used in the study standardizes the process of index establishment, reduces subjectivity and randomness, and enhances objectivity and scientific rigor.

Originality/value

The innovation and value of the paper are embodied in three aspects. First, in the system design process of the combined indicator system, multi-dimensional index screening and system optimization are carried out to ensure that the index system is scientific, reasonable and comprehensive. Second, the experts’ backgrounds are comprehensively evaluated, and the objectivity and reliability of the experts’ assignments are analyzed and improved on the basis of traditional methods. Third, to assure the quality of expert assignment, t-tests and non-parametric tests are conducted on single indices, and concordance and importance tests are conducted across multiple indicators, enhancing the practicality of expert assignment and ensuring its quality.

Details

Marine Economics and Management, vol. 2 no. 1
Type: Research Article
ISSN: 2516-158X

Keywords


Open Access
Article
Publication date: 10 October 2023

Hans-Peter Degn, Steven Hadley and Louise Ejgod Hansen

During the evaluation of European Capital of Culture (ECoC) Aarhus 2017, the evaluation organisation rethinkIMPACTS 2017 formulated a set of “dilemmas” capturing the main…

Abstract

Purpose

During the evaluation of European Capital of Culture (ECoC) Aarhus 2017, the evaluation organisation rethinkIMPACTS 2017 formulated a set of “dilemmas” capturing the main challenges arising during the design of the ECoC evaluation. This functioned as a framework for the evaluation process. This paper aims to present and discuss the relevance of the “Evaluation Dilemmas Model” as subsequently applied to the Galway 2020 ECoC programme evaluation.

Design/methodology/approach

The paper takes an empirical approach including auto-ethnography and interview data to document and map the dilemmas involved in undertaking an evaluation in two different European cities. Evolved via a process of practice-based research, the article addresses the development of and the arguments for the dilemmas model and considers its potential for wider applicability in the evaluation of large-scale cultural projects.

Findings

The authors conclude that the “Evaluation Dilemmas Model” is a valuable heuristic for considering the endogenous and exogenous issues in cultural evaluation.

Practical implications

The model developed is useful for a wide range of cultural evaluation processes including – but not limited to – European Capitals of Culture.

Originality/value

What has not been addressed in the academic literature is the process of evaluating ECoCs, especially how evaluators often take part in an overall process that is not just about the evaluation but also about planning and delivering a project, including stakeholder management and the development of evaluation criteria, design and methods.

Details

Arts and the Market, vol. 14 no. 1
Type: Research Article
ISSN: 2056-4945

Keywords

Content available
Article
Publication date: 1 March 2017

Louise Kelly and Marina Dorian

The purpose of this conceptual paper is to integrate two previously disparate areas of research: mindfulness and the entrepreneurial process. This present study conceptualizes the…


Abstract

The purpose of this conceptual paper is to integrate two previously disparate areas of research: mindfulness and the entrepreneurial process. This study conceptualizes the impact of mindfulness on the choices entrepreneurs face. Specifically, the research theorizes the positive effects of mindfulness on the opportunity recognition and evaluation process of entrepreneurs. Furthermore, we propose that metacognition mediates this relationship and that emotional self-regulation moderates it. This conceptual research also suggests that mindfulness is positively related to ethical decision-making and to opportunity recognition and evaluation. Finally, compassion is proposed as a factor that mediates the relationship between mindfulness and ethical choices in opportunity recognition.

Details

New England Journal of Entrepreneurship, vol. 20 no. 2
Type: Research Article
ISSN: 2574-8904

Keywords

Open Access
Article
Publication date: 20 July 2021

Supadi Supadi, Evitha Soraya, Hamid Muhammad and Nurhasanah Halim

The voice of school principals represents the principals' thoughts and experiences in their role as teachers' evaluators. It provides principals' perceptions on making sense of the…


Abstract

Purpose

The voice of school principals represents the principals' thoughts and experiences in their role as teachers' evaluators. It conveys principals' perceptions in making sense of teacher evaluation. In qualitative research, voice can reveal the truth and meaning of principals' experiences of teacher evaluation. Their voices in the qualitative interviews were recorded and transcribed into words (Jackson and Mazzei, 2009; Charteris and Smardon, 2018). By listening to the voices of principals in five provinces in Indonesia, this qualitative study intends to explore the principals' sensemaking of teacher evaluation.

Design/methodology/approach

This study adopted a qualitative approach, as it was principally concerned with capturing participants' direct experiences in their natural setting as both teachers' evaluators and school leaders (Patton, 2002). Qualitative interviews and content analysis were used. The qualitative interview is a type of conversation used to explore informants' experiences and interpretations – in this study, those of the principals (Mishler, 1986; Spradley, 1979, in Hatch, 2002). The researchers used the interviews to uncover the structures of meaning principals use in making sense of the policies that govern teacher evaluations and in carrying out evaluations within their local authority. Such implicit structures cannot always be discovered through direct observation, and qualitative interviews can bring this meaning to the surface (Hatch, 2002). By applying qualitative interviews, it is therefore expected that information or “unique” interpretations from the principals can be obtained (Stake, 2010). Content analysis is a research technique for drawing valid conclusions from oral texts within a research context. It can provide new insights, improve researchers' understanding of particular phenomena, or inform practical actions through the use of verbal data collected as answers to open interview questions (Krippendorff, 2004).

Findings

There are three important findings relating to principals' sensemaking of teacher evaluation: teachers' length of service, principals' perceptions of teacher evaluations and consistency in teacher performance improvement. The principals' perception greatly influences their beliefs and sensemaking of teacher evaluation. In essence, teacher evaluation has not been used to identify high-quality teachers. Principals focus more on the improvement of teachers' welfare than on teachers' actual performance.

Research limitations/implications

Future research should explore principals' attitudes toward stakeholders when student achievement is not in line with consistent increases in teachers' performance ratings. It is also necessary to investigate policy makers' responses when consistent improvement in teacher evaluations is not matched by student achievement. Finally, future work should examine how to eliminate the culture of joint responsibility without causing friction in the school environment.

Originality/value

The authors hereby declare that this submission is their own work and, to the best of their knowledge, contains no material previously published or written by another person, or substantial proportions of material that have been accepted for the award of any other degree or diploma or by any other publisher.

Details

International Journal of Educational Management, vol. 35 no. 6
Type: Research Article
ISSN: 0951-354X

Keywords

Open Access
Article
Publication date: 22 May 2020

Hans Englund and Jonas Gerdin

The purpose of this paper is to develop a theoretical model elaborating on the type of conditions that can inhibit (or at least temporarily hold back) “reactive conformance” in…


Abstract

Purpose

The purpose of this paper is to develop a theoretical model elaborating on the type of conditions that can inhibit (or at least temporarily hold back) “reactive conformance” in the wake of an increasing reliance on quantitative performance evaluations of academic research and researchers.

Design/methodology/approach

A qualitative study of a research group at a Swedish university that was recurrently exposed to quantitative performance evaluations of its research activities.

Findings

The empirical findings show how the research group under study exhibited a surprisingly high level of non-compliance and non-conformity in relation to what was deemed important and legitimate by the prevailing performance evaluations. Based on this, we identify four important qualities of pre-existing research/er ideals that seem to make them particularly resilient to infiltration by an “academic performer ideal,” namely that they are (1) central and long-established, (2) orthogonal to (i.e. very different from) the academic performer ideal as materialized by the performance measurement system, (3) largely shared within the research group and (4) externally legitimate. The premise is that these qualities form an important basis and motivation for not only criticizing, but also contesting, the academic performer ideal.

Originality/value

Extant research generally finds that the proliferation of quantitatively oriented performance evaluations within academia makes researchers adopt a new type of academic performer ideal which promotes research conformity and superficiality. This study draws upon, and adds to, an emerging literature that has begun to problematize this “reactive conformance-thesis” through identifying four qualities of pre-existing research/er ideals that can inhibit (or at least temporarily hold back) such “reactive research conformance.”

Details

Accounting, Auditing & Accountability Journal, vol. 33 no. 5
Type: Research Article
ISSN: 0951-3574

Keywords

Open Access
Article
Publication date: 6 November 2018

Poul Meier Melchiorsen

The purpose of this paper is to acknowledge that there are bibliometric differences between Social Sciences and Humanities (SSH) vs Science, Technology, Engineering and Mathematics (STEM)…


Abstract

Purpose

The purpose of this paper is to acknowledge that there are bibliometric differences between Social Sciences and Humanities (SSH) and Science, Technology, Engineering and Mathematics (STEM). Neither SSH nor STEM has the one right way of doing research or working as a scholarly community. Accordingly, research evaluation cannot be done properly in a single framework based on a method from either SSH or STEM alone. However, performing research evaluation in two separate frameworks also has disadvantages: one way of scholarly practice may be favored unintentionally in evaluations and in the research profiling that is necessary for job and grant applications.

Design/methodology/approach

In the case study, the authors propose a tool that makes it possible, on the one hand, to evaluate across disciplines and, on the other, to keep a multifaceted perspective on the disciplines. Case data describe professors at an SSH and a STEM department at Aalborg University. Ten partial indicators are compiled to build a performance web – a multidimensional description – and a one-dimensional ranking of professors at the two departments. The partial indicators are selected so that they cover a broad variety of scholarly practice and differences in data availability.
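As a rough illustration of how partial indicators can yield both a multidimensional profile (a “performance web”) and a one-dimensional ranking, here is a hedged sketch. The min–max normalization, the equal weighting of indicators and the example numbers are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def profile_and_rank(scores):
    """Min-max normalize each partial indicator, then combine.

    scores: (researchers, indicators) array of raw indicator values.
    Returns (profiles, composite, order):
      profiles  - normalized matrix; rows can be drawn as a radar-style
                  'performance web' (multidimensional description)
      composite - mean of normalized indicators (one-dimensional score)
      order     - researcher indices ranked from highest to lowest composite
    """
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant indicators
    profiles = (scores - lo) / span
    composite = profiles.mean(axis=1)
    order = np.argsort(-composite)           # descending composite score
    return profiles, composite, order

# Hypothetical data: 3 researchers, 2 partial indicators.
profiles, composite, order = profile_and_rank([[10, 4], [0, 0], [5, 2]])
print(list(order))  # → [0, 2, 1]
```

The same normalized matrix feeds both outputs, which is the design choice the abstract points at: the ranking is a deliberate flattening of the richer profile, so readers can move between the two views.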

Findings

The paper describes a tool that can be used both for a one-dimensional ranking of researchers and for a multidimensional description.

Research limitations/implications

Limitations of the study are that panel-based evaluation is left out and that the number of partial indicators is set to 10.

Originality/value

The paper describes a new tool that may be an inspiration for practitioners in research analytics.

Details

Journal of Documentation, vol. 75 no. 2
Type: Research Article
ISSN: 0022-0418

Keywords
