Search results

1 – 10 of over 14,000
Open Access
Article
Publication date: 16 August 2022

Patricia Lannen and Lisa Jones

Calls for the development and dissemination of evidence-based programs to support children and families have been increasing for decades, but progress has been slow. This paper…

Abstract

Purpose

Calls for the development and dissemination of evidence-based programs to support children and families have been increasing for decades, but progress has been slow. This paper aims to argue that a singular focus on evaluation has limited the ways in which science and research are incorporated into program development, and to advocate instead for the use of a new concept, “scientific accompaniment,” to expand and guide program development and testing.

Design/methodology/approach

A heuristic is provided to guide research–practice teams in assessing a program’s developmental stage and level of evidence.

Findings

In an idealized pathway, scientific accompaniment begins early in program development, with ongoing input from both practitioners and researchers, resulting in programs that are both effective and scalable. The heuristic also provides guidance for how to “catch up” on evidence when program development and science utilization are out of sync.

Originality/value

While implementation models provide ideas on improving the use of evidence-based practices, social service programs suffer from a significant lack of research and evaluation. Evaluation resources are typically not used by social service program developers, and collaboration with researchers happens late in program development, if at all. There are few resources or models that encourage and guide the use of science and evaluation across program development.

Details

Journal of Children's Services, vol. 17 no. 4
Type: Research Article
ISSN: 1746-6660

Open Access
Article
Publication date: 13 September 2022

Sara Bolduc, John Knox and E. Barrett Ristroph

This article considers how the evaluation of research teams can better account for the challenges of transdisciplinarity, including their larger team size and more diverse and…

Abstract

Purpose

This article considers how the evaluation of research teams can better account for the challenges of transdisciplinarity, including their larger team size and more diverse and permeable membership, as well as the tensions between institutional pressures on individuals to publish and team goals.

Design/methodology/approach

An evaluation team was retained from 2015 to 2020 to conduct a comprehensive external evaluation of a five-year EPSCoR-funded program undertaken by a transdisciplinary research team. The formative portion of the evaluation involved monitoring the program’s developmental progress, while the summative portion tracked observable program outputs and outcomes as evidence of progress toward short- and long-term goals. The evaluation team systematically reviewed internal assessments and gathered additional data for an external assessment via periodic participation in team meetings, participant interviews and an online formative team survey (starting in Year 2).

Findings

Survey participants understood the project’s “Goals and Vision” better than its other aspects. “Work Roles,” and particularly the timeliness of decision-making, were perceived to be a “Big Problem,” specifically with regard to heavy travel by key managers/leadership. For “Communication Channels,” Year 2 tensions included differing views on the extent to which management should be collaborative versus “hierarchical.” These concerns about communication demonstrate that differences in language, culture or status affect the team’s efficiency and working relationships. “Authorship Credit/Intellectual Property” was raised most consistently each year as an area of concern.

Originality/value

The study uses a unique survey approach.

Details

Higher Education Evaluation and Development, vol. 17 no. 2
Type: Research Article
ISSN: 2514-5789

Open Access
Article
Publication date: 18 September 2017

Sharon Mcculloch

The purpose of this paper is to examine the influence of research evaluation policies and their interpretation on academics’ writing practices in three different higher education…

Abstract

Purpose

The purpose of this paper is to examine the influence of research evaluation policies and their interpretation on academics’ writing practices in three different higher education institutions and across three different disciplines. Specifically, the paper discusses how England’s national Research Excellence Framework (REF) and institutional responses to it shape the decisions academics make about their writing.

Design/methodology/approach

In total, 49 academics at three English universities were interviewed. The academics were from one Science, Technology, Engineering and Mathematics discipline (mathematics), one humanities discipline (history) and one applied discipline (marketing). Repeated semi-structured interviews focussed on different aspects of academics’ writing practices. Heads of departments and administrative staff were also interviewed. Data were coded using the qualitative data analysis software, ATLAS.ti.

Findings

Academics’ ability to succeed in their careers was closely tied to their ability to meet quantitative and qualitative targets driven by research evaluation systems, but these targets were predicated on an unrealistic understanding of knowledge creation. Research evaluation systems limited the epistemic choices available to academics, partly because they pushed academics’ writing towards genres and publication venues that conflicted with disciplinary traditions, and partly because they were evenly distributed across institutions and age groups.

Originality/value

This work fills a gap in the literature by offering empirical and qualitative findings on the effects of research evaluation systems in context. It is also one of the few papers to focus on the ways in which individuals’ academic writing practices in particular are shaped by such systems.

Details

Aslib Journal of Information Management, vol. 69 no. 5
Type: Research Article
ISSN: 2050-3806

Open Access
Article
Publication date: 1 December 2006

John Morgan and Thomas Davies

This paper reports results of analyses made at an all-female Gulf Arab university measuring the nature and extent of biases in students' evaluation of faculty. Comparisons are…

Abstract

This paper reports the results of analyses conducted at an all-female Gulf Arab university to measure the nature and extent of biases in students' evaluation of faculty. Comparisons are made with research reporting the nature of similar relationships in North America. Two issues are investigated: 1) What variables (if any) bias faculty evaluation results at an all-female Arab university? 2) Are biasing variables different in nature or magnitude from those reported at North American universities? Using a population of 13,300 faculty evaluation records collected over two school years at Zayed University, faculty evaluation results are correlated with nine potentially biasing factors. Results show that biases in faculty evaluation results do exist; however, these biases are small and strikingly similar in nature to those reported at North American universities.

Details

Learning and Teaching in Higher Education: Gulf Perspectives, vol. 3 no. 2
Type: Research Article
ISSN: 2077-5504

Open Access
Article
Publication date: 6 November 2018

Poul Meier Melchiorsen

The purpose of this paper is to acknowledge that there are bibliometric differences between Social Sciences and Humanities (SSH) vs Science, Technology, Engineering and…

Abstract

Purpose

The purpose of this paper is to acknowledge that there are bibliometric differences between the Social Sciences and Humanities (SSH) and Science, Technology, Engineering and Mathematics (STEM). Neither SSH nor STEM has the single right way of doing research or working as a scholarly community. Accordingly, research evaluation cannot properly be carried out in one framework based on a method from either SSH or STEM alone. However, performing research evaluation in two separate frameworks also has disadvantages: one mode of scholarly practice may be unintentionally favored in evaluations and in the research profiling required for job and grant applications.

Design/methodology/approach

In the case study, the authors propose a tool with which it may be possible, on the one hand, to evaluate across disciplines and, on the other, to retain a multifaceted perspective on the disciplines. Case data describe professors at an SSH and a STEM department at Aalborg University. Ten partial indicators are compiled to build a performance web – a multidimensional description – and a one-dimensional ranking of professors at the two departments. The partial indicators are selected so that they cover a broad variety of scholarly practices and differences in data availability.

Findings

The paper describes a tool that can be used both for a one-dimensional ranking of researchers and for a multidimensional description of their performance.

Research limitations/implications

Limitations of the study are that panel-based evaluation is left out and that the number of partial indicators is set to 10.

Originality/value

The paper describes a new tool that may be an inspiration for practitioners in research analytics.

Details

Journal of Documentation, vol. 75 no. 2
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 27 October 2021

Claartje J. Vinkenburg, Carolin Ossenkop and Helene Schiffbaenker

In this contribution to EDI's professional insights, the authors develop practical and evidence-based recommendations for bias mitigation, discretion…

Abstract

Purpose

In this contribution to EDI's professional insights, the authors develop practical and evidence-based recommendations for bias mitigation, discretion elimination and process optimization in panel evaluations and decisions in research funding. An analysis is made of how the expectation of “selling science” adds layers of complexity to the evaluation and decision process. The insights are relevant for the optimization of similar processes, including publication, recruitment and selection, tenure and promotion.

Design/methodology/approach

The recommendations are informed by experiences and evidence from commissioned projects with European research funding organizations. The authors distinguish between three aspects of the evaluation process: written applications, enacted performance and group dynamics. Vignettes are provided to set the stage for the analysis of how bias and (lack of) fit to an ideal image make it easier for some applicants than for others to be funded.

Findings

In research funding decisions, (over)selling science is expected but creates shifting standards for evaluation, resulting in a narrow band of acceptable behavior for applicants. In the authors' recommendations, research funding organizations, evaluators and panel chairs will find practical ideas and levers for process optimization, standardization and customization, in terms of awareness, accountability, biased language, criteria, structure and time.

Originality/value

Showing how “selling science” in research funding adds to the cumulative disadvantage of bias, the authors offer design specifications for interventions to mitigate the negative effects of bias on evaluations and decisions, improve selection habits, eliminate discretion and create a more inclusive process.

Details

Equality, Diversity and Inclusion: An International Journal, vol. 41 no. 9
Type: Research Article
ISSN: 2040-7149

Open Access
Article
Publication date: 22 May 2020

Hans Englund and Jonas Gerdin

The purpose of this paper is to develop a theoretical model elaborating on the type of conditions that can inhibit (or at least temporarily hold back) “reactive conformance” in…

Abstract

Purpose

The purpose of this paper is to develop a theoretical model elaborating on the type of conditions that can inhibit (or at least temporarily hold back) “reactive conformance” in the wake of an increasing reliance on quantitative performance evaluations of academic research and researchers.

Design/methodology/approach

A qualitative study of a research group at a Swedish university that was recurrently exposed to quantitative performance evaluations of its research activities.

Findings

The empirical findings show how the research group under study exhibited a surprisingly high level of non-compliance and non-conformity in relation to what was deemed important and legitimate by the prevailing performance evaluations. Based on this, the authors identify four important qualities of pre-existing research/er ideals that seem to make them particularly resilient to infiltration by an “academic performer ideal,” namely that they are (1) central and long established, (2) orthogonal to (i.e. very different from) the academic performer ideal as materialized by the performance measurement system, (3) largely shared within the research group and (4) externally legitimate. The premise is that these qualities form an important basis and motivation for not only criticizing, but also contesting, the academic performer ideal.

Originality/value

Extant research generally finds that the proliferation of quantitatively oriented performance evaluations within academia makes researchers adopt a new type of academic performer ideal that promotes research conformity and superficiality. This study draws upon, and adds to, an emerging literature that has begun to problematize this “reactive conformance” thesis by identifying four qualities of pre-existing research/er ideals that can inhibit (or at least temporarily hold back) such “reactive research conformance.”

Details

Accounting, Auditing & Accountability Journal, vol. 33 no. 5
Type: Research Article
ISSN: 0951-3574

Open Access
Article
Publication date: 7 March 2023

Sophie Soklaridis, Rowen Shier, Georgia Black, Gail Bellissimo, Anna Di Giandomenico, Sam Gruszecki, Elizabeth Lin, Jordana Rovet and Holly Harris

The purpose of this co-produced research project was to conduct interviews with people working in, volunteering with and accessing Canadian recovery colleges (RCs) to explore…

Abstract

Purpose

The purpose of this co-produced research project was to conduct interviews with people working in, volunteering with and accessing Canadian recovery colleges (RCs) to explore their perspectives on what an evaluation strategy for RCs could look like.

Design/methodology/approach

This study used a participatory action research approach and comprised semi-structured interviews with 29 people involved with RCs across Canada.

Findings

In this paper, the authors share insights from participants about the purposes of RC evaluation; key elements of evaluation; and the most applicable and effective approaches to evaluation. Participants indicated that RC evaluations should use a personalized, humanistic and accessible approach. The findings suggest that evaluations can serve multiple purposes and have the potential to support both organizational and personal-recovery goals if they are developed with meaningful input from people who access and work in RCs.

Practical implications

The findings can be used to guide evaluations so that the aspects most important to those involved in RCs can inform choices, decisions, priorities, developments and adaptations in RC evaluation processes and, ultimately, in programming.

Originality/value

A recent scoping review revealed that although coproduction is a central feature of the RC model, coproduction principles are rarely acknowledged in descriptions of how RC evaluation strategies are developed. Exploring coproduction processes in all aspects of the RC model, including evaluation, can further the mission of RCs, which is to create spaces where people can come together and engage in mutual capacity-building and collaboration.

Details

Mental Health and Social Inclusion, vol. 28 no. 2
Type: Research Article
ISSN: 2042-8308

Open Access
Article
Publication date: 10 October 2023

Hans-Peter Degn, Steven Hadley and Louise Ejgod Hansen

During the evaluation of European Capital of Culture (ECoC) Aarhus 2017, the evaluation organisation rethinkIMPACTS 2017 formulated a set of “dilemmas” capturing the main…

Abstract

Purpose

During the evaluation of European Capital of Culture (ECoC) Aarhus 2017, the evaluation organisation rethinkIMPACTS 2017 formulated a set of “dilemmas” capturing the main challenges arising during the design of the ECoC evaluation. This functioned as a framework for the evaluation process. This paper aims to present and discuss the relevance of the “Evaluation Dilemmas Model” as subsequently applied to the Galway 2020 ECoC programme evaluation.

Design/methodology/approach

The paper takes an empirical approach, using auto-ethnography and interview data to document and map the dilemmas involved in undertaking an evaluation in two different European cities. The article addresses the development of, and the arguments for, the dilemmas model, which evolved via a process of practice-based research, and considers its potential for wider applicability in the evaluation of large-scale cultural projects.

Findings

The authors conclude that the “Evaluation Dilemmas Model” is a valuable heuristic for considering the endogenous and exogenous issues in cultural evaluation.

Practical implications

The model developed is useful for a wide range of cultural evaluation processes including – but not limited to – European Capitals of Culture.

Originality/value

What has not been addressed in the academic literature is the process of evaluating ECoCs, especially how evaluators often take part in an overall process that involves not just the evaluation itself but also planning and delivering a project, including stakeholder management and the development of evaluation criteria, design and methods.

Details

Arts and the Market, vol. 14 no. 1
Type: Research Article
ISSN: 2056-4945
