Search results
1 – 10 of over 41,000
Marjorita Sormunen, Terhi Saaranen, Kerttu Tossavainen and Hannele Turunen
Abstract
Purpose
This paper aims to present the process evaluation for a two‐year (2008‐2010) participatory action research project focusing on home‐school partnership in health learning, undertaken within the Schools for Health in Europe (SHE) in Eastern Finland.
Design/methodology/approach
Two intervention schools and two control schools (grade 5 pupils, parents, and selected school personnel) participated in the study. Process evaluation data were collected from the intervention schools after 10 months of participation by interviewing two classroom teachers and three families. In addition, program documents and relevant statistics were collected from the schools during the intervention.
Findings
Teachers' opinions on the development process varied from more concrete expectations (School A teacher) to overall satisfaction with the implementation (School B teacher). Parents believed that their children would benefit from the project later in life. The context of and differences between the school environments were likely to affect the development process at the school level.
Research limitations/implications
The process evaluation was conducted in only two schools, which limits the generalizability of the findings.
Practical implications
The process evaluation was an essential part of this intervention study and may provide a useful structure and an example for process evaluation for future school‐based health intervention studies.
Originality/value
This study highlights the importance of planning the process evaluation structure before the start of the intervention, brings out the relevance of systematically assessing the process while it is ongoing, and illustrates process evaluation in an action research project.
Stephen Case, Charlie E. Sutton, Joanne Greenhalgh, Mark Monaghan and Judy Wright
Abstract
Purpose
This study aims to examine the extent to which “What Works” reviews in youth justice enable understanding of the features of effectiveness (what works, for whom, in what circumstances and why?) specified in the Effects–Mechanisms–Moderators–Implementation–Economic cost (EMMIE) framework.
Design/methodology/approach
The EMMIE framework was used to examine findings within a sample of “What Works” style reviews of preventative youth justice intervention effectiveness.
Findings
“What Works” style reviews of evaluations of preventative youth justice interventions often omit the details required to examine all of the elements of effectiveness contained within the EMMIE framework. While effectiveness measures were typically provided, the dominant evaluation evidence base struggles to consider moderators of effect, mechanisms of change, implementation differences and cost-effectiveness. Therefore, “What Works” samples cannot facilitate sufficient understanding of “what works for whom, in what circumstances and why?”. The authors argue that Realist Synthesis can fill this gap and shed light on the contexts that shape the mechanisms through which youth justice interventions work.
Originality/value
The authors extended the approach adopted by an earlier review of effectiveness reviews (Tompson et al., 2020), considering more recent reviews of the effectiveness of preventative interventions using the EMMIE framework. Unlike previous reviews, the authors prioritised the utility of the EMMIE framework for assessing the factors affecting the effectiveness of preventative interventions in youth justice.
Carolyn Steele Gray and James Shaw
Abstract
Purpose
Models of integrated care are prime examples of complex interventions, incorporating multiple interacting components that work through varying mechanisms to impact numerous outcomes. The purpose of this paper is to explore summative, process and developmental approaches to evaluating complex interventions to determine how to best test this mess.
Design/methodology/approach
This viewpoint draws on the evaluation and complex intervention literatures to describe the advantages and disadvantages of different methods. The evaluation of the electronic patient reported outcomes (ePRO) mobile application and portal system is presented as an example of how to evaluate complex interventions with critical lessons learned from this ongoing study.
Findings
Although favored in the literature, summative and process evaluations rest on two problematic assumptions: it is possible to clearly identify stable mechanisms of action; and intervention fidelity can be maximized in order to control for contextual influences. Complex interventions continually adapt to local contexts, making stability and fidelity unlikely. Developmental evaluation, which is more conceptually aligned with service-design thinking, moves beyond these assumptions, emphasizing supportive adaptation to ensure meaningful adoption.
Research limitations/implications
Blended approaches that incorporate service-design thinking and rely more heavily on developmental strategies are essential for complex interventions. To maximize the benefit of this approach, three guiding principles are suggested: stress pragmatism over stringency; adopt an implementation lens; and use multi-disciplinary teams to run studies.
Originality/value
This viewpoint offers novel thinking on the debate around appropriate evaluation methodologies to be applied to complex interventions like models of integrated care.
Lynne Caley, Sharon J. Williams, Izabela Spernaes, David Thomas, Doris Behrens and Alan Willson
Abstract
Purpose
It has become accepted practice to include an evaluation alongside learning programmes that take place at work, as a means of judging their effectiveness. There is a tendency to focus such evaluations on the relevance of the intervention and the amount of learning achieved by the individual. The aim of this review is to examine existing evaluation frameworks that have been used to evaluate education interventions and, in particular, assess how these have been used and the outcomes of such activity.
Design/methodology/approach
A scoping review using Arksey and O’Malley’s five-stage framework was undertaken to examine existing evaluation frameworks claiming to evaluate education interventions.
Findings
Forty-five articles were included in the review. The majority of papers concentrated on learner satisfaction and/or learning achieved. Rarely was a structured framework mentioned, or detail of the analytical approach cited. Typically, evaluations lacked baseline data, control groups, longitudinal observations and contextual awareness.
Practical implications
This review has implications for those involved in designing and evaluating work-related education programmes, as it identifies areas where evaluations need to be strengthened and recommends how existing frameworks can be combined to improve how evaluations are conducted.
Originality/value
This scoping review is novel in its assessment and critique of evaluation frameworks employed to evaluate work-related education programmes.
Bettina Friedrich and Oliver John Mason
Abstract
Purpose
Football exercise as an intervention for people with severe mental health problems has attracted increasing interest in recent years. To date, however, there is no comprehensive review of the empirical evidence on the effectiveness of these interventions. In this review, the authors compile the research findings from the peer-reviewed literature as well as the theoretical approaches to football exercise as an adjunct treatment. The overview will be useful to anyone planning to develop a football intervention for this population, as well as to those preparing evaluation studies that measure the effectiveness of such interventions.
Design/methodology/approach
The authors identified research papers in the peer-reviewed literature that report empirical findings on “football interventions” aimed at improving mental and/or physical well-being in participants with mental health problems. The term “football intervention” is used here to mean that participants actively took part in football exercise; studies in which participants only watched football, or in which football served as a metaphor for discussing mental health problems, were excluded. A table indicates the definition of the target group, targeted outcomes, measured outcomes, form and frequency of the intervention, and the research method(s).
Findings
The authors identified 16 studies on 15 projects. The majority were qualitative and reported positive findings, with participants describing increased well-being and connectedness, alleviation of symptoms and improved physical well-being. The outcomes of the quantitative studies, however, were mixed, with some results suggesting that not all intended goals were achieved. More quantitative studies appear to be needed to triangulate the qualitative findings. Interestingly, most interventions take place in the UK. Many studies fail to give detailed methodological information, and the aims of the interventions are often vague or not stated at all.
Research limitations/implications
Due to the heterogeneity of the studies and the relative scarcity of evaluation projects on football interventions for people with mental health problems, the authors could not conduct an in-depth systematic review. Furthermore, the information on methods was often unsatisfactory, and despite efforts to obtain more detailed input from the authors of the cited papers, those gaps could not always be filled. Rather than offering a definitive summary of whether and how football interventions work, the review identifies topics that need to be addressed in the planning of interventions, in evaluation studies, in implementation efforts and in the theoretical discourse.
Practical implications
This paper offers a helpful overview for anyone interested in the theoretical background of football interventions for people with mental health problems, for those planning to develop such interventions, for researchers evaluating the effectiveness of football (or similar exercise) interventions, and for those interested in how such interventions can be implemented. It is likely to contribute to the advancement of alternative exercise interventions aimed at improving mental, physical and social health in people with mental health problems.
Social implications
This paper will help put the topic of football interventions (and similar alternative exercise interventions) further up the public health agenda by providing an overview of the empirical evidence at hand, specifying the advantages of the approach, and pointing out the actions needed to make football a recognised, evidence-based and viable option for adjunct mental health treatment that is attractive to potential participants and funders alike.
Originality/value
There is no comprehensive summary to date that provides a (reasonably) systematic overview of empirical findings on football interventions for people with mental health problems. Furthermore, the literature on the theoretical background of these interventions has been patchy and heterogeneous. This paper aims to fill both gaps and identifies the issues that need to be covered in the planning of such interventions and their evaluations. It will be useful to anyone developing football interventions (or similar alternative adjunct exercise interventions), conducting evaluation research in this area, or interested in their implementation.
Margaret Barry, Colette Reynolds, Anne Sheridan and Róisín Egenton
Abstract
This paper reports on the implementation and evaluation of the JOBS programme in Ireland. This is a training intervention to promote re‐employment and improve mental health among unemployed people that was implemented on a pilot basis in the border region of the Republic and Northern Ireland. Programme participants were unemployed people recruited from local training and employment offices and health agencies. The evaluation indicated that the programme was implemented successfully and led to improved psychological and re‐employment outcomes for the intervention group, lasting up to 12 months post‐intervention. This paper reflects on the implementation issues that arose in adapting an international evidence‐based programme to the local setting and considers the implications of the evaluation findings for the roll-out of the programme on a larger scale.
Per Øystein Saksvik, Margrethe Faergestad, Silje Fossum, Oyeniyi Samuel Olaniyan, Øystein Indergård and Maria Karanika-Murray
Abstract
Purpose
The purpose of this paper is to examine whether a successful implementation of an intervention could result in an effect evaluated independently from a process evaluation. This was achieved by evaluating the effects of an intervention, the “employeeship program,” designed to strengthen the psychosocial work environment by raising employees’ awareness and competence in interpersonal relationships and increasing their responsibility for their everyday work and working environment.
Design/methodology/approach
An employeeship intervention program was developed to improve the psychosocial work environment through reducing conflict among employees and strengthening the social community, empowering leadership, and increasing trust in management. An earlier process evaluation of the program found that it had been implemented successfully. The present effect evaluation supplemented this by examining its effect on the psychosocial work environment using two waves of the organization’s internal survey and comparing changes in the intervention unit at two points and against the rest of the organization.
Findings
The intervention was effective in improving the psychosocial work environment through reducing conflicts among employees and strengthening the social community, empowering leadership, and increasing trust in management.
Research limitations/implications
More attention should be paid to developing and increasing positive psychosocial experiences while simultaneously reducing negative psychosocial experiences, as this employeeship intervention demonstrated.
Practical implications
An intervention focusing on employeeship is an effective way to achieve a healthier psychosocial work environment with demonstrable benefits for individuals and the working unit.
Originality/value
Although organizational-level interventions are complex processes, evaluations that focus on process and effect can offer insights into the workings of successful interventions.
Phuong Leung, Emese Csipke, Lauren Yates, Linda Birt and Martin Orrell
Abstract
Purpose
This study aims to explore the utility of collaborative knowledge sharing with stakeholders in developing and evaluating a training programme for health professionals to implement a social intervention in dementia research.
Design/methodology/approach
The programme consisted of two phases: (1) a development phase guided by Buckley and Caple’s training model; and (2) an evaluation phase drawing on Kirkpatrick’s evaluation model. Survey and interview data were collected from health professionals, people with dementia and their supporters who attended the training programme, delivered the intervention or participated in it. Qualitative data were analysed using framework analysis.
Findings
Seven health professionals participated in consultations in the development phase. In the evaluation phase, 20 intervention facilitators completed evaluations after the one-day training and three took part in intervention interviews. Eight people with dementia and their supporters from the promoting independence in dementia feasibility study participated in focus group interviews. The findings show that intervention facilitators were satisfied with the training programme: they learnt new knowledge and skills through an interactive learning environment and demonstrated competence in motivating people with dementia to engage in the intervention. The training programme was therefore a feasible way to prepare intervention facilitators.
Practical implications
The findings could be implemented in other research training contexts where those delivering research interventions have professional skills but do not have knowledge of the theories and protocols of a research intervention.
Originality/value
This study provided insights into the value of collaborative knowledge sharing between academic researchers and multiple non-academic stakeholders that generated knowledge and maximised power through building new capacities and alliances.
Hamid Roodbari, Karina Nielsen, Carolyn Axtell, Susan E. Peters and Glorian Sorensen
Abstract
Purpose
Realist evaluation seeks to answer the question of “what works for whom in which circumstances?” through developing and testing middle range theories (MRTs). MRTs are programme theories that outline how certain mechanisms of an intervention work in a specific context to bring about certain outcomes. In this paper, the authors tested an initial MRT about the mechanism of participation. The authors used evidence from a participatory organisational intervention in five worksites of a large multi-national organisation in the US food service industry.
Design/methodology/approach
Qualitative data from 89 process tracking documents and 24 post-intervention, semi-structured interviews with intervention stakeholders were analysed using template analysis.
Findings
The operationalised mechanism was worksite managers’ partial engagement with the research team. Six contextual factors (e.g. high workload) impaired participation, and one contextual factor (existing participatory practices) facilitated it. Worksite managers’ participation resulted in limited improvement in their awareness of how working conditions can affect their employees’ safety, health and well-being. Based on these findings, the authors modified the initial MRT into an empirical MRT.
Originality/value
This paper contributes to the understanding of “what works for whom in which circumstances” regarding participation in organisational interventions.
Abstract
Describes a methodological framework for evaluating intervention programmes to establish and develop quality assurance systems in general hospitals, based on the authors’ experience in participating in a specific intervention programme in quality assurance. Both the approach and the design of the evaluation programme were shaped by the unique characteristics of this intervention programme. The evaluation programme was based on a model of organizational behaviour and change developed especially for the introduction of quality assurance systems into hospitals. With modification, the programme can also be used to evaluate other intervention programmes.