Search results
Sharon J. Williams, Lynne Caley, Mandy Davies, Dominique Bird, Sian Hopkins and Alan Willson
Abstract
Purpose
Quality improvement collaboratives (QICs) are a popular approach to improving healthcare services and patient outcomes. This paper evaluates a QIC implemented by a large, integrated healthcare organisation in Wales in the UK.
Design/methodology/approach
This evaluation study draws on two well-established evaluation frameworks: Kirkpatrick's approach, to gather data on participant satisfaction and learning, and Stake's approach, to gather data and form judgements about the impact of the intervention. A mixed methods approach was taken, which included documentary analysis, surveys, semi-structured interviews, and observation of the QIC programme.
Findings
Together, the two frameworks provide a rounded interpretation of the extent to which the QIC intervention was fit for purpose. Broadly, the evaluation of the QIC was positive, with some areas for improvement identified.
Research limitations/implications
This study is limited to a QIC conducted within one organisation. Further testing of the hybrid framework is needed, extending to different designs of QICs.
Practical implications
A hybrid framework is provided to assist those charged with designing and evaluating QICs.
Originality/value
Evaluation studies of QICs are limited and, where they exist, tend to adopt a single framework. Given the complexities of undertaking quality improvement within healthcare, this study uniquely takes a hybrid approach.
Lynne Caley, Sharon J. Williams, Izabela Spernaes, David Thomas, Doris Behrens and Alan Willson
Abstract
Purpose
It has become accepted practice to include an evaluation alongside learning programmes that take place at work, as a means of judging their effectiveness. There is a tendency to focus such evaluations on the relevance of the intervention and the amount of learning achieved by the individual. The aim of this review is to examine existing evaluation frameworks that have been used to evaluate education interventions and, in particular, assess how these have been used and the outcomes of such activity.
Design/methodology/approach
A scoping review using Arksey and O'Malley's five-stage framework was undertaken to examine existing evaluation frameworks claiming to evaluate education interventions.
Findings
Forty-five articles were included in the review. The majority of papers concentrate on learner satisfaction and/or learning achieved. Rarely is a structured framework mentioned or the approach to analysis detailed. Typically, evaluations lacked baseline data, control groups, longitudinal observations and contextual awareness.
Practical implications
This review has implications for those involved in designing and evaluating work-related education programmes, as it identifies areas where evaluations need to be strengthened and recommends how existing frameworks can be combined to improve how evaluations are conducted.
Originality/value
This scoping review is novel in its assessment and critique of evaluation frameworks employed to evaluate work-related education programmes.
Ann Elizabeth Esain, Sharon J. Williams, Sandeep Gakhal, Lynne Caley and Matthew W. Cooke
Abstract
Purpose
This article aims to explore quality improvement (QI) at individual, group and organisational level. It also aims to identify restraining forces using formative evaluation and discuss implications for current UK policy, particularly quality, innovation, productivity and prevention.
Design/methodology/approach
Learning events combined with work-based projects, focusing on individual and group responses, are evaluated. A total of 11 multi-disciplinary groups drawn from NHS England healthcare Trusts (self-governing operational groups) were sampled. These Trusts have different geographic locations, and participants were drawn from primary, secondary and commissioning arms. Mixed methods were used: questionnaires, observations and reflective accounts.
Findings
The paper finds that the tension between solution-seeking and problem identification causes confusion and influences success. Time for problem solving to achieve QI was absent. Feedback and learning structures are often not in place or are inflexible. The limited focus on patient-centred services may be related to past assumptions regarding organisational design; these assumptions and models therefore need to be understood and challenged.
Practical implications
The authors revise the Plan, Do, Study, Act (PDSA) model by adding an explicit problem-identification step, thereby avoiding solution-focused habits, and demonstrate the need for more formative evaluations to inform managers and policy makers about healthcare QI processes.
Originality/value
Although UK-centric, the study addresses a quality agenda that is also a US and European theme, so the findings may help those embarking on this journey or those struggling with QI.