Making complex QA issues easier

International Journal of Health Care Quality Assurance

ISSN: 0952-6862

Article publication date: 18 July 2008

Citation

Hurst, K. (2008), "Making complex QA issues easier", International Journal of Health Care Quality Assurance, Vol. 21 No. 5. https://doi.org/10.1108/ijhcqa.2008.06221eaa.001

Publisher: Emerald Group Publishing Limited

Copyright © 2008, Emerald Group Publishing Limited


Article Type: Editorial. From: International Journal of Health Care Quality Assurance, Volume 21, Issue 5

Health service change, like death and taxes, is a universal constant. We include in this issue, therefore, Peltokorpi et al.’s detailed article on operating room (OR) change management theory and practice, focusing on efficiency and effectiveness. Readers may feel, as I do, that the project managers’ suggestions for improving theatre utilisation (57-73 per cent before the study) are reasonable and sensible. Yet why do some stakeholders resist policy and practice changes? Indeed, readers will be surprised how high the change management failure rate generally is. Consequently, the authors’ guide to selecting and steering change management projects to success will be a useful tool in the manager’s workshop.
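For readers unfamiliar with the utilisation figure quoted above, a minimal sketch of how theatre utilisation is typically calculated may help; the session length and case times below are invented for illustration, not drawn from Peltokorpi et al.:

```python
# Theatre utilisation: time actually used for surgery as a share of
# scheduled session time. Session length and case times are invented.
def theatre_utilisation(case_minutes, session_minutes):
    return sum(case_minutes) / session_minutes

# A hypothetical eight-hour (480-minute) session with four cases:
cases = [95, 120, 60, 70]  # surgical minutes per case
print(f"Utilisation: {theatre_utilisation(cases, 480):.0%}")  # -> 72%
```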

Another elegantly simple but vital topic – measuring accreditation surveyor practice – is explored by Greenfield and his Australian colleagues. Simply put, do surveyor cognitive and behavioural styles influence accreditation structures, processes and outcomes? Many accreditation elements, such as quality standards, are easily controlled; others, like accreditor performance, are loose cannons. In research and development (R&D) terms this is a psychometric issue, specifically one of inter-rater reliability. Evaluating accreditor performance is not easy, which may explain the thin literature. Readers will be comfortable, however, with the authors’ ethnographic fieldwork approach, which yields an important finding: three, and possibly four, distinct accreditor styles. Why the fuss? Readers may at least be intrigued by matching their own styles to the typology. More importantly, understanding accreditor typologies means, for example, being able to pair junior and senior accreditors for mentoring purposes.
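To illustrate the inter-rater reliability point, here is a minimal sketch computing Cohen’s kappa – a common chance-corrected agreement statistic – for two hypothetical accreditors rating the same ten standards. The ratings are invented; Greenfield et al.’s fieldwork was ethnographic, so this is an analogy, not their method:

```python
# Cohen's kappa: chance-corrected agreement between two raters.
# Ratings below are invented for illustration only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical accreditors rating ten standards: met ("M") or not ("N").
a = ["M", "M", "N", "M", "M", "N", "M", "M", "N", "M"]
b = ["M", "M", "N", "M", "N", "N", "M", "M", "M", "M"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # ~0.52: moderate agreement
```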

Turning to broader and deeper QA issues, Kumar and Steinebach tackle the complex area of clinical errors in a Six Sigma, poka-yoke (mistake avoidance) context. Before offering their helpful advice and recommendations, the authors provide a comprehensive US healthcare activity and cost analysis, which is sobering and sometimes staggering. One especially important analysis describes the connection (indeed, vicious spiral) between climbing US healthcare error rates and costs, notably their impact on the mounting number of uninsured US citizens. Moreover, extended lengths of stay and the inevitable cost hikes that follow treatment and care complications (e.g. drug error side-effects) compound the problem. Readers are left in no doubt that error trapping and avoidance programmes must be compulsory in any health service. Consequently, the authors’ cause-and-effect diagrams are not only illuminating but also practical. They do not shy away from exploring the ups and downs of enacting their recommendations, notably the barriers managers and practitioners face. Even so, the final section on likely error-rate and cost reductions following sound poka-yoke implementation is motivating.
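For readers who want to connect the error-rate discussion to Six Sigma arithmetic, a brief sketch follows, converting an error count into defects per million opportunities (DPMO) and an approximate sigma level. The prescription figures are hypothetical, not taken from Kumar and Steinebach:

```python
# Six Sigma arithmetic: convert an error count to defects per million
# opportunities (DPMO) and an approximate sigma level. All figures are
# hypothetical, not Kumar and Steinebach's data.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(d):
    """Approximate sigma level, with the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - d / 1_000_000) + 1.5

# Say 20 dispensing errors in 5,000 prescriptions, each prescription
# carrying 4 error opportunities (drug, dose, route, patient):
d = dpmo(defects=20, units=5_000, opportunities_per_unit=4)
print(f"DPMO = {d:.0f}, sigma level = {sigma_level(d):.1f}")  # 1000, ~4.6
```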

Although the number of empirically based psychiatric QA publications is increasing, readers may agree that materials are thin. We are delighted, therefore, to publish Joosten and his Netherlands colleagues’ article, in which they unravel methods for improving psychiatric service efficiency and effectiveness by merging the attributes of two relatively complex psychiatric policies – care programmes (CPs) and integrated care pathways (ICPs). Conceptually, ICPs and CPs appear hard to merge, but the authors’ operational definitions and analogies are a good starting point for our understanding. It is also good to see the authors building on previous IJHCQA articles; in this case they use (along with other, related theoretical approaches) Towill’s and Keen et al.’s supply chain and network arguments (Vol. 19, No. 4) to underpin their work. Their discussion of productive and non-productive psychiatric treatment and care activity, clearly a fruitful R&D area, is particularly helpful.

Even though they are often anecdotal, I find vulnerable patients’ accounts of healthcare interventions that went wrong particularly troubling. Nevertheless, as wake-up calls they are always worth publishing. Kent’s anthropological approach to patient complaints in a Swedish healthcare context (although it could be any developed country) offers new theoretical and practical insights, and is another reason to publish. She underlines how important it is for QA managers, practitioners and academics to periodically bring patients’ rights, expectations and perceptions to the forefront. Carefully examining the gap between what dissatisfied patients expect and what actually happens reveals important lessons. If we are to re-establish public trust in our welfare services, then more such accounts need publishing so that effective actions can be taken.

Returning to a simple but fundamental QA R&D issue – de la Orden and her Spanish colleagues calculate how many clinical audit cases should be sampled to represent the population from which they are drawn. Specifically, they explore emergency department (ED) QA issues. Clearly, since the ED is one of the main entry gates to healthcare, its evaluation is important. But which clinical indicators should be measured, and how many patients should we include? Simply, in resource-constrained healthcare, staff have no desire to waste money and time measuring unnecessarily. The authors therefore borrow the lot quality assurance sampling (LQAS) method from industry. The mechanism is explained, and the authors provide a useful look-up table example for deciding sample sizes. Unexpected and important side issues also emerge – for example, the importance of information management and technology to clinical audit, notably how inadequate patient record systems undermine clinical audits.
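For the curious, a minimal sketch of the binomial logic behind an LQAS look-up table follows: finding the smallest sample size n and decision threshold d that keep both error risks acceptable. The compliance thresholds and risk levels below are illustrative assumptions, not the authors’ figures:

```python
# LQAS decision rule: accept an audit 'lot' if at most d of n sampled
# cases are non-compliant. Thresholds and risks below are illustrative
# assumptions, not de la Orden et al.'s figures.
from math import comb

def accept_prob(n, d, p):
    """P(at most d non-compliant cases in a sample of n), given that
    each case is non-compliant with probability p."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(d + 1))

def lqas_plan(p_good, p_bad, alpha=0.05, beta=0.10, max_n=200):
    """Smallest (n, d) that rejects a good lot (rate p_good) with
    probability <= alpha and accepts a bad lot (rate p_bad) with
    probability <= beta."""
    for n in range(1, max_n + 1):
        for d in range(n + 1):
            if (1 - accept_prob(n, d, p_good) <= alpha
                    and accept_prob(n, d, p_bad) <= beta):
                return n, d
    return None

# Accept services with ~5% non-compliance; flag those reaching 20%:
print(lqas_plan(p_good=0.05, p_bad=0.20))
```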

Patient satisfaction studies and reports are among the commonest manuscripts submitted to IJHCQA. Consequently, is it possible to generate new insights? Bakar et al. manage to produce new issues on several levels. Although theirs is only a pilot study – aimed at testing a modified SERVQUAL questionnaire’s psychometric properties – the authors compare and contrast SERVQUAL differences among Turkish patients from diverse socio-economic groups and between patients attending several Turkish hospitals. Their starting point is the ceiling effect (consistently high patient satisfaction scores) that evaluators encounter, which limits continuous quality improvement strategies. The SERVQUAL instrument, as readers know, was developed for non-public services. However, it is adaptable; using a modified SERVQUAL questionnaire, the authors were able not only to raise the “ceiling” but also to show differences between patient groups and between patients attending different hospitals. They also explore creative and useful validity and reliability measures – techniques that readers might emulate. They close with a fairly telling point: patient satisfaction is probably the most independent health service measure we have at our disposal. If we can overcome patients’ fears of retribution for criticising staff, then striving for the perfect patient expectation and satisfaction measure is justified.
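For readers new to SERVQUAL, a minimal sketch of how gap scores are computed: perceptions minus expectations, averaged per dimension. The three dimensions shown are a subset of SERVQUAL’s five, and the ratings are invented, not Bakar et al.’s data:

```python
# SERVQUAL gap scores: perception minus expectation, averaged per
# dimension; negative gaps mean the service fell short of expectations.
# Three of SERVQUAL's five dimensions, with invented 1-7 ratings.
responses = {
    # dimension: (expectation ratings, perception ratings)
    "tangibles":      ([7, 6, 7], [5, 5, 6]),
    "reliability":    ([7, 7, 6], [6, 6, 6]),
    "responsiveness": ([6, 7, 7], [4, 5, 5]),
}

for dim, (expect, perceive) in responses.items():
    gap = sum(p - e for e, p in zip(expect, perceive)) / len(expect)
    print(f"{dim:>14}: gap = {gap:+.2f}")
```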

Bakar et al.’s study is an excellent segue to the last précis in this editorial. Ozaki and his Japanese colleagues provide a detailed psychometric account of their Japanese Physician Job Satisfaction questionnaire. Wearing my other (workforce planning and development) hat, I know that health service staff turnover is a hot topic (for example, an English nationwide dataset has been released that offers fascinating insights). As the authors remind us, physician job satisfaction studies are thin. Their starting point is Japanese hospital physician recruitment and retention problems caused by rising workloads’ de-motivating effects. Workforce planners sit up and take notice of such headlines because they fall into the healthcare policy and practice category known as “oil-tanker syndrome” problems; that is, course corrections made today take several years to show effects. Although the article is mainly about the new instrument’s psychometric properties, and consequently only a pilot study, important issues emerge – notably predicting the reasons why physicians leave. Their validation and reliability testing techniques are Rolls-Royce quality – techniques that readers could also emulate (a sketch of one common reliability check follows below). For example, the researchers strengthen their findings using methodological triangulation – using qualitative and quantitative approaches to explore the same problem. Moreover, although the quantitative work is technically complex, it is clearly and carefully explained. Their evidence that the new questionnaire consistently does what it says on the label is compelling. Consequently, I have no hesitation in commending this article (and the other seven) to our readers.
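As a closing technical aside, one reliability check that almost every such validation includes is internal consistency, usually reported as Cronbach’s alpha. A minimal sketch follows; the item scores are invented for illustration, not Ozaki et al.’s data:

```python
# Cronbach's alpha: internal consistency of a multi-item scale.
# Item scores are invented for illustration.
def cronbach_alpha(items):
    """items: one list of respondent scores per questionnaire item."""
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Four hypothetical satisfaction items scored 1-5 by six respondents:
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
    [4, 5, 3, 4, 2, 4],
]
print(f"alpha = {cronbach_alpha(items):.2f}")  # ~0.93: high consistency
```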

Keith Hurst
