Editorial – all changed, changed utterly: again?

Article Type: Editorial. From: Clinical Governance: An International Journal, Volume 19, Issue 1

Navigating the regulatory maze

1. Introduction

In 1998, an editorial in the British Medical Journal (Smith, 1998) borrowed the phrase “All changed, changed utterly” from W.B. Yeats’s poem “Easter 1916” to describe the impact of the Bristol case and the subsequent inquiries into what went wrong.

The crisis in Bristol rang the death knell for the relationship of trust between the medical profession and the public, and for professional self-regulation. It eventually resulted in the reform of the General Medical Council (the GMC) and the transfer of audit and regulatory responsibility away from clinicians to an independent authority, the Healthcare Commission. Smith wrote:

The GMC identified several issues that arose during the course of its inquiry that concern the practice of medicine and surgery generally and that need to be addressed by the medical profession.

* The need for clearly understood clinical standards.

* How clinical competence and technical expertise are assessed and evaluated.

* Who carries the responsibility in team based care.

* The training of doctors in advanced procedures.

* How to approach the so called learning curve of doctors undertaking established procedures.

* The reliability and validity of the data used to monitor doctors’ personal performance.

* The use of medical and clinical audit.

* The appreciation of the importance of factors, other than purely clinical ones, that can affect clinical judgment, performance, and outcome.

* The responsibility of a consultant to take appropriate actions in response to concerns about his or her performance.

* The factors which seem to discourage openness and frankness about doctors’ personal performance.

* How doctors explain risks to patients.

* The ways in which people concerned about patients’ safety can make their concerns known.

* The need for doctors to take prompt action at an early stage when a colleague is in difficulty, in order to offer the best chance of avoiding damage to patients and the colleague and of putting things right.

And yet, in 2013, we once again face calls for change in the health care system and its regulation. Revelations about failings in care at Stafford Hospital have been followed by further failings at Morecambe Bay and most recently in Colchester. In this extended editorial we shall consider whether the responses to these most recent events are any more likely to prevent a repetition than the post-Bristol responses.

The complexity of the current situation has been exacerbated by the failings of the regulators themselves.

2. The events that inspired this editorial

2.1 The events at Stafford

The report of the public inquiry into the events at Mid Staffordshire General Hospital NHS Trust (Francis, 2013) opens with the following paragraph:

Between 2005 and 2008 conditions of appalling care were able to flourish in the main hospital serving the people of Stafford and its surrounding area. During this period this hospital was managed by a Board which succeeded in leading its Trust (the Mid Staffordshire General Hospital NHS Trust) to foundation trust (FT) status. The Board was one which had largely replaced its predecessor because of concerns about the then NHS Trust’s performance. In preparation for its application for FT status, the Trust had been scrutinised by the local Strategic Health Authority (SHA) and the Department of Health (DH). Monitor (the independent regulator of NHS foundation trusts) had subjected it to assessment. It appeared largely compliant with the then applicable standards regulated by the Healthcare Commission (HCC). It had been rated by the NHS Litigation Authority (NHSLA) for its risk management. Local scrutiny committees and public involvement groups detected no systemic failings. In the end, the truth was uncovered in part by attention being paid to the true implications of its mortality rates, but mainly because of the persistent complaints made by a very determined group of patients and those close to them. This group wanted to know why they and their loved ones had been failed so badly.

The first inquiry was held by the Healthcare Commission (the precursor to the Care Quality Commission). Its report contained damning criticism of the care provided by the Trust and drew out a number of conclusions, including:

* There was a lack of basic care across a number of wards and departments at the Trust.

* The culture at the Trust was not conducive to providing good care for patients or providing a supportive working environment for staff; there was an atmosphere of fear of adverse repercussions; a high priority was placed on the achievement of targets; the consultant body largely dissociated itself from management; there was low morale among staff; there was a lack of openness and an acceptance of poor standards.

* Management thinking during the period under review was dominated by financial pressures and achieving FT status, to the detriment of quality of care.

* There was a management failure to remedy the deficiencies in staff and governance that had existed for a long time, including an absence of effective clinical governance.

* There was a lack of urgency in the Board’s approach to some problems, such as those in governance.

* Statistics and reports were preferred to patient experience data, with a focus on systems, not outcomes.

* There was a lack of internal and external transparency regarding the problems that existed at the Trust.

One of the key issues raised in the report was the role played by external organisations which had oversight of the Trust, leading to the following paragraph and recommendation:

Having considered the evidence and representations referred to in Section H, I conclude that there is a need for an independent examination of the operation of each commissioning, supervising and regulatory body, with respect to their monitoring function and capacity to identify hospitals failing to provide safe care: in particular:

Recommendation 16: The Department of Health should consider instigating an independent examination of the operation of commissioning, supervisory and regulatory bodies in relation to their monitoring role at Stafford hospital, with the objective of learning lessons about how failing hospitals are identified (Healthcare Commission, 2010).

The Department of Health did indeed undertake such an examination, in the form of a performance and capability review of the Care Quality Commission (United Kingdom Department of Health, 2012). The Department’s National Director for Primary Care, Dr David Colin-Thomé, was also invited to review the lessons for commissioners following this first inquiry (Colin-Thomé, 2009).

As “challenges”, the performance and capability review identified both the need to focus on quality and to act swiftly to eliminate poor-quality care, and the need for strategic direction and clear measures of success.

The review’s findings were organised along six key lines of inquiry: strategy; resources and prioritisation; accountability; engagement and communication; development of the regulatory model; and delivery of the regulatory model. It made twenty-three recommendations, some of which are summarised here.

The respective regulatory bodies needed to clarify their distinct roles and they needed also to clarify what they were trying to change and accomplish in specific sectors, over time. They needed to improve strategic planning and analytical capacity, develop clearer measures of their own success and to explore alternative regulatory options. The regulatory regime needed to be transparent, so that there could be consistency between the CQC and its locally-based inspection teams. The latter needed to have access to specific expertise in the sector under investigation. These and the other recommendations suggest a plan of investigation tailored to the specific site, rather than a generic model.

The Colin-Thomé review of lessons learnt for commissioners and performance managers following the Healthcare Commission investigation at Mid-Staffordshire found:

There was over reliance by the PCTs and the SHAs on Monitor and the Healthcare Commission to ensure quality of care at Mid Staffordshire hospital trust … the PCTs and SHAs relied on too few sources. They assumed that regulation of quality would be fulfilled by the Healthcare Commission and Monitor and took on a lesser role than was appropriate or required of them. The regulators should also have taken a stronger role in sharing their concerns more explicitly with the PCT when they came to light.

There was also lack of clarity over the respective roles of the SHA, PCT and Monitor once Mid Staffordshire hospital trust achieved foundation trust status. The SHA and PCT, in particular, were unsure of their ongoing management relationship with the Foundation Trust, in relation to the independent regulation role taken on by Monitor. Monitor have received criticism for approving Mid Staffordshire’s foundation trust status when the trust was to be investigated by the Healthcare Commission, and have since reviewed their process for approval.

There was an obvious lack of clarity over the respective roles and responsibilities of each of the organisations with an over reliance that another organisation was responsible for identifying, and acting on concerns, which when they did not, allowed failure to continue.

2.2 The events at Morecambe Bay

The maternity unit at Furness General Hospital (FGH) was the subject of a wide range of negative media reports, including the leaked disclosure that the unit had the highest death rates in England, following an inquest at which the Coroner issued a Rule 43 letter to the Trust in June 2011. The Rule 43 letter stated that action should be taken to prevent any further deaths. Although the inquest concerned a death in 2008, the Coroner believed that some aspects of the care failures in the case were still relevant in 2011. The areas highlighted in the letter included records management, team working, pressure of work and continuity of care.

In September 2011, Cumbria Constabulary assigned 15 officers to investigate the deaths of at least four babies and two mothers at FGH during 2008; it was alleged that midwives at the hospital had destroyed medical records to cover up their mistakes. Subsequently, in June 2013, Cumbria Constabulary announced that it would pursue only one case and that the other complaints would not proceed to criminal prosecution.

In 2011, the Care Quality Commission (CQC) issued a damning report and threatened to close the maternity ward at FGH entirely by 21 November 2011 if major changes were not implemented. The Nursing and Midwifery Council identified 19 areas requiring urgent improvement, including governance, risk management, collaborative working and leadership.

In November 2011, with the investigation ongoing, the University Hospitals of Morecambe Bay NHS Foundation Trust announced plans to replace outdated equipment and completely rebuild FGH’s maternity ward at a cost of £5 million. A random inspection by the CQC in September 2012 found that the recommended changes had been made and that quality and safety standards were being met.

However, the problems at the Morecambe Bay Trust highlight problems in the regulatory regime (see below).

2.3 The events at Colchester

In November 2013, the CQC published an inspection report on Colchester General Hospital under its new regulatory regime. The report found that action was needed in the areas of care and welfare of people who use services; assessing and monitoring the quality of service provision; and records.

We found that the provider did not have adequate systems in place to maintain the safety and welfare of people receiving treatment on the cancer pathway. We found that the medical records of people’s treatments and appointments had been changed and were not accurate on the cancer wait time system. The changes did not correlate with the treatment recorded in the patient’s medical records. This meant that people were at risk of delayed treatment and in 22 cases people were at risk of or did experience delays in their care which could have a negative impact on the person using the service. We have provided the names of those patients affected to the trust who have assured us that they will ensure those patient’s treatments are reviewed to ensure their safety and welfare. The Trust did not have sufficient arrangements to promote effective performance of the cancer service. We found that the concerns raised by staff in relation to changes made to people’s cancer pathways were not appropriately managed, investigated or responded to by senior staff of the Trust. Staff we spoke to provided examples of bullying and harassment by the management team that had been reported in respect of changes of the cancer pathways. We found that managers did not show clear leadership in a way that ensured the safety and welfare of patients by providing a high quality of care. We viewed an internal investigation completed regarding changes to the cancer pathway which was not appropriately managed by the trust (Care Quality Commission, 2013).

2.4 The response of the Care Quality Commission to Stafford

The events at Stafford predated the establishment of the CQC; the first inquiry was carried out by the Healthcare Commission, the CQC’s precursor organisation. The Francis inquiry highlighted the deficiencies in the regulatory apparatus of the CQC:

The current structure of standards, laid down in regulation, interpreted by categorisation and development in guidance, and measured by the judgement of a regulator, is clearly an improvement on what has gone before, but it requires improvement.

2.5 The response of the Care Quality Commission to Morecambe Bay

In 2010, the CQC inspected maternity services at Furness General Hospital (part of University Hospitals of Morecambe Bay NHS Foundation Trust (UHMB)) and the unit was given a clean bill of health. In 2011, the CQC carried out a further review in response to the Coroner’s concerns. In summary, they reported:

We found that University Hospitals of Morecambe Bay NHS Foundation Trust was not meeting one or more essential standards. We have taken enforcement action against the provider to protect the safety and welfare of people who use services (CQC, 2011).

In 2013, Grant Thornton published a report into the conduct of the CQC during this period. The firm had been asked to investigate a range of matters, including why the CQC had not identified serious issues in the provision of maternity services at UHMB.

Key findings included:

* Poor governance at CQC (specifically around operational weaknesses) as well as questionable decision making by the Regulator.

* No evidence of a “cover-up”.

* Evidence of the apparently deliberate suppression of an internal CQC report entitled “Summary of the internal review of the regulatory decisions and activity at UHMB”.

* Indications that, at certain times, a dysfunctional relationship may well have existed between the various stakeholders, most notably between the SHA and CQC, especially regarding information sharing. We note that these observations echo problems encountered during an earlier period at Mid-Staffordshire Foundation Trust.

* The lack of an obligation for UHMB to disclose critical information to CQC (specifically, in this case, the findings of a review of maternity services conducted in 2010 by Professor Dame Pauline Fielding) which, had it been disclosed, might well have resulted in a different outcome at registration (Grant Thornton, 2013).

In their response to the Grant Thornton report, the CQC initially chose, on legal advice, to redact the names of the CQC staff involved. Following criticism and the public intervention of the Information Commissioner, the names were revealed.

2.6 The response of the Care Quality Commission to Colchester

The inspection at Colchester was carried out under the new inspection regime and, at the time of writing, appears to be regarded as a vindication of that regime, implemented under the new Chief Inspector of Hospitals.

3. The Government response to these events

3.1 A public inquiry: the Francis Report

The Terms of Reference for this Inquiry were as follows:

To examine the operation of the commissioning, supervisory and regulatory organisations and other agencies, including the culture and systems of those organisations in relation to their monitoring role at Mid Staffordshire NHS Foundation Trust between January 2005 and March 2009 and to examine why problems at the Trust were not identified sooner, and appropriate action taken. This includes, but is not limited to, examining the actions of the Department of Health, the local strategic health authority, the local primary care trusts, the Independent Regulator of NHS Foundation Trusts (Monitor), the Care Quality Commission, the Health and Safety Executive, local scrutiny and public engagement bodies and the local Coroner.

Where appropriate, to build on the evidence given to the first inquiry and its conclusions, without duplicating the investigation already carried out, and to conduct the inquiry in a manner which minimises interference with the Mid Staffordshire NHS Foundation Trust’s work in improving its service to patients.

On the day the Francis report was presented to Parliament, the Prime Minister announced his instruction to the Care Quality Commission to appoint a Chief Inspector of Hospitals. He also announced that the Medical Director of the NHS, Sir Bruce Keogh, was to review the hospitals with the highest mortality rates, and that Anne Clwyd MP had been asked to conduct a review into the way the NHS manages complaints by patients, their carers and their families. In addition, he announced that he had commissioned Professor Don Berwick, an international expert on health care safety, “to make zero harm a reality in our NHS”.

3.2 Intervention in the Care Quality Commission

Following the performance and capability review by the Department of Health (United Kingdom Department of Health, 2012), conducted concurrently with the preparation of the Francis report, the CQC commissioned advice (Walshe and Phipps, 2013) on the construction of a strategic framework to guide its programme of evaluation.

Walshe and Phipps have produced a “logic model”, based on CQC’s main statutory functions (registration, compliance, enforcement and information provision). This allows reflection and planning based on the various approaches and forms of utility related to each statutory function. The model challenges the idea that each regulatory function works through a single, self-evident mechanism.

On the basis of this model and its comparison with current practice, they identify an agenda for further research and they draw three main conclusions:

* The current regulatory model is designed to achieve a basic, “safety net” level of regulation. Inspection against a limited subset of standards leaves the rationale for decisions unclear.

* The impact of regulatory interventions varies over time and between sectors and organisations: there is no simple cause/effect relationship between intervention and success.

* Our most important recommendation is that changes in the regulatory model will need to be grounded in empirical analysis of current practice and wider regulatory knowledge … intended mechanisms should be spelt out explicitly so that they can be tested and challenged …

The “logic model” provides substantial insight into alternative views on the purposes of the various regulatory components, and the mechanisms by which they exert their effects. However, it does not provide the new Chief Inspector of Hospitals with a clear methodological guide of the sort provided by Keogh (2013).

3.3 Keogh review of 14 hospitals

The review of the 14 hospital trusts with the highest SHMR was conducted by Sir Bruce Keogh, with terms of reference to look for any sustained failings in the quality of care and treatment being provided to patients; to see whether these trusts were doing enough to rectify such failings; and to determine whether external management interventions ought to be provided or regulatory actions taken.

Keogh describes the methodology for his evaluation and recommends its adoption and improvement by the new Chief Inspector of Hospitals. In précis, it includes the following elements:

* Detailed analysis of a vast array of hard data and soft intelligence held by many different parts of the system. This helped identify key lines of enquiry for the review teams, allowing them to ask penetrating questions during their site visits and to focus in on areas of most concern.

* Planned and unannounced site visits, lasting 2-3 days, by large (around 15-20 strong) multi-disciplinary review teams.

* Review teams placed huge value on the insight they could gain from listening to staff and patients, as well as to those who represented the interests of the local population, including local clinical commissioning groups and Members of Parliament. The use of patient and staff focus groups, unconstrained by a rigid set of tick-box criteria, was probably the single most powerful aspect of the review process and ensured that a cultural assessment, not just a technical assessment, could be made.

* Risk Summit (convened in order to make judgements about the quality of care being provided and agree any necessary actions, including offers of support to the hospitals concerned): a meeting between patient representatives and key members of the review team with senior executives from the trust board, the CCG, the local Health and Wellbeing Board, local government authorities, NHS England, Monitor, CQC, the Department of Health, and representatives of the General Medical Council and the Nursing and Midwifery Council and post-graduate medical deaneries.

Keogh reported on “areas for improvement” under the following headings:

* Patient experience.

* Safety.

* Workforce.

* Clinical and operational effectiveness.

* Leadership and governance.

The findings under several of these headings reflect the shortcomings of evaluation based solely on scrutiny of summary statistical data. For example, Keogh states:

The data analysis indicated that only [one of the hospitals] was an outlier across the majority of patient experience measures. The visits to the hospitals, however, established that this was in fact a key area in which improvement was needed at most of the trusts.

Contrary to the pre-visit data, when the review teams visited the hospitals, they found frequent examples of inadequate numbers of nursing staff in some ward areas. The reported data did not provide a true picture of the numbers of staff actually working on the wards. In some instances, there were insufficient nursing establishments, while in others there were differences between the funded nursing establishments and the actual numbers of registered nurses and support staff available to provide care on a shift by shift basis.

Data-driven performance improvement in these trusts, instead of stimulating a focused and concerted effort to diagnose system failures and propose remedies, was dogged by disputes over the accuracy of coding and impeded by problems relating to:

* the complexity of the data and the difficulties this presents for professionals, patients and the public who want to understand and use it;

* the shortage of key skills in data analysis and interpretation available to trust boards and management teams; and

* the lack of consistent metrics and information with which to monitor quality on an ongoing basis.

In short, these were not learning organisations:

We did not see sufficient evidence to demonstrate that many Board and clinical leaders were effectively driving quality improvement. In a number of trusts, the capability of medical directors and/or directors of nursing was questioned by the review teams.

This review has shown the continuing challenge hospitals are facing around the use and interpretation of aggregate mortality statistics. The significant impact that coding practice can have on these statistical measures, where excess death rates can rise or fall without any change in the number of lives saved, is sometimes distracting boards from the very practical steps that can be taken to reduce genuinely avoidable deaths in our hospitals.

The review teams often witnessed information being used for justification: to confirm a particular viewpoint the trust had of a specific issue. Information was only rarely used in an enquiring manner - in order to seek out and understand the root cause of a problem area.

The need for action was signalled by the data analysis, but this only gave the teams a partial understanding of where improvements were needed. A full picture of severity and urgency could only be established during the review team visits to the hospitals.

We also found a deficit in the high level skills and sophisticated capabilities necessary at board level to draw insight from the available data and then use it to drive continuous improvement.
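To make the point about coding concrete (this gloss is ours, not Keogh’s), the standardised mortality measures at issue take the general form of a ratio of observed to expected deaths:

\[
\mathrm{SMR} \;=\; \frac{O}{E}\times 100,
\qquad
E \;=\; \sum_{i=1}^{n} \hat{p}_i ,
\]

where $O$ is the number of deaths observed among a trust’s admissions and $E$ is the number expected, obtained by summing each admitted patient’s modelled risk of death $\hat{p}_i$, estimated from coded variables such as age, diagnosis, comorbidity and admission method.

Because $E$ is built entirely from coded data, deeper (or shallower) coding of comorbidities and palliative care moves $E$, and therefore the ratio, without any change in $O$: the apparent excess death rate can rise or fall while the number of lives actually saved stays exactly the same.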

Having conducted his review, Keogh has formulated eight “ambitions” which ought to be achievable within two years:

1. We will have made demonstrable progress towards reducing avoidable deaths in our hospitals, rather than debating what mortality statistics can and can’t tell us about the quality of care hospitals are providing.

2. The boards and leadership of provider and commissioning organisations will be confidently and competently using data and other intelligence for the forensic pursuit of quality improvement. They, along with patients and the public, will have rapid access to accurate, insightful and easy to use data about quality at service line level.

3. Patients, carers and members of the public will increasingly feel like they are being treated as vital and equal partners in the design and assessment of their local NHS. They should also be confident that their feedback is being listened to and see how this is impacting on their own care and the care of others.

4. Patients and clinicians will have confidence in the quality assessments made by the Care Quality Commission, not least because they will have been active participants in inspections.

5. No hospital, however big, small or remote, will be an island unto itself. Professional, academic and managerial isolation will be a thing of the past.

6. Nurse staffing levels and skill mix will appropriately reflect the caseload and the severity of illness of the patients they are caring for and be transparently reported by trust boards.

7. Junior doctors in specialist training will not just be seen as the clinical leaders of tomorrow, but clinical leaders of today. The NHS will join the best organisations in the world by harnessing the energy and creativity of its 50,000 young doctors.

8. All NHS organisations will understand the positive impact that happy and engaged staff have on patient outcomes, including mortality rates, and will be making this a key part of their quality improvement strategy.

On the topic of patient experience, Keogh also reports:

There was a tendency in some of the hospitals to view complaints as something to be managed, focusing on the production of a carefully-worded letter responding to the patient’s concerns as the main output. The length of time to respond adequately to complaints was also too long in a number of the trusts, as was the simple lack of acknowledgement or apology where care was not provided to the appropriate standard. The review teams would much rather have seen evidence that trusts were actively seeking out and encouraging feedback.

3.4 Anne Clwyd MP: complaints report

Against the background of events at Stafford, and compounded by high-profile failures in maternity care at the Morecambe Bay hospitals, public dissatisfaction with the NHS has continued to centre on its inability to respond quickly, sympathetically and decisively when patients, relatives or carers express complaints or concerns about shortcomings in clinical or nursing care. Incensed by gross failures in the care of her dying husband at a different hospital, the MP Anne Clwyd (Clwyd and Hart, 2013) publicised her own concerns about standards in the NHS and, receiving echoes in “hundreds of letters and e-mails from hundreds of people who were appalled at such a lapse in standards of basic decency and compassion … accounts of other shocking examples of poor care and the difficulty people encountered when trying to complain”, added further political momentum.

The report by Clwyd and Hart (2013) focused on four areas for change: improving the quality of care; improving the way concerns and complaints are handled; ensuring independence in the complaints procedures; and whistleblowing.

The picture painted is one where patients or their relatives are concerned that the proper level of care is not being given, sometimes by staff who are too thinly spread, and can find no one who is disposed, capable or free to respond thoughtfully and empathetically and intervene effectively at the time things are, or seem at risk of, going wrong. Alternatively, staff in the clinical environment are aware of problems but are inadequately supported to alleviate the immediate situation. Avoidable events occur, complaints are made, procedure runs its frustratingly long course, and nothing changes for the better because the organisation has learned nothing.

The three themes in Clwyd’s terms of reference were: the capacity of NHS institutions to derive organisational learning from complaints and concerns; the operation of the NHS complaints procedure after things have gone wrong; and the need to lessen significantly the professional and psychological distance between the concerned patient or relative and the staff member before fears are realised. Ancillary issues include the public accountability of provider boards for chronic service failures, and the greater display of nous and empathy by doctors and nurses, present in sufficient numbers to be accessible.

3.5 Professor Don Berwick: safety, sensitivity to patient experience, learning organisation

Invited “to make zero harm a reality in our NHS”, Berwick (2013) takes an approach which suggests that achievement is not a matter of hitting targets but one of constantly engaging with and appreciating the situation with fundamental goals in mind, and of continually adjusting actions so that they remain compatible with promoting patients’ wellbeing and protecting them from harm.

While “Zero Harm” is a bold and worthy aspiration, the scientifically correct goal is “continual reduction”. All in the NHS should understand that safety is a continually emerging property, and that the battle for safety is never “won”; rather, it is always in progress.

The Mid Staffordshire tragedy and wider quality defects in the NHS seem traceable in part to a loss of focus by at least some leaders on both excellent patient care and continual improvement as primary aims of the NHS (or to a misinterpretation by providers of the intent of leaders). In some organisations, in the place of the prime directive, “the needs of the patient come first”, goals of (a) hitting targets and (b) reducing costs have taken centre stage … where the central focus on patients falters, signals to staff, both at the front line and in regulatory and supervisory bodies, can become contaminated … Under such conditions organisations can hit the target, but miss the point.

Berwick turns our attention to the over-zealous pursuit of performance targets as a cause of failure:

[…] regulation alone cannot solve the problems highlighted by Mid Staffordshire. Neither quality assurance nor continual improvement can be achieved through regulation based purely on technically specific standards, particularly where a blunt assertion is made that any breach in them is unacceptable.

Time and again, we see the harvest of fear in the Mid Staffordshire story, a vicious cycle of over-riding goals, misallocation of resources, distracted attention, consequent failures and hazards, reproach for goals not met, more misallocation and growing opacity as dark rooms with no data came to look safer than ones with light. A symptom of this cycle is the gaming of data and goals; if the system is unable to be better, because its people lack the capacity or capability to improve, the aim becomes above all to look better, even when truth is the casualty.

Berwick stresses the need for the NHS and its institutions to become learning organisations, informed especially by the experiences of patients and clinical staff.

Quantitative targets and financial goals should not override protection of patients from harm … The best way to reduce harm is for the NHS to embrace wholeheartedly a culture of learning.

Information on the deterioration of the quality of care at Mid-Staffordshire was abundant. It appeared in both narration (complaints from staff, carers and patients) and quantitative metrics (such as significantly high adjusted mortality rates compared with rates throughout England). Loud and urgent signals were muffled and explained away. Especially costly was the muffling of the voices of patients and carers who took the trouble to complain but whose complaints were too often ignored.

Our most important recommendations for the way forward envision the NHS as a learning organisation, fully committed to the following:

* Placing the quality of patient care, especially patient safety, above all other aims.

* Engaging, empowering, and hearing patients and carers throughout the entire system and at all times.

* Fostering whole-heartedly the growth and development of all staff, including their ability and support to improve the processes in which they work.

* Embracing transparency unequivocally and everywhere, in the service of accountability, trust, and the growth of knowledge.

4. Commentary

The editors concur with Berwick in his view that: “The current NHS regulatory system is bewildering in its complexity and prone to both overlaps of remit and gaps between different agencies. It should be simplified.”

In our commentary, we seek to provide perspectives on the whole system.

4.1 The problem situation

Figure 1 presents a model of the nested environments whose interactions brought about the events which precipitated the Mid Staffordshire Hospital public inquiry and the events and issues it uncovered.

Figure 1 The Francis report in the context of four nested conceptual environments

Figure 1 shows four such environments. Clinical practice takes place in the clinical environment, situated within the managed environment of an NHS hospital or provider institution. The arm’s length purchaser/provider or commissioner/provider relationship establishes a regulatory and performance-managed environment from which various statutory bodies issue policy and demand information to assure them that policy is being operated. As well as clinical commissioning groups, these statutory bodies include the Care Quality Commission, the foundation trust regulator (Monitor), the National Institute for Clinical Excellence, which advises on effective clinical methods, and the General Medical Council and Nursing and Midwifery Council, which regulate professional practice.

Within these environments sick patients are transformed into cured and satisfied patients but, regrettably, some suffer harm or inconvenience. Their negative experience feeds into the pool of public dissatisfaction which has fermented around Stafford and some of the UK’s other hospitals. Expressions of dissatisfaction incubate in the public and political environment, where personal, social and government responses are developed. A key role of the Francis report has been to initiate the resolution of a volatile and long-running situation in mid-Staffordshire. Public, political and institutional motives vary; there is a detectable appetite for individual recrimination, which Francis assiduously avoided, but the universal motivation is to eliminate the risk of recurrent, similar episodes.

Mid-Staffordshire reveals a clinical environment populated by a stressed and demoralised workforce, depleted by under-investment in the clinical front line. The managed environment was dominated by a financial and target-driven culture rather than one driven by clinical-professional values. The performance-managed environment imposed the administrative burden of demonstrating that the wrong priorities could be met, through a ruthless approach to producing the correct data returns for the regulator; it did not successfully impose the policy imperative of being a good hospital, rather than merely looking good enough to pass the limited and relatively superficial tests that were applied.

The Keogh approach represents a more thoroughgoing and resource-intensive approach to quality assessment. The relevant metaphor is an organisational laparotomy contrasted with a glance at the patient’s observation chart.

Involving patients and staff was the single most powerful aspect of the review process. Patients were key and equal members of review teams. Well-attended listening events at each trust provided us with a rich understanding about their experiences at the hospitals. Accessing patient insight in this way need not be complex, yet many of the trusts we reviewed did not have systematic processes for doing so, and all have actions in their action plan to improve in this area (Keogh, 2013).

The multidisciplinary nature of the review teams - involving patient and lay representatives, junior doctors, student nurses, senior clinicians and managers - was key to getting under the skin of these organisations. The review teams were not constrained by the limitations of a rigid set of tick box criteria. This allowed both cultural and technical assessments to be made, informed by listening to the views and experiences of staff, and particularly patients and members of the public (Keogh, 2013).

The dynamic for the Francis inquiry originated from the public and political environment. Indeed, Clwyd’s intervention, initiated by the politician as citizen, exemplifies the occasional unity of this environment. Concerns in Bristol, by contrast, had their origins partly in the clinical environment, and the response to those concerns from the managed environment was ineffective. The serious concerns of an anaesthetist about the performance of a senior cardiac surgeon were dismissed by the medical-managerial hierarchy, in ways that current and supposedly widespread internal regulatory practice at the interface between the clinical and managed environments, with its well-publicised formal procedures for reporting and investigating performance concerns, is intended to prevent.

When these concerns were ignored in Bristol, the momentum of public outrage gathered in the same way as it later did in mid-Staffordshire. What has “changed utterly” this time is that Keogh’s new regulatory practice can confront the managed environment on more than one front, by giving more effective voice to the clinical and public environments simultaneously. In Keogh’s words:

[…] the public have now become not just informed participants in the process, but active assessors and regulators of the NHS. This represents a turning point for our health service from which there is no return (Keogh, 2013).

4.2 Who are the regulators?

There is a complex relationship between the regulators and the regulated (Figure 2).

Figure 2 The Regulators and the Regulated

Thus the CQC and Monitor are regulators with responsibilities for provider organisations. However, the CQC does not have a remit for regulating either commissioners or individual professionals. Commissioners do not generally see themselves as regulators, and yet they apply quality standards to commissioning contracts and evaluation criteria to review performance; they are far better placed than the formal regulators to detect issues early.

Providers and the professionals responsible for care are even closer to the situation and can detect problems earlier still. Those who are both regulated and de facto or potential regulators are sensitive to the culture in which they operate. In a culture of co-operation and collaboration, they will feel comfortable raising concerns. In a more adversarial culture they will be more reluctant, allowing problems to deepen into crises.

However, the larger challenge is the fragmentation of both the regulators and the regulated. With services and regulation fragmented, there is a major danger that regulation will leave gaps in care and that the quality of care experienced by patients will be severely compromised.

Therefore the editors propose an alternative conceptual model (Figure 3), with the patient at the centre and with each successive organisational level of care communicating effectively in both the provision and the evaluation of care. In this view, the CQC becomes the inspector of the organisational units, with Monitor responsible for the coherence of service provision, communication and regulation.

Figure 3 Conceptual model for patient-centred collaborative regulation

5. Conclusion

We doubt whether the dynamic for improvement must necessarily originate from the regulatory and performance-managed environment established in response to the Bristol crisis. Following events in mid-Staffordshire, there has been an apparently irrevocable shift towards a dynamic originating in the public and political environment. Yet the impetus for improvement in quality and safety ought to originate in the clinical environment. Clinicians, generally, need to regain the confidence of patients and the public through a renewed focus on clinical care and the elements necessary for its success in the case of each individual patient. Informed by the experiences of the recipients of their care, they need the managed environment to support the proactive development of genuine, clinically oriented goals as well as to respond to external directives. With the many layers of oversight and communication which have been brought into being, it will be difficult for clinicians to emerge as leaders and seize the initiative for beneficial change unless structures can be established, and adequately supported, to ensure and co-ordinate effective participation from the other layers of our conceptual model.

Nick Harrop and Alan Gillies

References

Berwick, D. (2013), A Promise to Learn – A Commitment to Act: Improving the Safety of Patients in England, Department of Health, London

Care Quality Commission (2013), Inspection Report: Colchester General Hospital, November 2013, Care Quality Commission, London

Clwyd, A. and Hart, T. (2013), A Review of the NHS Hospitals Complaints System: Putting Patients Back in the Picture, Department of Health, London

Colin-Thomé, D. (2009), Mid Staffordshire NHS Foundation Trust: A Review of Lessons Learnt for Commissioners and Performance Managers Following the Healthcare Commission Investigation (29 April 2009)

Francis, R. (2013), Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry (Executive Summary), February 2013, The Stationery Office, London, ISBN: 9780102981476

Grant Thornton (2013), The Care Quality Commission re: Project Ambrose dated 14 June 2013, Grant Thornton LLP

Healthcare Commission (2010), Independent Inquiry into Care Provided by Mid Staffordshire NHS Foundation Trust, January 2005-March 2009, Vol. 1, HC375-1, 24 February 2010, p. 23, para. 75

Keogh, B. (2013), Review into the Quality of Care and Treatment Provided by 14 Hospital Trusts in England: Overview Report, NHS England

Smith, R. (1998), "All changed, changed utterly", BMJ, Vol. 316 No. 7149, pp. 1917–1918

United Kingdom Department of Health (2012), Care Quality Commission Performance and Capability Review, Department of Health, London

Walshe, K. and Phipps, D. (2013), Developing a Strategic Framework to Guide the Care Quality Commission’s Programme of Evaluation (Cm/01/13/04), The Stationery Office, London
