Taking over the world or scratching at its surface?


Journal of Children's Services

ISSN: 1746-6660

Article publication date: 9 December 2011


Citation

Little, M. and Axford, N. (2011), "Taking over the world or scratching at its surface?", Journal of Children's Services, Vol. 6 No. 4. https://doi.org/10.1108/jcs.2011.55206daa.001

Publisher: Emerald Group Publishing Limited

Copyright © 2011, Emerald Group Publishing Limited



Article Type: Editorial From: Journal of Children’s Services, Volume 6, Issue 4

We give over the Editorial of this edition of the journal to a reflection on the article by Sarah Stewart-Brown and her colleagues. Their focus is partly on randomised controlled trials (RCTs) and partly on the consequences of what they see as the privileged status of this method for children’s services.

The theme has to an extent become a hardy annual that matters to a small number of academics – including us – and an even smaller number of policy makers and practitioners. But increasingly, the wider relevance of the debate for policy and practice is being recognised.

At the beginning of their article Stewart-Brown and her colleagues refer to the team that undertook the research for the recently published Allen review on early intervention (Allen, 2011). That team was the Social Research Unit at Dartington, of which we are part, and throughout 2012 we shall be publishing new standards of evidence and lists of programmes independently vetted against those standards by leading scientists around the world, all of which arise out of work we did for the Allen review and, prior to his review for Government, for the Greater London Authority.

The article says much of value but it was the comment that “cash-strapped local and health authorities, who are now cutting funding for, or refusing to invest in, programmes that do not appear on kite-marked or recommended lists” (p. 229) that particularly caught our eye.

Since the changes we have been advocating in children’s services are significant, and have to a small extent been taken up by Graham Allen MP in his review and, prior to that, by philanthropy, some governments in the US and EU and a handful of local authorities in England, we felt it timely to open up the debate.

The Stewart-Brown article will begin what we hope will be a longer and fuller conversation. In the next-but-one edition of the journal (7.2, Summer 2012) we will publish responses from two researchers who take a different stance on some of the points made, together with a rejoinder from the authors of the original article. In 2012, we shall also hold an open seminar on the issue seeking some consensus among the competing perspectives that exist in the scientific and policy communities.

Part of that exchange will be about science. Hopefully, few people will disagree with the proposition that a badly conducted RCT is at best worthless and at worst misleading, much the same as would be the case for badly conducted epidemiological studies, ethnographical research or indeed any method.

Sarah Stewart-Brown and colleagues list some, though by no means all, of the things that can go wrong in an RCT. These errors must be avoided, and it is for this reason that the standards of evidence prepared by the Social Research Unit with support from an international group of scientists specify a range of criteria that must be met before any evaluation method, be it an RCT, a quasi-experimental design or a regression discontinuity design, can be counted as giving a reliable indication of the impact of an intervention. (Incidentally, an RCT is not a necessary condition in the Standards prepared by the Social Research Unit.)

Hopefully few reasonable scientists will disagree with the assertion in the article in this edition that RCTs should not have primacy over other methods. The essential ethic is to find a method that matches the questions being asked and the hypotheses being tested. If the question is about the impact of an intervention, be it preventative or early or late in the development of an impairment to child health or development, then RCTs, done well, tend to have much utility since they are an efficient way of reducing selection effects and knowing whether the effect is the product of the intervention and not, for instance, a normal trend in child development.

But if the question is about why an intervention is working, then qualitative research has a strong role to play. For a question about implementation quality an RCT would arguably be a waste of time, and understanding how programmes proven to work in a trial perform at scale also requires different methods.

It is interesting to us that while our standards of evidence have over 30 criteria spread across four dimensions, it is the part that talks about RCTs that attracts the strongest antibody reaction. RCTs have a role to play but they are not the focal point.

In our work to produce the daily online newspaper Prevention Action we have had the privilege of interviewing many of the developers of the evidence-based programmes that are selected using the Social Research Unit standards. Apart from the fact that most of these developers are themselves practitioners, what stands out is the multi-method approach they have taken to developing their interventions, including observational studies to work out why an impairment to health and development might occur, epidemiology to test the relationships between risk and protective factors and child outcomes, and implementation evaluations to examine whether people will use the emerging intervention as intended. The RCTs – and the best programmes generally have several – come later, addressing the question of impact and, in a well-designed trial, the mediating factors.

But from our perspective, science is just one half of the equation. One can look at this issue from an ethical or social policy perspective. For example, do the generally impoverished children and families with whom child protection agencies intervene deserve to know whether the experience will leave them better or worse off? Is this a human right? In some cases possibly not: one might argue that in the context of extreme risk the state has a right to take action even when that action has negative consequences. But what about routine processes such as family group conferences, for which there is one RCT and one quasi-experimental study? These evaluations produce far-from-conclusive results but suggest no impact on child maltreatment (Berzin, 2006) and even a possible increase in re-referrals to child protection for abuse (Sundell and Vinnerljung, 2004). If it were our children or families being asked to join a family group conference we would be demanding much more evidence of efficacy.

Or, taking an economic perspective, with the estimated £67 billion of state expenditure on children in the UK being cut back as a result of the extended economic downturn, should we not be applying a little more rigour to find out where cuts could be made without any impact on the health and development of children, or to examine where scaling up proven models might produce efficiencies that can help protect vital parts of the children’s services budget? RCTs have an important if not exclusive part to play here.

The discussion of money brings us to the question of the scope of the changes we have proposed via the Allen review and other initiatives in the UK, elsewhere in Europe and in the USA.

Stewart-Brown and her colleagues say that cash-strapped local authorities may use the lists of interventions kite-marked by standards such as the ones we developed to cut funding. If the standards were the sole arbiter of investment by children’s services, 99.9 per cent of the £67 billion could be returned to a grateful Chancellor.

Change is threatening, and those who feel threatened exaggerate its significance. Critics of the use of a higher standard of evidence give the impression that it might take over the world when by our reckoning we are only scratching at its surface. In five years’ time, supposing the proposals we and others like Allen have made really make a mark, we can only dream of 1 per cent or perhaps 2 per cent of expenditure on children and families going to evidence-based programmes.

A reasonable proposition

Our aspirations are modest. We would like to see more evidence and a higher standard of evidence underpinning publicly funded work on behalf of children.

Modest aspirations fit with the uncertainty about the underlying proposition. We feel sure that in trial conditions, and to a lesser extent in real world conditions, evidence-based programmes can have a benefit for children’s health and development, and that it is reasonable to predict that the benefit can be translated into economic returns for central and local government.

What is being proposed via the Allen review is that a portfolio of evidence-based programmes delivered at scale – that is, reaching every eligible child (meaning every child in the case of universal prevention programmes) in a single local authority – will produce a positive contagion. This will lead to exponential effects on children’s health and development and the kinds of outputs that matter to public systems, such as reduced referrals to child protection, foster care, youth justice, special educational needs and child and adolescent mental health services.

This seems, given the evidence base, to be a reasonable proposition. But we do not know if it will work in practice. It would be unreasonable to wager a big chunk of public expenditure on the idea. But 1 per cent of local authority spend seems like a good bet, given the potential returns.

We need to test this idea and find out. RCTs can play a role in helping to decide how to allocate the 1 per cent. Ironically, they will likely have no use in discovering whether the proposition is proven since the method fits awkwardly with the evaluation of scaled interventions.

We hope readers will read the article by Sarah Stewart-Brown and her colleagues carefully, and will join in the debate we hope to sponsor in 2012.

Michael Little, Nick Axford

References

Allen, G. (2011), Early Intervention: The Next Steps – An Independent Report to Her Majesty’s Government, Cabinet Office, London

Berzin, S.C. (2006), “Using sibling data to understand the impact of family group decision-making on child welfare outcomes”, Children and Youth Services Review, Vol. 28, pp. 1449–58

Sundell, K. and Vinnerljung, B. (2004), “Outcomes of family group conferencing in Sweden: a three-year follow-up”, Child Abuse &amp; Neglect, Vol. 28, pp. 267–87
