Be careful what you measure in child protection because it influences practice

The current system of measuring success is flawed, argues Dr Rick Hood, senior lecturer in social work at Kingston University and St George’s

By Dr Rick Hood

Child protection, a complex and politically sensitive area of practice, has long had an issue with performance.

Its public reputation has been scarred by hostile media coverage of deaths from child abuse, while Ofsted inspections recently found almost three-quarters of local authority children’s social care services to be inadequate or requiring improvement.


There is understandable pressure for such a contentious public service to be transparent and accountable to the people who use and pay for it.

But which measures can reliably gauge the quality of a service, particularly where the protection of children is concerned?

Part of the problem is that performance is an ambiguous concept that combines ideas of functionality, comparability and compliance.

Contradictory results

This means that different measures of performance sometimes produce contradictory results. International statistics on deaths from child abuse, for example, show that the UK system functions relatively well, whereas Ofsted’s latest annual report into children’s social care describes a sector struggling to cope with the pressure on frontline services.

Child protection workers are in contact with some of the most vulnerable children in our society. Yet there are very few ways of tracking outcomes for these children after they receive a service.

Instead, what tends to be measured is the quality of the process: how long an assessment takes; how long a plan lasts for; how soon after a strategy meeting a conference takes place.

Measuring failure

Process indicators should ideally have a clear connection to the benefits of intervention. Yet the only outcome indicators that are collected routinely – such as rates of re-referrals within 12 months of case closure – measure failure rather than success.
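To make that indicator concrete, here is a minimal sketch of how a 12-month re-referral rate might be computed from case records. The record structure and field names are hypothetical, invented for illustration; they do not reflect any authority's actual data model.

```python
from datetime import date

# Hypothetical case records: (case_id, closure_date, re_referral_date or None).
cases = [
    ("A1", date(2015, 1, 10), date(2015, 6, 2)),  # re-referred within 12 months
    ("A2", date(2015, 3, 5), None),               # no re-referral
    ("A3", date(2015, 2, 20), date(2016, 9, 1)),  # re-referred after 12 months
]

def re_referred_within_12_months(closure, re_referral):
    """True if a re-referral occurred within 365 days of case closure."""
    return re_referral is not None and (re_referral - closure).days <= 365

flagged = sum(
    re_referred_within_12_months(closure, re_ref)
    for _, closure, re_ref in cases
)
rate = flagged / len(cases)
print(f"12-month re-referral rate: {rate:.0%}")  # 1 of 3 cases, i.e. 33%
```

Note what such an indicator counts and what it omits: a closed case with no re-referral is treated as a success, even though nothing in the data confirms the child is actually doing well.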

Even then, there is no evidence that adherence to targets for work completion improves outcomes as measured by re-referrals.

Indeed, the influential Munro Review of child protection took the view that such targets had a distorting effect on practice and recommended that they should be scrapped.

Nonetheless, timescales remain a prominent feature of performance management in child protection. In part, this reflects a longstanding focus on internal audit and quality assurance, reinforced in recent years by electronic workflow systems that allow this type of performance data to be easily captured.

Lack of empirical evidence

Another contributing factor may be the lack of evidence-based process measures, which would seek to align patterns of intervention with empirical evidence of effectiveness in certain contexts. This issue points to another key characteristic of child protection, namely its association with the social work profession, which has in the past struggled to establish its ‘scientific’ credentials.

Performance indicators are neither neutral nor objective; they also shape and influence the services they purport to measure.

There is an obvious pitfall here. Research shows that measures only work properly when they derive from the purpose of a service. Once services begin to be designed around their measures, the result is waste and inefficiency.

In this respect, it is worth noting that statutory agencies routinely subject families to multiple assessments and transfers between different services and workers.

Purpose and design of services

So while individual pieces of work might be done in a timely and professional manner, from the perspective of families, the overall service may be experienced as fragmented, time-consuming and repetitive.

Deciding what to measure in child protection involves thinking deeply about the purpose and design of services. Simplistic and polemical debates about those services may suit the aims of politicians and the media, but are ultimately to the detriment of the children and families who depend on them.

Dr Rick Hood is a senior lecturer in social work at Kingston University and St George’s, University of London. He will be speaking in a debate about inspection and the measurement of services at Community Care Live, Birmingham on the 10th and 11th of May.


One Response to Be careful what you measure in child protection because it influences practice

  1. Peter Worthington March 12, 2016 at 11:40 am #

    If there is an issue with child protection performance, then it should be how best we can evidence, maintain and celebrate good practice, and improve upon areas of poor performance.

    Some measures may appear contradictory if not properly understood, and you need to know which complementary measures to consider together. Rarely should any measure be considered in isolation as an indicator of success or failure. An effective basket of measures should inform and guide. Unfortunately, sensational headlines tend to avoid the more balanced perspective that should be taken.

    Timeliness is important, as are quality, volume and outcomes. It may be true that there are not many ways of directly tracking outcomes for children who have been subject to child protection interventions after they receive a service, perhaps because one indication of success is that we are no longer directly involved with those children. So perhaps we should also look for indications of positive outcomes in the wider population.

    I agree that measures will only work properly when they derive from the purpose of a service. Establishing the right measures and using them to bring about positive outcomes is the essence of good performance management and an essential component of effective child protection.