Using algorithms in children’s social care: experts call for better understanding of risks and benefits

Calls to explore concerns around bias, improve understanding of terms and involve families in the debate, as borough drops pilot of profiling system to identify at-risk children

Image of data (credit: Pablo Lagarto / Adobe Stock)

In a country where removing children from their parents is perhaps the most severe intervention the state can make in relation to its citizens, it’s no wonder that involving computerised decision-making in the process causes disquiet.

Over recent years there have been regular stories in the national media raising alarms over resource-starved councils plugging citizens’ data into systems – including within children’s services – that seek to predict futures, and advise professionals what to do next.

Just a few weeks ago, one of the schemes that has attracted most controversy – Hackney council’s Early Help Profiling System (EHPS), commissioned from private provider Xantura – was dropped after it did not “realise the expected benefits”.

The grant-funded pilot project, profiled last year by Community Care, was intended to help social workers identify children most at risk of harm. Hackney did not respond to an interview request for this piece but, according to a council statement, problems with data quality meant further investment could not be justified.

But elsewhere, though exact numbers are uncertain, data-driven experiments continue (see box). They now include a project by What Works for Children’s Social Care, which is both bench-testing predictive analytics models and conducting a broader review of ethics around the use of such technology by social workers.

With the latter initiative due to report early in 2020, we checked in with experts from various sides of the debate as to where we are, what their fears are and what needs to happen next.

Despite their different perspectives, there was near-unanimous agreement that more understanding – around terminology and use cases, as well as issues of consent and co-production with families – is needed within the sector, if algorithms are to become an acceptable and beneficial part of the environment.

‘Mission creep’

Around the country, councils have put algorithms to use in a variety of ways within children’s services – with many of the strongest concerns centring on their use in relation to individual children and families’ situations.

One issue, says University of Oxford academic Lisa Holmes, who is co-leading the What Works ethics review, is that a number of councils’ initiatives have originated from risk-focused work such as the Troubled Families programme.

This started in 2012 and incentivised councils, with a payment-by-results approach, to identify families in their area with multiple problems and “turn them around”, with the intention of preventing costlier interventions further down the line. Its leader, Louise Casey, described Troubled Families that year as “not some cuddly social workers’ programme to wrap everybody in cotton wool”.

‘We are trying to go upstream’

Bristol council’s use of data, drawn from a range of agencies and fed into an ‘analytics hub’, to flag individuals and to inform resource allocation has attracted considerable attention and been featured in national media stories and academic reports. The council has encouraged this, arguing that the more the spotlight is on its practices, the less citizens will feel fearful of them.

The analytics hub was developed in-house and is used both by the council and police. It builds profiles of individuals based on events that have happened to them and to others – such as being absent from school, going missing or being a victim of crime – which then populate lists highlighting risks, for instance of being criminally or sexually exploited.

Should a young person appear near the top of such a list, any lead professional on the system would receive one email suggesting they review their support plan. Staff in Bristol children’s services’ front door and early help teams also make regular real-time use of the system in order to assess and reshape multi-agency responses.

“[Professionals’ involvement is often] borne out of crisis, which causes people to look at young people in a holistic way,” says Gary Davies, the head of early intervention and targeted services for children and families. “We are trying to go upstream and see if we can work things out earlier, before social issues become so entrenched you cannot do much about them.” He adds that the system is not designed to make decisions but to refresh “corporate memory” and provoke discussions that result in humans coming up with solutions.

The hub also conducts strategic analysis, for instance at school or neighbourhood level. “From analysing information, we have just injected £800,000 of resources into south Bristol, as we were able to show demand across a multitude of levels had increased,” Davies says. “It means more social workers and family support workers.”
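The event-weighted scoring described above can be sketched in a few lines. This is purely illustrative – the event types, weights and function names are invented for this article, not drawn from Bristol’s actual system:

```python
from collections import defaultdict

# Hypothetical weights -- invented for illustration, not Bristol's model.
EVENT_WEIGHTS = {
    "absent_from_school": 2,
    "went_missing": 5,
    "victim_of_crime": 3,
}

def rank_by_risk(events):
    """events: list of (person_id, event_type) records from partner agencies.

    Returns individuals ordered by cumulative score -- a list for
    professionals to review, not an automated decision.
    """
    scores = defaultdict(int)
    for person_id, event_type in events:
        scores[person_id] += EVENT_WEIGHTS.get(event_type, 0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

events = [
    ("child_a", "went_missing"),
    ("child_a", "absent_from_school"),
    ("child_b", "victim_of_crime"),
]
print(rank_by_risk(events))  # child_a (score 7) ranks above child_b (score 3)
```

As Davies stresses, the output of a system like this is a prompt for human discussion, not a verdict; the ranking only reflects whichever events, and weights, its designers chose to record.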

“For me it feels like lots of work has been taken forward from a risk perspective and less so around thinking about, [whether there are] ways we can better understand our data using predictive analytics at an aggregate [rather than individual] level, to understand what services best meet the outcomes and needs of children and families,” says Holmes, director of the university’s Rees Centre, which focuses on children’s social care research.

“There has been a mission creep – by the very nature of how Troubled Families was set up, it lent itself to some of those predictive analytics to be able to identify families and do that work.”

Holmes adds that, as studies such as the 2018 Care Crisis Review have highlighted, children’s social care is multi-faceted, with “families operating with a range of risk and protective factors”. This means that delivering a meaningful picture capturing the nuances of people’s circumstances requires huge, complex datasets.

Even setting aside ethical concerns – which she reiterates are “absolutely fundamental” – Holmes says there are basic questions around the quality of local authorities’ data that need to be confidently answered before models are built.

“It’s that old adage about rubbish in, rubbish out,” she says.

‘Institutionalising risk aversion’

As part of the What Works ethics review, a roundtable of sector experts with conflicting opinions around the use of data analytics in social work was convened earlier this year to discuss the issues.

One of the critical voices who attended, Family Rights Group chief executive Cathy Ashley, says she has particular concerns about applications that analyse case note text and assist social workers’ decisions on whether to close or step down files.

“We are all influenced by our individual and professional experiences and by society, including potential racial and class biases, and so on,” she says.

“The ideal is that you get the algorithms and they strip out the prejudices – but case files can reflect much of the inherent subjectivity of those filling them. You are in danger of reinforcing, of the machine itself picking up on, those.”

An even more worrying factor, in Ashley’s view, is the risk-averse culture common across the children’s safeguarding ecosystem.

“[Say] you as a social worker had a judgment that X child should stay with their mother, who is suffering with domestic abuse, and your predictive analytics bring up an amber warning,” Ashley says. “In the climate we are in, you are going to go with the least risky option, because you’ll be worried that, if something goes wrong, it will be on you.

“You have, therefore, effectively institutionalised risk aversion into the system,” Ashley says.

‘Sceptical of the benefits’

The type of predictive analytics model discussed by Ashley is the focus of the other strand of the What Works examination of the impact of algorithms within children’s services. The organisation has recruited six councils – which it declines to name, but confirms are a diverse mix – to try to benchmark whether such predictive analytics are actually useful.

In his previous life, What Works executive director Michael Sanders worked at former government unit the Behavioural Insights Team (BIT). BIT worked with a council to use analysis of language to predict which cases flagged for no further action would return within three months and result in either a child protection plan or a child being taken into care – research it continues to explore with several local authorities, including Somerset.
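A crude illustration of this kind of text-based prediction – with made-up keywords and a made-up threshold, and none of BIT’s actual methodology – might look like:

```python
import re

# Invented keyword list -- illustrative only, not a real safeguarding model.
RISK_TERMS = {"missing", "neglect", "violence"}

def risk_score(case_note):
    """Count how many risk keywords appear in a case note."""
    words = set(re.findall(r"[a-z]+", case_note.lower()))
    return len(words & RISK_TERMS)

def predict_return(case_note, threshold=2):
    """Crudely flag closed cases that might return within three months."""
    return risk_score(case_note) >= threshold

print(predict_return("Report of neglect; child went missing twice"))  # True
print(predict_return("Routine visit, all well"))                      # False
```

Real systems use far richer language models, but the sketch makes Ashley’s point below concrete: whatever patterns are in the case notes – including any biased wording – are exactly what the model scores.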

But Sanders says he “neither loves nor hates machine learning” and has become “sceptical of the benefits”.

“The first part of [commercial firms’ argument] is, ‘We can predict the future better than a social worker’, or better than traditional analytical techniques, using machine learning,” Sanders says. “It’s not super clear that is true, and even if it is, it’s not a binary statement – how much better?

“There needs to be an open, transparent debate about this, which means having someone who does research to find those answers – what are those numbers? – publish that, and start a conversation,” he continues.

Sanders says he is “not surprised” at the apparent failure of Hackney’s pilot scheme. “Even if machine learning is to be effective, we are in its infancy, and lots of times it will not work and we should be open and honest about that.”

He adds that building his own team’s algorithms has been a time-consuming process, mostly taking place within individual councils’ premises and involving bespoke designs for each.

“[You can’t take] a model you run in a large rural county where almost everyone is white, and try to apply it in an inner-London borough, where there is great diversity in ethnicity and economic characteristics and issues may be very different,” Sanders says. He adds that the idea you could apply one to the other in an off-the-peg solution, which he has heard some people advance, is “maddeningly idiotic from my perspective”.

Sinister terminology

While the What Works model trials will not produce an outcome until an unspecified date during 2020, Sanders says it is high time for more dialogue around the wider role of data analytics in children’s services.

“Even if [the technology] works really well, that doesn’t mean we should do it,” he says.

Language barriers

While the intersection of predictive analytics and children’s social work has gained plenty of attention in the sector over the past two years, Claudia Megele, the national chair of the Principal Children and Families Social Worker (PCFSW) network, notes that there is “conflicting data” about who is doing what.

“This may be partly due to misunderstanding or lack of distinction between algorithms and machine learning,” says Megele, who, through the PCFSW network, is currently researching social workers’ ‘digital professionalism’.

At a basic level, an algorithm is simply an automated instruction, which may comprise a series of ‘if-then’ statements or mathematical equations. Machine learning – a subset of artificial intelligence – is built from algorithms, but involves applications being able to independently ‘learn’ from new data they are exposed to.
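The distinction can be made concrete with a toy sketch (invented numbers, not a real safeguarding model). In the rule-based version a human fixes the threshold; in the minimal “learning” version the threshold is derived from labelled historical data:

```python
# Rule-based algorithm: the threshold is hand-written by a human.
def flag_fixed(absences):
    return absences > 10

# Minimal "learning" version: the threshold is derived from labelled
# historical data instead of being hand-written.
def fit_threshold(history):
    """history: list of (absences, needed_support) pairs."""
    flagged = [absences for absences, needed in history if needed]
    return min(flagged) if flagged else float("inf")

# Invented training data: past absence counts and whether support was needed.
history = [(3, False), (8, False), (12, True), (20, True)]
threshold = fit_threshold(history)  # 12, learned from the data

def flag_learned(absences):
    return absences >= threshold

print(flag_fixed(11), flag_learned(11))  # True False
```

The same input can be flagged or not depending on which approach is used – and the learned rule is only as good as the historical data it was fitted to.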

“Discussions about application of AI and technology in practice are often split between those who are enthusiastic and those who are opposed,” says Megele. “There are several reasons for this, ranging from unfamiliarity with how the technology works to its fast-changing nature and extensive impact on practice and on people’s lives.

“We need to go beyond the for and against arguments and engage in an evidence-informed, critical and constructive debate to examine the specific application – be it machine learning or rule-based algorithm – and its implications and associated risks and benefits.”

A key starting point for discussion, some argue, needs to be around nailing down the terminology (see box), some of which – such as ‘machine learning’ or ‘artificial intelligence’ – automatically takes on a sinister note in many people’s ears.

“In its most basic sense, ‘artificial intelligence’ just means that you have certain technologies that are serving surrogate cognitive functions, standing in for little bits of human reasoning, especially evidence-based reasoning,” says David Leslie, an ethics expert at the Alan Turing Institute, who is partnering with Holmes on the What Works ethics review.

“That is fine, it’s the common way we use it,” continues Leslie. “But you get into hot water when we move from there to more anthropomorphic, Hollywood versions of what AI is.” For many people, that is their “bare-bones understanding”.

Ashley agrees on the need to find common terms of reference. Despite her misgivings around how some local authorities are using information, she notes that many respected studies within children’s social care have stemmed from large datasets that pull together complex information from multiple sources.

“For me the area of potential reflection that has its place is where you’re looking at big data – could be census data, could be data linkage, though there are [still ethics debates to be had],” she says. “At the other end you’ve got individual predictive analytics, which are determinist and I think damaging and dangerous. Part of the problem is this discussion has tended to mix all that up.”

Widening the scope

The other crucial factor most people mention is the need to broaden the participants in debates. Ashley says she came away from the roundtable concerned that not only commercial tech firms, but some local authorities, appeared to have given no more than lip service to ethical issues around consent and how data is used.

Calum Webb, an academic at the University of Sheffield, tells Community Care about a workshop he recently ran that brought together people including frontline social workers and people with lived experience of statutory services to talk through the issues.

“People at commissioning level and tech companies are like, we don’t really want to ask because they will say, no we shouldn’t be doing this,” he says. “Actually getting them in the room, they had some interesting views.

“People with experience of social work processes [asked], what if instead of trying to identify deficiencies, we tried to identify challenges families have faced and strengths associated with that,” he adds.

Leslie also believes there can be a positive future for data science within children’s services, and expresses hope that the What Works ethics review can lead to expanded levels of participation, and a widening of scope, in considering possible use cases. He says it is important to remember the context of the last decade’s policymaking, when considering the ways in which digital data has been harnessed so far.

“In social care the human component, of human interaction, of taking care of vulnerable people, using social injustice as a starting point, caring for others as a priority – all of these things put children and families at the centre, and so too should any machine learning application,” he says.

“When you consider the legacy of austerity, it was not putting humans in the centre – it’s more about efficiency – and that is a pressure point we need to pay attention to,” he adds. “Do this right, and the human, and the care, have to remain in pole position.”
