A South West county has joined a growing group of local authorities that are experimenting with the use of predictive algorithms in children’s social work.
Papers published in May revealed that Somerset council is trialling machine learning with the Behavioural Insights Team (BIT), the organisation established under the coalition government to understand and influence citizens’ actions using behavioural psychology.
“We have worked with [BIT] who have developed data algorithms that may help children’s social workers identify potential recurring cases at the point of assessment,” Somerset’s report said.
“Initial work indicates the algorithm to have a success rate of 95% and we are now looking at how this learning can be deployed in a tool to help social workers get it right and help children and young people with their lives,” it added.
Data concerns
Academics, local authority directors and others have raised concerns over the ethics of putting machine learning to work within children’s social care, in particular around how data is shared and used in future.
Algorithms’ expansion into children’s services was highlighted in Guardian reports during 2018, which named a number of councils – including Bristol, Thurrock, Hackney and Newham – as having developed or begun using such systems.
Of that list, Bristol has also been involved with BIT, commonly known as the ‘nudge unit’. The organisation was originally part of the Cabinet Office but was part-privatised in 2014 and is now co-owned by the charity Nesta, a partner in the children’s social care What Works Centre.
The What Works Centre, whose executive director Michael Sanders used to work for BIT, is working separately with a group of councils to evaluate whether predictive analytics are useful – and acceptable – tools for social workers.
The study is backed by Anne Longfield, the children’s commissioner for England, who has said she is keen to see whether better use of data can help hard-pressed services spot earlier which children might need help.
In an interview with Community Care last year, Hackney council explained how it was using predictive tools to identify families in the borough where it felt there could be future need.
‘Saving time and reducing re-referrals’
In Somerset, a council spokesperson told Community Care the aim of its work with BIT was to support children’s social workers and ensure key support reaches families and children at an earlier stage.
“This has the potential to save on time and reduce re-referrals,” the spokesperson said, adding that the system had not yet been used by social workers but had used “real data” and could go live this year.
“Work is ongoing on exploring how the technology can be incorporated into social workers’ recording process to enable the team to use the data effectively,” the spokesperson added.
Somerset has drawn national attention recently thanks to a Panorama documentary focusing on its adult social care staff’s efforts to deliver services against a backdrop of austerity-stretched finances.
Last year a cabinet report warned that children’s services in the county risked overspending by £15 million, potentially rendering the council financially unviable, though it has since taken measures to improve the situation. Earlier in 2019, the council, which like many others faces staffing problems in children’s services, opted to suspend attempts to recruit to around 20 hard-to-fill social work posts.
Somerset’s spokesperson said the authority was “always open to exploring new ways of making the incredibly difficult job children’s social workers do easier by utilising new technologies”.
‘Predicting which referrals are likely to escalate’
James Lawrence, BIT’s head of quantitative research, said the organisation was working to investigate the feasibility of using machine learning within children’s services.
Specifically, BIT is looking at “whether it’s possible to predict which referrals are likely to result in an escalation to child in need, child protection plan, or looked-after child [status],” Lawrence said.
Lawrence said the research, which included an aborted scheme at Westminster as well as the partnerships with Somerset, Bristol and Essex, was at an early stage and had not been used in a “live” environment.
“Early results suggest machine learning is able to spot certain patterns in social workers’ case notes which are indicative that a case is likely to escalate,” Lawrence said.
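BIT has not published details of its model, so the following is purely an illustrative sketch of the general approach Lawrence describes – scoring free-text case notes against patterns learned from past referrals – assuming a labelled historical dataset and using scikit-learn. Every field name and data point below is hypothetical, not BIT’s actual method.

```python
# Illustrative sketch only: a simple text classifier in the spirit of
# "spotting patterns in case notes" -- NOT BIT's actual model.
# Requires scikit-learn; all data below is invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical case notes, labelled 1 if the referral later
# escalated (child in need / protection plan / looked-after status), else 0.
notes = [
    "missed appointments, previous referral closed with no further action",
    "family engaging well with early help, school reports improvement",
    "repeat police callouts to the address, concerns about supervision",
    "one-off housing query, no safeguarding concerns identified",
]
escalated = [1, 0, 1, 0]

# TF-IDF turns free text into word-frequency features; logistic regression
# then scores how strongly those features are associated with escalation.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, escalated)

# Score a new referral: the output is a probability, not a decision --
# in practice it could only ever inform a social worker's judgement.
new_note = ["second referral in six months, missed health appointments"]
print(model.predict_proba(new_note)[0][1])
```

Any real deployment would also require evaluation on held-out data, calibration checks and scrutiny for bias, none of which is shown here.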
He did not comment further on whether other councils were working with BIT, which he said respected its partners’ wishes around confidentiality.
Meanwhile, a spokesperson for the What Works Centre said its own machine-learning study partners would be announced over the next few weeks, once agreements had been finalised.
If you work at a council using machine learning in children’s social work and would like to speak confidentially to a Community Care journalist about its impact on practice, please contact Alex Turner via email or on Twitter.
We need to exercise great caution over phrases like “Initial work indicates the algorithm to have a success rate of 95%”. No evidence is provided to back up that figure, only a link to an agenda document that provides the same figure, but with no evidence.
What constitutes success? How is success measured? How was the figure of 95% arrived at?
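To see why those questions matter, consider a toy calculation (all numbers invented): if recurrence is rare, a model that simply predicts “no recurrence” for every case can post a headline figure of 95% while identifying no at-risk children at all.

```python
# Toy illustration (invented numbers): why "95% success" needs unpacking.
# Suppose 1,000 assessed cases, of which 50 (5%) actually recur/escalate.
total, actual_positives = 1000, 50

# A useless model that predicts "no recurrence" for every single case:
true_negatives = total - actual_positives   # 950 correct "no" calls
accuracy = true_negatives / total           # 0.95 -- a 95% "success rate"
recall = 0 / actual_positives               # yet it finds 0 of 50 real cases

print(f"accuracy: {accuracy:.0%}, recall: {recall:.0%}")
# accuracy: 95%, recall: 0%
```

Without the base rate and figures like precision and recall, a bare “95%” tells us very little.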
I am not against tools that improve the usefulness of our assessments, but I am concerned that a focus on the tool itself can undermine the very thing it is designed to do.
I would like to introduce the world to my own personal law, Allenby’s Law – There is no policy, practice, or procedure so inherently effective that a large organisation can’t render it completely ineffective by the way it implements it. Let’s hope that doesn’t apply in this case.
Spot on
I used to joke about this and now it’s happening. Who gets blamed when the computer says a child is safe and the child dies?
I would also urge some caution here, given the inherent difficulty of quantifying statistical significance. I would be interested to know how statistical significance is calculated and where risks are identified. This would need to be a transparent process, not buried somewhere in the back rooms of the BIT office behind patents that restrict its exposure to scrutiny.