National standards for machine learning in social care needed to protect against misuse, urges review

What Works study questions 'moral justifiability' of predictive risk modelling and calls for rethink of how data science is put to use by social workers

Digital image of brain and binary data flow signifying machine learning (credit: Elnur / Adobe Stock)

National standards on the responsible use of machine learning in children’s social care should be introduced to protect against misuse, a study has recommended.

The ethics review, conducted by What Works for Children’s Social Care and academic partners, warned of “legitimate hesitation with regard to the moral justifiability” of predictive risk modelling, through which some councils have used data on factors such as poverty to identify children at risk.

Increasing concerns have been raised in recent years around the use of such models, including their potential to discriminate against poorer families, depersonalise relationships with service users and generate errors, particularly in a context of rising demand and shrinking resources.

The review, which drew on existing literature and the perspectives of service users and sector experts, said more research was needed into whether and how the current children’s social care environment could support the ethical use of machine learning tools that directly affected individuals.

Key recommendations

  • The responsible design and use of machine learning models in children’s social care should be mandated via national standards.
  • Open communication between social workers and data scientists across local authorities should be encouraged to improve the national knowledge base on machine learning in social care.
  • Local authorities that develop machine learning applications should engage with citizens to gain consent for their use.
  • The use of data science should be refocused away from individual risk and towards exploring the “deeper social-structural problems” driving rising social care demand, as well as promoting better outcomes for families and strengths-based approaches.

But by “redirecting the energies” of data science towards analysing the root causes behind the need for social work interventions, and exploring how positive as well as negative measures can be integrated into analytics, machine learning could potentially become better aligned with social work values, the study report said.

People likely to be affected by machine-learning systems should be consensually involved in their development, it added.

‘Take a step back’

Speaking to Community Care ahead of the review’s launch yesterday, Michael Sanders, the executive director at What Works, said it would be “wrong to pass judgment” on children’s services already experimenting with machine learning.

But he added that “local authorities and social workers want to be doing their jobs in the most ethical way possible”.

“Hopefully this [research] helps them, when they’re existing in a messy and complicated environment, to take a step back and consider – are we doing this thing we want to do as ethically as we can, and under some circumstances, is it worth doing it, because it’s not ethical enough?” Sanders said.

“Something that’s unethical can’t be good social work.”

What is machine learning?

The debate around the interface between technology and children’s social care has seen some mixing and matching of terms such as ‘machine learning’, ‘artificial intelligence’ or simply ‘algorithms’.

While an algorithm is, at its simplest, an automated set of instructions, machine learning involves applications independently ‘learning’ from new data they are exposed to, without being fully preprogrammed by their designers. Most machine learning models within children’s social care are classed by the What Works review as involving ‘supervised learning’, with “numerous examples [being] used to train an algorithm to map input variables onto desired outcomes”.

Based on these examples, the ML model ‘learns’ to identify patterns that link inputs to outputs. This has led to disquiet when the approach is combined with predictive analytics, which identify possible future outcomes on the basis of inputs and estimate the likelihood of such outcomes.
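To make that distinction concrete, the sketch below shows what a supervised-learning model of this general kind can look like in code. It is a minimal illustration on entirely synthetic data: the choice of Python and scikit-learn, the feature names and the figures are assumptions made for the example, not details drawn from the review or from any council’s system.

```python
# A minimal, hypothetical sketch of supervised learning and predictive
# analytics on synthetic data. Feature names and figures are illustrative
# assumptions only, not details from the review or any council system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical input variables, e.g. a count of prior referrals and a
# standardised deprivation score.
X = np.column_stack([
    rng.poisson(2, n),
    rng.normal(0, 1, n),
])
# A binary outcome label, generated purely so the example runs end to end.
y = (X[:, 0] + X[:, 1] + rng.normal(0, 1, n) > 3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 'Supervised learning': the model is trained on examples mapping input
# variables onto known outcomes.
model = LogisticRegression().fit(X_train, y_train)

# 'Predictive analytics': the fitted model estimates the likelihood of the
# outcome for new, unseen cases.
likelihoods = model.predict_proba(X_test)[:, 1]
print(likelihoods[:5])
```

The ethical questions the review raises concern less this mechanical step than the data such examples are drawn from and how the resulting likelihood estimates are then used.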

David Leslie, an ethics expert at the Alan Turing Institute, who led the study with the University of Oxford’s Lisa Holmes, said the review was “aiming to set the bar of best practice”, but that there remained “a stretch and a distance that needs to be covered, in terms of the various components of readiness and resource allocation”.

“I think that shouldn’t be prohibitive of ways in which data science can be used as a tool to improve the social care environment,” added Leslie, who last year told Community Care that he hoped children and families’ experience, and a social justice perspective, could be placed at the centre of machine learning applications in the future.

But Claudia Megele, chair of the Principal Children & Families Social Worker (PCFSW) Network, who also spoke at the review launch, stressed that the time was not yet right for the introduction of such systems, even where the technical capability exists.

“Machine learning can result in encoding the current challenge, issues or inadequacies that are the result of austerity and its impact on practice,” she said. “This obviously would be detrimental to services and people who access those services.”

Megele added that machine learning was inherently less suited to social work than to medicine’s diagnostic model. “The processes of identifying and managing risks in social work are much more challenging,” she said. “As a result decisions are more complex and can have significant and lasting impact on people’s lives; therefore, they require professional judgement.”

‘Algorithmic reproduction of social injustice’ risk

The new review sought to question whether machine learning should be used in the children’s social care sector at all – and concluded that common ethical values between the two fields, including the need to be objective, impartial and use evidence-based judgments, offered a theoretical way forward.

But participants in workshops that formed part of the study raised concerns about the real-world use of predictive analytics being fuelled by the need to find efficient ways of working against a backdrop of cuts.

The report concluded that systems that devalued person-centred approaches to children’s social care were “not ethically permissible”.

Researchers also raised concerns that information fed into machine learning systems would inevitably reinforce correlations between deprivation, membership of disadvantaged ethnic groups and involvement with children’s services.

“In this connection, the major problem that arises regarding the use of predictive analytics in children’s social care is that the features that are indicative of social disadvantage and deprivation, and that are simultaneously linked to socioeconomic and racial discrimination, are also highly predictive of the adverse outcomes used to measure child maltreatment and neglect,” the report said.

It said approaches that inferred a causal link between disadvantage and risk and attributed this to individuals “should be scrutinised accordingly and handled with extreme interpretive caution and care”.

‘Exceptionally demanding’ task

Besides big-picture considerations as to whether machine learning can be justifiable in children’s social care, the review also discussed the practical challenges around responsibly setting up systems.

The report said it was “exceptionally demanding” to ensure good data quality and model design, effective testing and validation, and sufficient staff training to implement systems appropriately, all of which were needed to make them ethically justifiable.

Some of the most crucial concerns surrounded the quality of data held by local authorities, including how representative, relevant, recent and accurate it was.

Data used for predictive risk models were based on the past provision of public services, in which certain groups were under- or over-represented, creating a sampling bias. The report said there was “no clear-cut technical solution to create optimally representative datasets”.
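As a simplified, hypothetical illustration of one way such a bias can arise (the scenario and figures below are assumptions for the example, not findings from the report), consider training records drawn disproportionately from families already known to services: the adverse outcome is over-represented in the data, and the model’s estimated likelihoods end up well above the true population rate.

```python
# Hypothetical sketch of sampling bias on synthetic data; not drawn from the
# report. Records over-represent cases with the adverse outcome, so the
# trained model over-estimates risk when applied to the wider population.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def simulate(n):
    deprivation = rng.normal(0, 1, n)          # notional single feature
    p = 1 / (1 + np.exp(-(deprivation - 2)))   # true, fairly low risk
    outcome = (rng.random(n) < p).astype(int)
    return deprivation.reshape(-1, 1), outcome

X_pop, y_pop = simulate(50_000)

# Biased historical sample: cases with the outcome are far more likely to
# have been recorded than cases without it.
keep = rng.random(50_000) < np.where(y_pop == 1, 0.9, 0.1)
model = LogisticRegression().fit(X_pop[keep], y_pop[keep])

print("true population rate:      ", y_pop.mean().round(3))
print("mean predicted likelihood: ", model.predict_proba(X_pop)[:, 1].mean().round(3))
```

In practice the sampling mechanism is rarely known this precisely, which is part of why the report sees no clear-cut technical fix.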

The validity of data was also time-sensitive due to the impact of contextual changes, including to laws and procedures, as well as shifts in thresholds due to demand, resource availability or inspection results.

Contested information

Researchers also noted the potential impacts of human error, bias and the ways in which electronic case management systems can “oversimplify situations”.

Meanwhile workshop participants highlighted issues around contested information being included in records, and around the influence of incentives on data collection, including payment-by-results initiatives such as the Troubled Families programme, from which some councils’ machine learning work in children’s social care has been derived.

Megele said it was “well known” that the data held by most local authorities could not provide a reliable and appropriate pool for machine learning.

“Even if we had the required data quality there are many other questions and challenges that need to be addressed,” she said. “For example, who and how will we select the ‘right’ data, who will drive the design and implementation of such systems and what is the level of transparency and scrutiny involved? All these questions pose significant difficulties in achieving a machine learning system that is ‘fair’, ethical and effective for children’s social care.”

 
