AI in social work: opportunity or risk?

With artificial intelligence an increasing presence in social work, the sector is wrestling with the challenge of reaping the benefits while managing the potential costs


By Jo Stephenson and Mithran Samuel

Like every other public service or economic sector, social work is getting to grips with the impact of artificial intelligence (AI) on how it operates.

In particular, the use of generative AI – tools that create content based on data they have been trained on, in response to prompts – is reshaping the way practitioners work with children, families and adults.

A poll by Community Care in October 2024 found that more than one in five (21%) of the 713 social workers who took part had used AI tools such as Microsoft’s Copilot or Beam’s Magic Notes for daily social work tasks.

The proportion is likely to have increased since then, as more and more councils implement AI systems in their adults’ and children’s services.

It is a shift that brings with it great opportunity for the sector.

Learn more about AI and social work

To explore what we know and equip practitioners, educators, leaders and policymakers to manage the opportunities and risks that AI presents, Community Care has gathered a group of experts for a half-day online event taking place on 9 July 2025.

Sessions will cover social workers’ experience of artificial intelligence, the role of AI in adults’ and children’s services, its use in social work education and how practitioners should manage the ethical dilemmas the technology presents. Speakers include Dr Tarsem Singh Cooner, who is featured in this article.

Book before 28 May 2025 to take advantage of our early bird rate of £95 plus VAT.

According to the Local Government Association (LGA), AI “has the potential to bring numerous benefits to local government”.

“This may include improved efficiency, cost savings, enhanced decision-making, and better resident services.”

Time-saving potential of technology

As part of an AI hub, the LGA has compiled a bank of case studies, many of which relate to adults’ or children’s social care.

Several of these tools carry out social work-related tasks, including transcribing and summarising assessments and other meetings with people who use services, generating action points from them and retrieving information from case management systems.

These tools offer practitioners the potential to spend less time on administration and more on the direct work that many see as the key way in which they change lives.

This was something that prime minister Keir Starmer himself highlighted in a speech in January on the government’s plans to accelerate the rollout of AI technologies across public services.

Keir Starmer meets Ealing council staff and Beam chief executive Alex Stephany at an event on AI at Downing Street (photo: Simon Dawson / No 10 Downing Street)

However, at the same time, there are serious concerns about what the rollout of AI – in particular the generative form – will mean for a profession as human-centred and relationship-based as social work.

Data and privacy concerns

One is the risk to the data, privacy and confidentiality of the people social workers work with. Such risks are particularly pronounced when using generic tools not designed for the sector, says Amanda Taylor-Beswick, professor of digital and social sciences at the University of Cumbria.

“At the moment, we’re using technologies from all sorts of places and can’t be sure what’s happening with the data we create through our interactions on those platforms,” she says.

She highlights the need to build and procure fit-for-purpose products designed to handle sensitive information, “so the data you’re generating is not then filtered out to other places or sold on for different purposes”.

The risks to the privacy of the people social workers support were also highlighted in practice guidance on AI produced earlier this year by the British Association of Social Workers (BASW).

BASW advised practitioners to avoid entering sensitive personal information into generic tools without the explicit and informed consent of the person concerned. Employers should conduct data protection assessments when introducing AI products into their services, it added.

Ethical dilemmas

Issues around data are among a number of ethical dilemmas practitioners are having to wrestle with regarding the introduction of AI, says Dr Tarsem Singh Cooner, associate professor of social work at the University of Birmingham.

Together with colleague Dr Caroline Webb, he is researching the use of AI in social work, including the extent to which it is used and social workers’ understanding of the ethical implications.

The study includes an online survey of qualified social workers who have used at least one AI tool in their practice.

Singh Cooner believes AI tools have the potential to save social workers time, freeing them up to do more direct work with families.

“I suppose the question is, if these tools can save you time and give you the ability to spend more time with families and children, then why wouldn’t you use them?” he says.

“There are benefits to it as long as you use tools with a critical eye,” he says. “Say you need to write an assessment including a family history and a plan for intervention. You tell ChatGPT or Copilot you need a seven-point report with these headings. Off it goes and within 30 seconds you get this beautiful assessment.


“If you’re lazy, you will copy and paste it, but if you’re not, then you’ll go through the report and adjust it to get rid of any biases and inaccuracies.”

AI bias

To use such tools effectively, he says, social workers need to understand concepts such as “AI bias”, the origin of data used to create algorithms and “hallucinations”, where programmes present misleading or inaccurate information as fact.

“If it’s a Sikh family and the decision-making is based primarily on a North American or European dataset, what are the chances there’s going to be bias in the outcome of that decision-making? Probably quite high,” he says.

In its practice guidance, BASW similarly warned that AI tools, particularly generic ones, were prone to replicating racist or sexist assumptions that were inherent in the datasets used to train them.

Other issues raised included the risk that, in generating action points from assessments or home visits, AI tools did not take account of key factors such as the tone of voice or body language of the person being assessed.

As well as advising social workers to continue to take notes and pay attention to non-verbal cues during assessments, BASW urged employers to provide training and clear guidance for staff, while continuously evaluating the AI tools they deployed.

It also called on the government to ‘urgently’ publish legislative and policy frameworks to govern the use of AI in public services, including social work.

Social Work England research into AI

Singh Cooner and Webb’s ongoing study is among a number of pieces of research being carried out into AI and social work.

Social Work England has commissioned a literature review into the issue along with a separate piece of research drawing on the views of social workers, employers and education providers.


The purpose of the regulator’s research is to help it understand more about:

  • the areas of Social Work England’s professional standards which may be affected by social workers’ use of AI;
  • the types of AI being used across social care in England and their application in social work practice, including the risks of bias and discrimination;
  • if social workers feel confident and prepared to use AI ethically and appropriately, in line with Social Work England’s professional standards, and how employers are supporting them to do this;
  • how social work education providers are preparing students for AI in their future work;
  • data protection and confidentiality when using AI with people using services and the public.

Studies of the impact of particular tools are thin on the ground, reflecting how new generative AI is to social work.

Impact of case recording tool

In February, Beam published a report it had commissioned into Magic Notes, its tool that records meetings and generates a transcript, summary and suggested actions based on council-agreed prompts.

This was based on analysis of usage by, and feedback from, 91 staff in three councils in England collected during a trial of the tool last year, along with in-depth interviews with 11 of the social care professionals.

Several interviewees said the tool had significantly reduced the time they spent writing up notes and assessments, allowing for better engagement in meetings with people needing support.

However, concerns were raised over inaccuracies and assumptions in the summaries and transcripts, requiring practitioners to make sometimes time-consuming edits.

Aside from such evaluations, local authorities have given positive reports of tools they have used.

For example, North Yorkshire council has hailed the impact of Policy Buddy, a tool it developed with artificial intelligence firm Leading AI, in helping social workers save time and improve decision making.

The tool, trained on key children’s social care legislation and guidance and the council’s own policies and procedures, enables practitioners to ask questions and retrieve fully sourced information to support their decision making.

Lessons from research into machine learning

One area of AI whose use in social work has been researched is machine learning – under which systems analyse data to identify patterns and make predictions – to help identify children and families in need.


A 2020 review exploring the use of machine learning in children’s services found there was no evidence it was effective.

Models built by What Works for Children’s Social Care (now Foundations) and trialled over 18 months in four local authority areas failed to identify, on average, four out of every five children at risk.

Meanwhile, where the models flagged a child as being at risk, they were wrong six out of 10 times.

But the pace of innovation means the accuracy and effectiveness of predictive analytics may have improved since.

Government guidance on implementing data systems

In April 2024, the government published guidance on the development and use of data analytic tools – including machine learning – in children’s social care.

This recommends that only councils with advanced data-gathering capacity and technical expertise should attempt to develop and use predictive analytics tools.

“These tools are higher risk, especially when predictions relate to individual children and families,” says the guidance.

The guidance suggests children’s services teams might use predictive analytics to forecast overall demand for services, such as the number of care placements needed, as well as to target help at particular children.

It gives the example of a tool that predicts which individual children are unlikely to be ready for school at age five.

“This can enable scarce resources to be targeted at the families who need it the most,” says the document.

The development of any such tool requires robust scrutiny and the involvement and support of all those likely to be affected, including children and families, stresses the guidance.

Munro’s view on predictive tools

However, Eileen Munro, emeritus professor of social work at the London School of Economics and Political Science, is unconvinced about the technology.

Munro, who led a government-commissioned review into child protection in 2010-11, says factors that make life difficult and increase risks to children’s safety, such as poverty, poor housing, lack of healthcare and education, are well-known.


“We have all that information without doing anything clever,” she says. “But of those people living in very stressful circumstances, predicting which ones will become problematic is amazingly difficult to do with any accuracy.”

The data that such a tool would make use of is “imbued with prejudice, distortion and inaccuracy”, she adds.

While local authorities can and do use data to identify patterns and predict changes in demand for services, this is done with knowledge of local context.

The idea of using an algorithm to make decisions about the fate of an individual family or child is “inhuman”, says Munro.

“This is a time of massive change, some of which will be brilliant, some of which will be highly dangerous and some of which will just be grubby and immoral – and the profession needs to work out which is which,” she concludes.
