The use of artificial intelligence (AI) among social workers appears to have increased over the past year, a poll has found.
This follows a rising number of councils piloting or deploying AI tools in their services, such as Beam’s Magic Notes or Microsoft’s Copilot, to assist practitioners with administrative tasks such as writing up case notes from assessments or visits.
Of 861 practitioners who responded to a recent Community Care poll, 30% said they were using generic or dedicated AI tools in their daily work, up from 21% of the 713 respondents to a similar poll carried out in September 2024.
Learn more about AI and social work
To explore what we know and equip practitioners, educators, leaders and policymakers to manage the opportunities and risks that AI presents, Community Care has gathered a group of experts for a half-day online event taking place on 9 July 2025.
Sessions will cover social workers’ experience of artificial intelligence, the role of AI in adults’ and children’s services, its use in social work education and how practitioners should manage the ethical dilemmas the technology presents.
‘AI makes us more human’

Keir Starmer meets Ealing council staff and Beam chief executive Alex Stephany at an event on AI at Downing Street (photo: Simon Dawson / No 10 Downing Street)
The apparent rise comes amid growing political endorsement for the use of AI in public services. Speaking at the 2025 London Tech Week, prime minister Keir Starmer declared that AI and tech “make us more human”, referencing a conversation with a social worker who said artificial intelligence had significantly reduced her paperwork and caseload.
“[She said] this was helping her transform her work, because she could concentrate on the human element of it,” said Starmer.
Sector concerns about privacy and bias
However, social work researchers and the British Association of Social Workers (BASW) have raised concerns around data protection, bias and AI tools’ tendency to “hallucinate”, presenting misleading or inaccurate information as fact.
These concerns were particularly acute when practitioners used generic tools – those not designed for use within the sector – for note-taking and transcription.
Amanda Taylor-Beswick, professor of digital and social sciences at the University of Cumbria, has highlighted the need for fit-for-purpose products designed to handle sensitive information.
In recently published practice guidance on AI, BASW advised social workers to take their own notes and pay attention to non-verbal cues when using AI tools, and called on employers to provide training and clear guidance for their staff.
‘AI should act as another tool in the kit bag’
In the comments of a related article, many practitioners acknowledged the benefits of AI for administrative tasks, but some warned it must not become a replacement for human judgment.
Shelley Bowyer said AI should be used only to let staff “catch their breath” and spend more time with families.
“AI should only be considered one of the many tools in a social worker’s toolkit,” she said.
“It’s there to help them gather the evidence [and] free them up from some of the intensive note taking. [That way they can] actively listen to those they are supporting, make eye contact, observe their surroundings and take in all extra details.”
She also cautioned that there should be policies in place to ensure that professional judgment remained at the forefront of social workers’ decision making.
“There should be a clear quality assurance framework to support staff,” she added. “It also needs to be recognised that it is not there to replace staff, but allow them to develop their social work skills.”
She was echoed by Keirron Goffe, a social worker with over 20 years’ experience in the sector, half of it spent working in technology.
‘Always sense-check what the tool returns’
He warned of AI’s vulnerability to errors, citing the example of a tool changing someone’s quote from “the family is now a risk” to “the family is not a risk”.
“That single letter changes the entire meaning and could have serious safeguarding implications,” he added. “Such errors can arise from simple things like a pause in speaking, your accent, an internet glitch or a simple mishearing.
“Just as you wouldn’t take everything a stranger says at face value, you should always sense-check what these tools return. Without careful review, critical decisions could be based on inaccurate information.”
He suggested “rigorous checking and training” to ensure responsible use of AI tools and “maintain trust in this sensitive sector”.
‘AI will be used to bully and oppress workers’
Charlie expressed fears that AI may be used to monitor staff, comparing its introduction to that of a case management system in the authority he worked for at the time, which, despite being sold as a way of easing practitioners’ admin burden, was used to monitor their work.
“I hated Monday and Tuesday, when the data was refreshing and we used to get at least 10 snotty emails telling us we had failed the whole world,” he said.
“[AI won’t be] used to help workers but to further bully and oppress them. From my experience, no one above the workers cares about the people. To the average manager and above, people are statistics and numbers [that] need to be ‘got off the desktop’.”
‘What we really need is adequate funding’
Another social worker criticised the promotion of AI as a solution to practitioners’ workloads while underlying issues, such as bureaucracy and underfunding, remained unaddressed.
“[They say] AI will save us paperwork time. Never mind the 1,000 forms they have us fill in and then the two panels we have to go through to get a package of care authorised,” they said.
“I am sick of every single ‘tool’ imaginable being touted as the answer to our problems when what we really need is adequate staffing and funding.”