First practice guidance for AI in social work warns of bias and data privacy risks

BASW calls for employers to provide clear guidance and training on the use of generative artificial intelligence and its implications for social work practice


This article is part of our new ‘Future of Social Work’ series, where we’ll be reporting on innovative practice approaches and technology in social work. Get in contact with us to flag up anything that you think ticks either of those boxes at anastasia.koutsounia@markallengroup.com

The first practice guidance in England on the use of generative artificial intelligence (AI) in social work has urged caution, highlighting risks around bias and data privacy.

The document, produced by the British Association for Social Workers (BASW), outlined various ethical and practical challenges surrounding AI use, including biased outputs, breaches of data privacy and the generation of misleading or incorrect information.

It emphasised that generative AI – where artificial intelligence is used to create new content based on prompts it has received – should be used to create capacity for relationship-based practice, rather than to justify increased caseloads or redundancies.

BASW called on regulators and employers to issue clear guidance for practitioners and for the government to ‘urgently’ publish legislative and policy frameworks to govern the use of AI in public services.

Alongside the guide, it released a statement setting out its position on the use of generative AI in social work.

Guidance designed to help social workers new to AI

Its intervention follows an increasing number of local authorities adopting or testing AI tools to help social workers save time on recording case notes, with some systems also suggesting actions for practitioners to take following assessments and visits.

Keir Starmer has touted the technology’s potential to save social workers time, as part of a government push to roll out the use of AI in the public sector, while Social Work England has launched research into the impact of AI on the profession.

BASW’s head of policy and research, Luke Geoghegan, said the guide aimed to support practitioners who were being introduced to AI but had little knowledge of it or its implications for their practice.

“Social workers will be coming into contact with this stuff even if their employer hasn’t introduced it because they’ll be Googling resources or questions and there’s no uniform or standard approach,” he added. “Different local authorities are at very different stages [of using it].”

‘Hallucinations’, bias, and lack of context


The guidance warned that AI tools, particularly generic ones, were prone to replicating racist or sexist assumptions that were inherent in the datasets used to train them.

Other potential issues included spelling and factual errors, difficulty understanding context, and ‘hallucinations’ – the presentation of inaccurate or misleading information as fact.

As a safeguard for accuracy, BASW recommended that practitioners continue to take notes during assessments and pay attention to non-verbal cues.

Geoghegan advised that the degree of professional scrutiny had to be “proportionate to the task”.

“The more complicated the recommendation [generated] from the discussion, the more problematic that becomes,” he added. “AI doesn’t work in context; it doesn’t take into account the wider picture of what could or should be done. It misses tone of voice, humour, body language.

“As social workers, we need to critically assess all the information coming to us and make a judgment. I think the danger is that in the hustle and bustle, when time is short, it’s too easy to accept the answer given rather than question it.”

Privacy was another concern, given the risk of personal data entered into an AI tool becoming part of the datasets on which it was trained. This was particularly relevant when using generic models, such as ChatGPT, said the association.

BASW advised practitioners to avoid entering sensitive personal information into generic tools without the explicit and informed consent of the person concerned. Data protection assessments should also be undertaken before introducing AI products within services, it said.

Employers should provide ‘clear guidance’

BASW also urged employers to provide training and clear guidance for staff and to continuously evaluate the AI tools they deployed, with a senior practice lead appointed to ensure the ethical use of AI within the service.

“Certainly the regulators are starting on this, but there needs to be clear guidance about what is an appropriate use [of AI],” said Geoghegan, who urged employers to provide appropriate support to staff when introducing AI tools.

However, BASW warned that accountability ultimately sat with the social worker using the tool – with liability increasing if AI were used through a personal device without an employer’s permission.

‘Lack of evidence to back AI’s effectiveness’ 

While AI had the potential to “enhance productivity”, the professional body said there was currently limited evidence demonstrating its effectiveness or appropriateness in social work.

“Increased use of generative AI potentially creates risks for the protection of human rights and the promotion of wellbeing. It also has the potential to lead to greater injustices and greater inequality,” it added.

“These are issues of concern for a profession grounded in the protection and promotion of human rights and committed to tackling social injustice and inequalities.”

BASW also cautioned against overestimating AI tools’ ability to save time, as checking and correcting the generated text may prove more time-consuming than initially realised.

Investing in inappropriate products

The growing market for AI products has seen companies venture into the social work sector, with products such as Beam’s Magic Notes and Microsoft’s Copilot being tested by various local authorities.

However, BASW warned that a lack of understanding of AI may lead to “investment in inappropriate or unsuitable products”, as companies may “downplay” their product’s ethical and practical challenges when marketing them.

Geoghegan said: “My concern would be that a supplier comes along and says, ‘Here’s a great piece of software; it’ll make life so much easier for you,’ and then people buy and adopt that software without understanding the risks.

“The public sector is littered with IT projects that didn’t quite work out as was promised. As with any procurement exercise, you need to think very carefully about the benefits and the risks of adopting a particular project.”

‘The jury is still out on AI’

While some BASW members had expressed a “degree of anxiety” about the rising use of AI, others saw it as a ‘silver bullet’, said Geoghegan. His advice was for the sector to aim for a happy medium.

“We often think that, whatever the new technology is, it’s going to fix things. My personal view is what usually happens is that we come in somewhere between those two extremes [of success and failure].”

He said “the jury was still out” on the effectiveness of AI within the sector, although the potential was clear.

“This is a new wave of technology and, like with any new form, we’ve got to critically assess whether it works and is appropriate [to use], realise the inherent problems and use the bits that work to help us move things along.”



4 Responses to First practice guidance for AI in social work warns of bias and data privacy risks

  1. Roisin, April 14, 2025 at 4:15 pm

    How about employing more staff, paying a fair wage for 2025 and reducing caseloads? AI could then be a luxury tool and social workers wouldn’t need shortcuts to manage the administrative elements of the job while trying to meet the needs of the LA. Sorry, I meant the needs of the child.

  2. Tilly Baker, April 15, 2025 at 5:33 pm

    And it is a progressive threat to the profession as a whole; taking the human out of human services only leads to unemployment.

    • MaxP, April 18, 2025 at 2:38 pm

      Unfortunately, adult social work at least is almost certainly going to become a victim of AI. General adult social workers completing assessments for POC will be the first to be made redundant. Those requiring support to remain living in the community will access POC via online/telephone in similar ways to how people access welfare benefits. Younger social workers would be advised to look at retraining… in 10 years’ time there will not be a need for social workers in adult services.

  3. Jelte, April 17, 2025 at 10:44 am

    I think social work is very susceptible to some of the major downsides of generative AI: biases, generalisations, lack of context-awareness and privacy risks. I do think there are opportunities, but they should be designed with social workers and the target groups, not top-down (though support from the top should be there). If it’s about automating administrative tasks: should you automate them, or should you reconsider whether the administrative tasks are necessary for the job?
