The shortfalls of IT in children’s services

As the government sets up a taskforce to examine the work of practitioners, Professor Sue White and colleagues share their study on the IT challenge faced by social workers in child protection

In recent years, practice at the “front door” of statutory children’s services has been significantly modernised. The government’s Framework for the Assessment of Children in Need and their Families clearly defined the initial response as a distinct stage in the assessment process and introduced tight timescales and a standardised Initial Assessment Record.

Since then the Integrated Children’s System has been introduced so that decisions and actions are digitally managed according to national standards. In our study we have examined how the human and technological parts of this process interact in practice.

Systemic weakness

The project draws on a systems perspective, in which errors in human systems have less to do with the perverse actions of individual workers and more to do with “latent conditions” for error that arise from general systemic weaknesses. From this perspective, approaches to error management that focus on individual breaches of procedure or human error will inevitably be limited in delivering safer worker practices. Elements of any system – for example, user, machine, form, procedure – combine to create the conditions for error.

It is estimated that each statutory duty and assessment team receives, on average, some 300 referrals every month. While not all contacts convert to referrals, contacts must still be filtered to identify cases that require an initial assessment.

Most duty and assessment teams participating in our study consistently stated that they received a high volume of contacts and referrals, and that the difficulties of responding to these referrals were worsened by the tight timescales for initial assessment. Busy teams managed their workflow with well-established “general deflection strategies”, including strategic deferment (sending the referral back to the referrer to ask for more information) and signposting (deflecting the case to a more “appropriate” agency).

The risks are clear. As the senior practitioner in the following extract indicates, in stretched teams initial filtering is not always based on clear thresholds. Rather, cases are inevitably considered in the context of those already in the system:

“We get e-mails from managers saying not to make any more allocations… the team leader has been NFA-ing [no further action] a lot and trying to reduce contacts because we are told there are too many in the system… the team leader said to deal with all referrals as contacts unless they need allocating…”.

Decision-making is clearly fallible when time is short and judgements are made on the basis of surface data such as the age of the child or the source of the referral. In some busy teams, we noted the routine categorisation of anonymous referrals as malicious (referrals from neighbours and family members were also often treated as suspect).

We were also told that children aged 13, 14 and 15 were routinely “NFA-ed”, that is, an outcome of no further action was recorded, on the basis that these children and young people “must have lived with these concerns for a long time and be quite resilient”. These shortcuts are an indication of teams struggling to respond to referrals.

Electronic systems that aimed to improve the quality of practice often added to the difficulties of meeting timescales and targets. For example, where an on-site customer care officer had been replaced with a centralised customer service centre designed to standardise the initial response, this was consistently reported as compounding problems of workload as more contacts were logged as referrals.

Before the centralisation of customer services, less relevant contacts would have been dealt with effectively over the phone by a customer care officer liaising with the duty social worker.

Practice mismatch

New e-pathways offered little flexibility for workers and often organised practice in ways that conflicted with preferred ways of working. One team manager spoke to us about the mismatch between locally devised, effective methods of organising incoming work and the requirements imposed by the new system. Incoming work that would previously have been divided between the manager and her co-managers now flows into a single electronic box, causing confusion about who is doing what.

It was not just the initial filtering of contacts and referrals that was a problem. When it came to an initial assessment, the seven-day timescale for its completion was widely reported as too short, particularly given problems like school holidays, worker sickness and families not being at home. As this team manager describes, this leads to superficial and often “risky” assessments:

“There’s no doubt about it – it’s very fast, you are in and out, because there’s an expectation that within the seven days you have it outcomed and have written up the [initial assessment]… my issue really with the seven days is that it doesn’t allow time to complete assessments… the majority are done after one contact with the family… you are making a judgement on a snapshot of that family and there is a very real element of risk in that”.

While we might berate the worker who does a superficial assessment or closes a case on the basis of scant information, practitioners are well aware of the sanctions for councils that fail to comply with targets.

“We do push ourselves and work in excess of the hours in the week and everybody does… but if you don’t get access to the house, or the parents refuse for you to see the child, then you exceed the seven days and there’s nowhere [the system] actually asks why. There is no ‘free text’ box so you can type ‘we missed this because we did several home visits and got no access’ – that’s not recordable. It doesn’t inform the statistics anywhere and that would follow through to say ‘Authority Z didn’t complete an assessment in time’” (team manager).

At every stage of the initial assessment process, workers described tensions between the performance elements of the system and the imperative to safeguard children and support families. Most practitioners welcomed the general principle of electronic recording, but said the Initial Assessment Record was overly long and that its standardised questions and sub-headings were poorly adapted to individual cases. They often seemed irrelevant to the presenting issues, so workers routinely missed out large sections.

We also observed safer locally improvised methods for meeting timescales that generally amounted to holding a case open for review, but logging the initial assessment as complete on the system so as to meet the target. In cases where seven days was inadequate to establish confidence about the child’s welfare, this review space enabled further information to be gathered. However, such “workarounds”, even when they are driven by a desire to protect good practice, can by their very nature only survive while they remain undetected by the inspection agencies.

Design faults

It is clear that there are many design faults at the front line of children’s services that undermine the work of motivated professionals. Lack of flexibility in the system appears to have curtailed professional discretion, such that attempts to work around the system can sometimes be defensive as well as innovative. Whether these short-cuts take the form of early categorisations based on incomplete information, or the selective completion of the Initial Assessment Record, all are attempts to cope with a system presenting competing demands that are difficult to reconcile.

These findings challenge the huge investment in systems of performance management and IT, but we are not arguing for a wholesale abandonment of new technology. Rather, the implications for practice are that the design of any system needs to be based on a thorough understanding of the needs of staff and their working practices. Failure to effectively involve team managers and social workers in the development of systems creates the latent conditions for error.

We believe that new systems and technologies can be developed which both assist practitioners and achieve desired organisational goals, but this cannot be done without a properly informed understanding of everyday practice. Pilot studies of the assessment framework showed unmistakable signs of practitioner disquiet, relating among other things to additional workload, excessive bureaucracy and the usability of forms. Further problems have been identified in more recent studies of the ICS. It is regrettable that such early warning signals were seen as implementation issues rather than fundamental problems of systems design.

Professor Sue White and Dr Karen Broadhurst, Lancaster University, and Professor David Wastell, Nottingham University



