Star wars

Performance ratings for social services
departments are set to be more transparent than ever thanks to Alan
Milburn’s proposals for star-ratings, and could see departments
being pitted against each other in the battle for resources.
Frances Rickford reports.

Remember dunces? Teachers made them stand in
the corner with a tall hat marked with a D for getting their sums
or spellings wrong – at least, according to the picture books I
used to read. Now we don’t do that to children. Humiliating or
punishing people because their performance doesn’t reach our
expectations has gone out of fashion. It’s thought to be
demoralising and demotivating – counter-productive, in fact. But
doing it to organisations – particularly public sector
organisations – is all the rage. Not only do league tables of
schools, education authorities, hospitals and now social services
departments reveal to the world who is top and bottom of the class,
but the government is now proposing to heap more rewards on the
glorified and more penalties on the shamed.

A new star ratings system will award zero,
one, two or three stars to social services departments depending on
a formula bringing together their performance indicator scores with
data from Best Value, joint reviews and Social Services
Inspectorate reports. Social services departments which score
highly will have extra freedoms and extra money – their share of a
£50m “performance fund” – and those deemed poor performers
face a battery of possible interventions including the indignity of
being taken over by a neighbouring authority. Not surprisingly,
social services managers are frantically asking themselves, in the
words of the gospel song, “Who will wear the starry crown? Oh Lord,
show me the way.”

Thirty departments got a foretaste last autumn
at the social services conference when Alan Milburn jumped the star
ratings gun by naming and praising or shaming departments on the
basis of performance indicators alone. This drew protests from the
Association of Directors of Social Services who pointed out that
the Department of Health has itself said that “the indicators can
only paint part of the picture and must be considered as part of a
broader set of performance information about social services” and
that “indicators only indicate; assessing the performance of a
council is complicated…”1

The rationale for performance assessment,
according to Milburn, is that “the public… have a right to
know how well those services are doing in comparison with…”

But his argument raises a string of questions.
First, and most obviously, who decides what a good service is?

Second, can something as complex as a social
service really be quantified, or even graded, by external inspection?

Third, if services can be measured by
indicators, are the current performance indicators the right ones?

Fourth, are they robust enough to resist
manipulation, and are social services departments’ data collection
systems thorough and consistent enough to ensure like is being
compared with like?

Fifth, is the assessment framework measuring
things which social services managers can control? If not, it is
both unfair and unlikely to improve anything.

And sixth, can a regime as top-down as the
performance assessment framework invigorate and improve services
which rely fundamentally on the skill, energy and goodwill of
front-line staff?

Tony Hunter is director of social services in
the East Riding of Yorkshire and chairperson of the ADSS’s
standards and performance committee. Although furious about
Milburn’s own performance at Harrogate, he has no argument with the
principle of performance assessment. “We do support the development
of systems which enable the public to have an overall perception of
how a council is performing. That system should measure what really
matters to users and carers most, but you’ve got to start
somewhere. And a system which is useful overall is less likely to
be in place if social services directors are not involved in
developing it.”

But Hunter is acutely aware that the current
indicators are flawed, and is also conscious that front-line staff
in many departments need a lot of convincing that the system is
really about improving services for users. Front-line scepticism
about the usefulness of many of the indicators is endorsed by the
detailed work undertaken by the Social Services Research Group
(SSRG) which has published a 120-page, two-volume study of the
personal social services performance assessment framework and Audit
Commission performance indicators for social services. The SSRG is
an independent organisation with members from within local
government, health, and academia. It was actively involved in
developing the original PAF indicators four years ago, and still
works closely with the Department of Health. Last November, SSRG
chairperson David Allen wrote to social services chief inspector
Denise Platt to express the group’s concern about the current
direction and development of performance management of social
services. The letter warns that a number of the PAF indicators are seen by
the professional community as “weak or very questionable” and
expresses fears that things are set to get worse rather than
better. “Effort and resource in the performance management arena is
becoming increasingly fragmented and dispersed, and decisions about
future performance indicators are being made that are not being
informed by good practices on the ground.”

The question of who decides what is to be
measured is also highlighted by Peter Beresford, professor of
social policy at Brunel University and chairperson of the user
organisation network Shaping Our Lives. “On whose views are you
setting the criteria? What things are seen as important, what
weights get attached to different things, and who rates them? We
already know that there are significant differences between the
perceptions of managers and practitioners on the one hand and of
service users on the other about what works and what constitutes
good practice.”

Beresford points out that service-user
organisations are most concerned about the nature of interactions
between themselves and practitioners, the culture and values of an
organisation, and the issue of continuing cuts to services.

“You need to have safeguards, but the
evaluation has to be right. Doing it this way invites organisations
to cover their backs – to look for ways to subvert or manipulate
these bureaucratic standards and measures because if they don’t
they will be penalised.”

Ian Sinclair heads the social work research
and development unit at the University of York. He argues that the
performance assessment framework could have a positive influence in
acting as a counterbalance to social work’s inevitable focus on
individuals. “Performance indicators measure the general so they
could spread people’s focus more widely, which would be a good
thing. But there are many dangers, including the risk that they
will be allowed to interfere with professional judgment.” So, for
example a child could be left in a placement in which they are
unhappy because the department is trying to improve its score on
placement changes.

Sinclair points out that the indicators are
not risk-adjusted, so, for example, the larger the number of difficult
children your department is working with, the worse your score is
likely to be. And he also suggests that many of the indicators
measure things which are largely outside the control of the
directors whose heads will be on the block.

“For example, the indicator on long-term
foster placements depends on the quality of your foster carers, and
it is extremely difficult to influence that. They are supposed to
train foster carers but there is little or no evidence that
training does in fact improve the quality of foster carers.”

Even the quality of directly employed staff,
such as the heads of children’s homes, is very difficult for senior
managers to influence especially when, as now, there is a labour
shortage across the social care sector.

Other indicators measure factors which are
even further outside the influence of social service managers.
“They depend on things like the supply of educational psychologists
or the actions of the courts, or the performance of the health
service or local schools for example.

“The danger is that the star ratings system
will punish organisations for things beyond their control and
and because a poor rating gives them a bad name, the situation
will be exacerbated. If this is combined with giving more resources
to the successful, it will be an extremely dangerous policy.”

1 Department of Health, Introduction to Social Services Performance Assessment Framework Indicators, DoH, 2001
