Star or black hole?

Social services departments have been given star ratings, but will these really make services more accountable to users? Not necessarily, judging by those applied to NHS acute trusts, writes Tony Cutler of the University of London.

The star ratings of
social services departments are virtually upon us, part of an inexorable trend
to more performance measurement in public services.

A common rationale
for these exercises is that they promote accountability to service users and
the general public. But will this be the case?

To answer this it is instructive to look at the previous instalment of star ratings, those for NHS acute trusts, and to explore the lessons for public sector performance measurement. This is most usefully done by posing several simple questions that the service user and concerned citizen might ask in their search for “accountability”, and by considering how far the acute trust ratings provide relevant answers.

In the NHS, the ratings, published under the title NHS Performance Ratings Acute Trusts 2000-01,1 divided trusts into four categories, from “no stars” to “three stars”. Twenty-one measures were used. Nine of these were “key targets”, which were said to determine whether trusts attained at least a two-star rating; the remaining 12 were designed to “refine the judgement” as to whether those trusts received two or three stars.

For each measure there were three possible outcomes: a target could be achieved, there could be “some degree of underachievement”, or the target could be “significantly underachieved”. The ratings had “teeth”: the managers of “no star” trusts were expected to develop action plans, and unsatisfactory plans to “turn around” a trust would result in a new management team being introduced.

So did this system enhance accountability? To answer this we should first look at why these particular measures were chosen. This is fundamental, since it determines what managers are being held to account for. A visit to the report’s website will show each measure’s “rationale for inclusion”. However, these rationales are frequently narrow. For instance, one key target is the familiar one of reducing waiting lists – the rationale refers to the 1997 Labour manifesto commitment to cut waiting lists by 100,000. Yet there is no recognition that waiting lists are a problematic measure: focusing on numbers alone treats all those waiting as of equal significance, whatever the severity of their clinical need. The rationales offered are thus often attenuated and fail to engage with the debates over key measures.

Second, why are some measures given greater significance than others? This is also a crucial question, since the so-called “key” targets are given greater weight in determining the ratings. However, it is not addressed at all. The DoH acute ratings report states that what is being measured is not the “standard of care” but what it terms “the overall patient experience”. As a consequence, three clinical indicators are not included among the nine key targets, which raises the question of how the “patient experience” can be divorced from “standards of care”. It also deepens the confusion as to why some targets are key and others are not: for example, the achievement of a “satisfactory financial position” relative to planned income and expenditure is a key target, whereas reducing emergency readmissions following discharge from hospital is not.

Third, there is the question of how the measures should be interpreted, or what they are supposed to mean. A good example is provided by the three non-key measures on the filling of staff vacancies among consultants; nursing, midwifery and health visiting; and the allied health professions. In each case the rationale is the same: “to highlight recruitment and retention problems so that areas with the most challenging problems can be assisted”. This suggests that such problems are only partially, or not at all, within the control of management, reflecting instead variable conditions in local pay and property markets. Yet they also function as targets, and failure to achieve them results in a lower place in the ranking.

Fourth, are the performance measures related to the context in which organisations are working? In the NHS there are major factors that can affect performance, including the nature of the capital stock deployed and the availability of complementary services such as residential care. The report seems to show some awareness of such issues – for instance, in referring to “no star” trusts it states that, although they are performing poorly, this does not mean that “staff are not working hard in often very difficult circumstances” – but there is little detail about these issues and their impact on performance. Thus, after the publication of the ratings there was considerable comment on the fact that, while none of the 12 no-star trusts was located in the north of England, half were located in Surrey, Sussex, Kent and Middlesex. Similar regional differences appeared in the targets for financial performance. It was suggested that this might indicate a “southern” effect, in which tighter labour markets and other cost drivers affect service provision, but the issue was not addressed in the ratings.

Another obvious question is whether the data collected are sufficiently reliable to support the star ratings. Here, the DoH report states that the ratings are based on data and information that are “not perfect” but “the best available”. In one case (South Warwickshire General Hospitals Trust) the report indicates a reclassification from three stars to two because of “significant concerns regarding data quality”. However, there is no systematic discussion of what the principal data limitations were or of how they might have affected the ratings.

Lastly, is it clear, given the evidence presented, how the ratings were arrived at, and were the processes involved applied consistently? Take, for example, the University Hospitals Coventry and Warwickshire (UHCW) trust. On the key targets this trust underperformed on only one measure, a level comparable with the vast majority of “three star” trusts. It did less well on the supplementary measures, underperforming on seven and significantly underperforming on two. Given the purported role of these supplementary measures, this might be thought to qualify the trust for a two-star rating. However, UHCW was placed in the no-star category, and a footnote to the report indicates that this result was determined by an assessment by the Commission for Health Improvement (CHI) that the trust “performed poorly across the elements of clinical governance and lacked strategic capacity to remedy these weaknesses”.

This suggests a superordinate status for CHI reports. However, in the text of the report 10 key targets are identified: the nine key targets referred to above plus “not receiving a critical report from the Commission for Health Improvement”. This implies that the CHI report has the same status as the other nine measures, and it raises the spectre of multiple and conflicting criteria.

The conclusion must
be that members of the public in search of accountability will not be well
served by these star ratings.

Perhaps the central problem is the difficulty of reconciling the complexity of performance measurement with the concern for accountability. Current official performance measurement has not grappled with this issue, arguably because the government regards references to complexity as excuses advanced by managers and professionals for poor performance. Unfortunately, the same problem is likely to arise when the social services department ratings are published.


Key targets

Key targets for
performance ratings of acute health trusts 2001:

– Shorter inpatient waiting lists

– No patients waiting more than 18 months for inpatient treatment

– Reduction in outpatient waiting

– Fewer patients waiting on trolleys for more than 12 hours

– Less than 1 per cent of operations cancelled on the day

– No patients with suspected breast cancer waiting more than two weeks to be seen in hospital

– Commitment to improving the working lives of staff

– Hospital cleanliness

– A satisfactory financial position

– Not receiving a critical report from the Commission for Health Improvement (CHI)

Dr Tony Cutler is course director of the MSc in health services management, School of Management, Royal Holloway, University of London.

1 Department of Health, NHS Performance Ratings Acute Trusts 2000-01, DoH, www.doh.gov.uk/performanceratings/performance.pdf
