A necessary evil

There can be few people involved in the management or delivery
of social services who have not been required to participate in
some form of evaluation. Service reviews, Best Value exercises,
management expectations, funders’ requirements and routine
monitoring – the demand to provide evidence of activity and
effectiveness can be overwhelming.

For many, such demands represent an unwelcome intrusion into the
important job of meeting clients’ needs. As a result, the
evaluation process has often been reduced to a reluctant,
arm’s-length relationship with an external evaluator, hurried data
collection at the end of a project or a cynical number-crunching
exercise. This has tended to give evaluation a bad name among those
who, it could be argued, are best placed to benefit from it.

Yet conscientious practitioners will always be eager to learn how
their practice can be improved; managers have a natural interest in
ensuring services are targeted appropriately; and few would argue
with the right of funders, council tax payers and clients to know
that services are worthwhile and effective.

Another incentive is the power that evaluation wields. In a climate
of budgetary restraint, continuous review and short-term grants, it
is inevitable that evaluation findings will affect decisions on the
future of services. This, perhaps more than anything, is motivating
managers and practitioners to ensure that they have a stake in the
evaluation process.

Although the word evaluation is in everyday use, it is applied in
a variety of ways. At one extreme there is the highly scientific
experimental model typified by controlled trials. At the other end
of the spectrum, we are invariably invited to “fill in the
evaluation sheet” before leaving a training event. Somewhere in
between we find methods linked to performance indicators, targets,
inputs and outputs, and cost-effectiveness.

So what is evaluation about and how can it be applied to social
services? The scale and scope may vary but what should be common to
all evaluations is that they consist of a series of systematic
activities directed at answering one or more defined questions. The
design of any evaluation will depend upon several factors: what
these questions are, who needs to know the answers, the context in
which the evaluation occurs and the resources available.

It is often expected that evaluation will be able to deliver
so-called “hard” evidence about outcomes, providing conclusive
answers to questions such as “does this programme work?”.
Unfortunately, such questions are notoriously difficult to answer,
even when agreement can be reached as to what constitutes evidence.
Answering them successfully requires planning, a lengthy timescale
and an unusually large evaluation budget.

A rare example of such an ambitious venture is the national
evaluation of the Sure Start programme now under way. This is by far
the largest and most expensive evaluation of a social programme in
this country, involving five teams of evaluators, a timescale
spanning several years and a budget of many millions of pounds. The
complex evaluation design covers 260 Sure Start sites, involving
comparison groups, detailed surveys of parents, an exploration of
local contexts and descriptions of services provided. All this is
necessary in order to form a judgement as to whether the Sure Start
programme has “made a difference” to the children in those
areas.

Clearly this is far beyond the means of small-scale services or
local projects with limited resources and a pressing need to
demonstrate their worth. So it will be necessary to settle for less
ambitious evaluation questions, such as “how satisfied are clients
with the service provided?”, “are we reaching our target users?” or
“in what ways could the programme be improved?”. This is simply a
practical and realistic use of limited resources.

For most small-scale evaluations, conclusive proof of effectiveness
is likely to remain elusive. However, any case can be bolstered by
simple measures such as triangulation (collecting information about
the same thing from different sources) as well as by ensuring that
multiple perspectives are represented in the evaluation – for
example, including the views of both staff and service users. If
the same kind of message is coming from all sources, the case for
effectiveness is considerably strengthened.

There is a widespread assumption that in order to be credible,
evaluation needs to deal exclusively with numbers. From government
down, what might be termed the target culture has encouraged those
commissioning evaluation to demand quantitative data. This is all
very well when quantitative outcomes are clearly linked to the
intervention or service in question, and are also readily
measurable. In such circumstances quantitative data can provide an
appropriate and suitably objective measure of change, particularly
when large numbers of cases are involved.

However, identifying relevant quantitative outcome measures is not
always so straightforward. For instance, consider the work of a
family centre. Often this will involve individually tailored work
programmes for families presenting with a range of problems. How
can common outcomes be identified, let alone quantified? While it
may be possible to assess the extent to which individual families
made progress towards negotiated targets, there will inevitably be
a need to gather additional supporting evidence to explain how such
progress was achieved and what the barriers to progress were. This
is where qualitative data come into their own – in providing
explanations of complex situations and detailed accounts of
personal experiences. Such data are subjective, but this can be a
positive attribute in understanding the variety of individual
experience. The use of qualitative data is also a way to ensure the
voices of service users are heard.

It is important that any evaluation is underpinned by a systematic
monitoring process, which collects and records information about
what happened during the project or programme. It may be
appropriate to record information about sources of referral,
characteristics of clients, attendance on a programme or costs
incurred, depending on the nature of the service. This will supply
both vital management information about the running of the
programme and a contextual backdrop to evaluation findings.

What is more, while end users of an evaluation may have a
particular interest in outcomes, such information on its own may be
of limited value. For example, an evaluation may suggest that a
particular project is having a positive effect on its clients. Good
news! It means that others may wish to replicate the service. But
to do so, it is vital that the process of delivery is described and
understood.

Therefore there is a place for both quantitative and qualitative
data, and most evaluation designs will use a combination of the
two. For example, the National Children’s Bureau is in the second
year of a three-year evaluation of a home-school support service
that operates in nine schools in London’s North Islington Education
Action Zone. The service works by drawing up individual contracts
with schools and addressing different issues in each. Our
evaluation aims to assess the effectiveness of the service as a
whole as well as its impact on individual schools and families. But
this needs to be done without placing unacceptable burdens on
either school staff or home-school support workers. This
combination of a varied service, limited evaluation resources and
the need to work sensitively alongside staff poses something of a
challenge for the evaluator.

We resolved this by ensuring that the outcome element of the
evaluation focuses on just one aspect of home-school support in
each school: for instance, attendance and punctuality,
school-parent relations or programmes of support for individual
pupils. Qualitative or quantitative data are being collected as
appropriate: attendance and punctuality will be analysed
quantitatively, whereas we are taking a qualitative approach to
support for pupils by carrying out interviews with children,
teachers and home-school support workers.

A thorough process evaluation was carried out using existing
monitoring systems and supplemented by annual interviews with head
teachers and the home-school support workers. This provided vital
information about the numbers and characteristics of pupils and
families supported, the range of activities undertaken by the
home-school support workers and head teachers’ perceptions of the
service.

Taking the nine schools together, the data collected should enable
us to draw conclusions about the entire range of work undertaken by
the service, including the effectiveness of different approaches and
the perceived impact in each school.

Evaluation can legitimately encompass a number of approaches,
include a range of activities, and be used for
several purposes. Ideally, each evaluation needs to be carefully
tailored to the specific circumstances of the project or service
and its immediate context. The important thing is to ensure that
evaluation addresses relevant questions, does so in a systematic
and transparent manner, and involves stakeholders. Much can be done
on a day-to-day basis without entering into a formal process – for
instance, by gathering regular feedback from users and other
stakeholders (and, of course, by analysing and acting upon the
findings).

Evaluation is more usefully described as a way of thinking than
defined by the activities it encompasses. It
should be a continuing process integrated into all aspects of work.
Once it is embedded, it will never feel so daunting again.

Catherine Shaw is principal evaluation officer in
the National Children’s Bureau’s research department.
