No service user participation without proper evaluation

Involving people who use services in decisions about how those services are delivered has become a mainstream part of social care practice, and the past decade has seen “participation” become a key target in many service-level agreements.

This move towards a working culture of participation has been welcomed by many service user-led groups that have campaigned for services to involve people meaningfully at every stage of planning and delivery.

What is far less clear, however, is what difference increased participation is making and what measures services are taking to evaluate their participation practices.

There is little evidence on the evaluation of service user and carer participation, and commentators suggest that more research is needed to establish whether participation really does lead to better outcomes for people.

There is a concern that some organisations involve people in specific activities simply to tick the participation “box”, without showing evidence of change or improvement as a result of involvement. In a survey of practice across the UK, the Social Care Institute for Excellence (Scie) found that most social care organisations do not routinely evaluate participation.

The reasons for the gap between participation levels and evaluative activity can partly be explained by some of the barriers to evaluating participation that are identified by people who use services and those who work in, manage and commission them.

These barriers include power differences between professionals and service users that can make honest evaluations difficult to achieve, fear of negative results, a lack of training, limited resources and difficulty in determining whether changes are due to participation or to other factors. Expectations of what will be evaluated can also be unclear: some organisations focus on the process of participation, while others look at whether outcomes for people have improved.

Research into evaluating participation has found that there is no shared understanding of what “difference” to measure. Participation might be about service users having a voice (being listened to), having a choice (more control over the services they receive) or making changes to services as a whole.

How people feel about the way they participated can be as important as the results of their participation. What is clear from research is that evaluation takes time, commitment, skills, resources and systematic planning, and that the absence of any of these is likely to prevent the evaluation from happening or from succeeding. Evaluations need to be planned from the beginning and costed into any proposals.

Although it is not possible to conclude which methods of evaluation are best suited to which kinds of participation, the evidence has led to 20 key questions that organisations should consider before they evaluate their participation practice.

If individuals and organisations ask themselves these questions and address these pointers, they will be better placed to develop the most fitting approach to evaluating the difference that participation is, or is not, making, and better equipped to devise measures of the effectiveness of service user and carer participation.

Further information

Position Paper 9: Developing effective measures for service user and carer participation.

Resource Guide 7: Participation: finding out what difference it makes.

Position Paper 3: Has service user participation made a difference to social care services?

Position Paper 7: Common aims: a strategy to support service user involvement in social work education.
