Alliance for Useful Evidence

Day 7: Making the debate relevant

18.10.2011

Not everybody thinks that evidence is the most important thing in the world. But most would recognise that knowing whether a programme of intervention is going to be harmful to them, their family or friends, is a big deal.

At an event earlier in the year, Michael Little from Dartington Social Research Unit said we should strive for 5 per cent of UK's children's services to be evidence-based. If 5 per cent is a realistic target, how low must the prevalence of evidence-based programmes be now? How many programmes or policies are a waste of money, demonstrating little or no impact, or worse still, are actually damaging to the children receiving them? Are the parents and the general public aware of this?

Parents are quite rightly concerned about the drugs and healthcare their children receive. This is clearly demonstrated in the media by the controversy surrounding the prescription of anti-depressants to adolescents, or vaccines such as the MMR. Yet do we apply the same level of scrutiny in other areas of our public services? We may not wish to receive an untested and unknown type of medicine, but are we willing to receive a dose of a treatment programme in another area, such as social care or education, that is as yet untested? Indeed, even when certain interventions, such as the education programmes Scared Straight or DARE, have been tested and shown to be ineffective, they remain in use worldwide. How can we ensure the evidence generated in areas beyond health is more accessible and useable, so people can make more informed judgements about the services that they or their family receive?

The voice of service users is gaining strength, with people able to influence the decisions that affect their lives, accessing information to decide upon what they deem to be the best types of treatment. Take health for instance. There are numerous examples of public campaigning to the National Institute for Health and Clinical Excellence (NICE) when a treatment that patients, their relatives and campaign groups believe to be effective is being withheld. Is there access to such information to enable people to make informed decisions and judgements in other areas of public services? Or do people believe that the supposed lack of evidence is an indication that services are effective, rather than that they may actually have not been tested at all?

We have noted previously that the public are a key ally in advancing the evidence agenda, with a lead role to play in demanding better evidence to underpin decision making in public services. If we are to successfully stimulate demand for rigorous evidence on effectiveness, then we must not cut the public out: they should be able to decide which programmes and treatments are best for them and their families. This means ensuring the debate and discussion is relevant to what people want, which is positive, impactful public services. It means we need to talk in terms of improving quality of life, which is what driving the evidence agenda is all about.

 


Ten Steps to Transform the Use of Evidence

1. Moving beyond discussing 'evidence-based'

2. Enabling evidence and innovation to co-exist

3. Debunking the myths about Randomised Control Trials (RCTs)

4. Institutionalising the demand for evidence

5. Dealing with negative findings

6. Managing the politics of decision making

7. Making the debate relevant

8. Opening up data for better, cheaper evidence and the new army of armchair evaluators

9. Evidence in the real world

10. Developing a UK Alliance for Useful Evidence

Ten Steps to Transform the Use of Evidence blog series

Download all ten Evidence blogs in one PDF document


sclarke
19 Oct 11, 1:38pm

Effective Interventions

I read your series of blogs with interest as a scientist turned statistician now working in a more central strategic role to improve the effectiveness of our intervention approaches.

Much of what is being said is sensible but well known, and there are many grey areas and less clear-cut examples. Also, much of the discussion concerns large-scale intervention approaches, which are not always the norm.

I would like to set out a selection of other issues around effectiveness and then an outline approach to demonstrating effectiveness of adopted approaches.

Selected other issues:
Latency – there is a huge latency period between gathering robust scientific evidence of effect and implementation.
Inertia or stagnation – similar to the above, and the concept of the perfect being the enemy of the good. What level of evidential support is enough? If we wait for better evidence, we do nothing or maintain the status quo.
Context – intervention effectiveness is context-specific and the world is dynamic: something may be effective in one place but not in others (see the realistic evaluation literature).
Policy versus personal failure – these two are always confounded, and we need to move to a culture that disentangles them. Policy or intervention failure is only linked to personal or project-team failure if the process of policy/intervention selection was non-evidence-based and opaque.
What is effective – interventions and policies often have objectives that pull in opposite directions, or even hidden objectives; for example, a policy may be publicly popular or provide public reassurance but not deliver its stated objectives. In terms of transparency of objectives, public reassurance or the perception of improvement in X may be as important as actual improvement in X.
There are many others.

In terms of demonstrating effectiveness, I would suggest there be clarity and complete transparency about what the problem is and what needs to change (a clear strategy). Then the right range of people and experts should be involved in exploring alternatives and coming to an accommodation on how and where to address it, in a transparent and fully documented way. This should consider data and evidence on targeting; what has worked or what may work (e.g. behavioural economics principles); cost-benefits; and analyses of the system involved, to identify points of influence and unintended consequences. Combined with proportionate monitoring and feedback, akin to action research, that is responsive in modifying or stopping actions, I would suggest the planned interventions that arise are the most effective that could be developed at the time.