Ruth Puttick - 20.05.2011
Evidence-based decision making and rigorous evaluation of social policy is vital to developing radical, innovative solutions to the problems facing society today.
Evidence-based policy making sounds like a complete no-brainer. Why would decisions be made and money invested with no evidence of impact, and no effort to test programmes or interventions once they have been implemented? Yet it is increasingly recognised that, beyond health, only a small proportion of programmes are tested to see whether they work or, arguably more importantly, whether they don't. For instance, of 70 programmes implemented by the Department for Education, only two or three had been robustly evaluated.
At an event last week, Michael Little from the Dartington Social Research Unit said we should strive for 5% of children's services to be evidence-based. If 5% is a realistic target, how low must the prevalence of evidence-based programmes be now? How many programmes or policies are a waste of money, demonstrating little or no impact or, worse still, actually doing damage?
There has been useful debate this week around the use of experiments in evaluating social policy, especially randomised controlled trials (RCTs). The fact that social problems are deemed "complex" should not make them exempt from rigorous evaluation: as Ben Goldacre noted, a number of complex problems have been solved when "someone ran a trial and found the answer". At NESTA we have also found a number of organisations that use a range of methodologies - including but not limited to RCTs - to test effectively what does and doesn't work across areas of social policy, from criminal justice to international development. At an event arranged by J-PAL earlier this week, the message was not whether we should be doing experiments, but how we can do more of them and how we can do them better.
Yet generating evidence is only one piece of the puzzle. Too often, where evidence does exist, it can be poorly designed, its findings toned down or, worse still, ignored entirely. The US's Scared Straight programme is a good example. Scared Straight involves young people visiting prisons and talking to inmates, with the experience supposed to prompt them to think twice about offending. This may sound sensible, but rigorous evaluation shows that it is not only ineffective but actually damaging to the young people involved. Despite this evidence, Scared Straight remains in use worldwide.
The recent calls for more and better evaluations - especially randomised experiments and trials - are welcome, yet generating evidence is not an end in itself. We also need to explore how to improve the wider decision-making system so that research is translated into practice. When ineffective programmes and interventions are revealed, can they be stopped and decommissioned? How can we set the bar for evaluations meaningfully high without creating insurmountable barriers for providers developing (potentially) radical, innovative solutions? What incentives are there for adopting proven programmes? And do practitioners always have the skills and organisational culture to implement proven practice with fidelity to the original programme model?
We will be working over the coming months to address these challenges and to help improve and strengthen the evidence base across the different areas of social policy and practice. As always, we would welcome your thoughts.
If you have any questions, ideas or experience you'd like to bring to this project please get in touch with Ruth Puttick in the Policy and Research team (firstname.lastname@example.org).