Day 1: Moving beyond discussing 'evidence based'

Since the 1990s, the term 'evidence based' has become a central part of public policy discourse in the UK. Yet despite the term becoming common parlance, we lack an agreed understanding of what it actually means. What evidence do we need in order to know that a programme 'works'? Who does it work for? When, and in what situations?

What 'evidence' means varies across different areas of social policy and practice. It can take many forms, from the outcomes of randomised controlled trials, to autobiographical materials such as diaries, to ethnography, with many more besides, and with different methodologies and techniques used at different stages. This creates different interpretations of what the 'truth' is. What is meant by 'evidence based' is complicated further by the fact that the impact of programmes and policies can be transient, changing with time, situation and context.

Many organisations use terms like 'top tier', 'promising' or 'model' to classify programmes and help decision makers select the interventions or approaches that are deemed 'to work'. These classifications commonly draw upon studies where the intervention has been evaluated using random assignment, signalling to decision makers which programmes are backed by 'strong evidence'. This is fine in theory, but what about situations where these methods are not appropriate? Perhaps the intervention is at an early stage of development, or is localised and involves a small sample size. How do we then judge and compare alternative types of evidence? When can we say a programme or policy is 'evidence based'?

Or should we be looking at this from another angle? Instead of thinking about what 'evidence based' is, we could usefully turn the debate on its head and think about what it is not. For instance, we know that it should not simply be lip service or a PR exercise aimed at funders. We know that research - however it is generated - should be high quality and rigorous, and that its results should be triangulated. We know it should not crowd out innovative new approaches. And we know that the problem is not always a lack of evidence, but rather a lack of quality evidence, or of appreciation for it.

Which leads to the question: what do decision makers actually want and need, and when do they require it? Who do we want rigorous evidence to influence? How can decision makers quickly and easily decide what good evidence is and, more importantly, make use of it?

To address some of these challenges, there are debates about introducing standardised metrics, standards of evidence, kite marks and other regulatory frameworks. Are these the mechanisms needed to institutionalise rigorous evidence in decision making? What else is needed to ensure that information is accessible, useable and relevant?

Over the coming days we will be discussing the issues and wider systemic factors that can hinder, and even disincentivise, the use of available, rigorous evidence. Although generating robust evidence is the necessary starting point, if we are to improve decision making across public services then we must make sure that this information isn't ignored.

Author

Ruth Puttick

Principal Researcher - Public and Social Innovation

Ruth was Principal Researcher for public and social innovation. Ruth joined the Nesta policy and research team in 2009, working on a range of projects across innovation, investment and…
