Hasan Bakhshi - 27.02.2012
Earlier this month I participated in a fascinating panel discussion at the Institute for Government (IfG) on the topic of using experiments to inform policy.
The keynote speaker, Rachel Glennerster, co-founder of the Abdul Latif Jameel Poverty Action Lab at MIT, made a compelling case for the use of randomized controlled trials (RCTs) in evaluating policy. By assigning a policy intervention on a random basis to a treatment group of individuals or organisations, any differences in outcomes compared with a control group can be attributed solely to the intervention.
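To make the mechanics concrete, here is a toy simulation - in Python, with entirely invented numbers rather than any real trial data - of how random assignment lets a simple difference in group means recover the causal effect of an intervention:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 1.5  # invented effect size, purely for illustration

# Flip a fair coin for each of 200 firms to decide treatment status.
assignment = [random.random() < 0.5 for _ in range(200)]

def outcome(treated):
    baseline = random.gauss(10, 2)  # unobserved firm-level heterogeneity
    return baseline + (TRUE_EFFECT if treated else 0.0)

treatment = [outcome(True) for t in assignment if t]
control = [outcome(False) for t in assignment if not t]

# Randomization balances the unobserved baselines across the two groups,
# so the difference in means is an unbiased estimate of the true effect.
estimate = statistics.mean(treatment) - statistics.mean(control)
print(f"estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```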
RCTs are often criticized for being inflexible, impractical and expensive; Rachel's varied examples showed how these criticisms do not hold water most of the time. I used my own comments on the panel to offer some personal reflections on experiments, drawing on my work at NESTA on business innovation.
Creative Credits: an experiment in business innovation policy
Anyone who works in business support will know of the acute self-selection biases that afflict policies to support innovative businesses. The concern is that the businesses that apply for such schemes are in any case likely to be more innovative than the firms that policymakers really want to target. If that is so, how can the value of the programme be proven?
I illustrated the point with NESTA's Creative Credits programme, an innovation vouchers scheme that we piloted in Manchester in late 2009. SMEs were invited to apply for innovation vouchers called Creative Credits. Those SMEs awarded Credits - the treatment group - could use them to buy in £4,000-worth of services from a creative service business of their choice. Those SMEs not allocated Creative Credits became the control group for our study.
The Creative Credits were allocated on a strictly randomized basis, meaning that any systematic differences in the innovation performance of the treatment and control groups could be attributed to the creative project. The scheme proved popular: on some measures, over 1 in 8 of Manchester's eligible SME population applied for a Creative Credit.
We compared the characteristics of the applicant businesses with those of SMEs in the wider Manchester population and confirmed that they had a much greater tendency to apply to innovation support schemes, and on a number of measures appeared to be more innovative. The randomized distribution of the Credits allowed us to condition on this self-selection and still test whether the scheme had an additional impact on innovation performance.
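To see in miniature why this works - again a toy Python simulation with assumed numbers, not our survey data - suppose that more innovative firms are more likely to apply. Comparing applicants with the wider population is then biased by self-selection, but randomizing the Credits within the applicant pool is not:

```python
import random
import statistics

random.seed(1)

# Invented latent 'innovativeness' scores for a population of firms.
population = [random.gauss(10, 2) for _ in range(5000)]

# Self-selection: more innovative firms are more likely to apply.
applicants = [s for s in population
              if random.random() < 0.05 + 0.02 * max(s - 10, 0)]

print(f"applicant mean:  {statistics.mean(applicants):.2f}")  # higher
print(f"population mean: {statistics.mean(population):.2f}")

# Randomizing within the applicant pool spreads the self-selection
# evenly across treatment and control, so the comparison stays fair.
random.shuffle(applicants)
half = len(applicants) // 2
ASSUMED_EFFECT = 1.5  # hypothetical impact of a Credit
treatment = [s + ASSUMED_EFFECT for s in applicants[:half]]
control = applicants[half:]
print(f"estimated effect: "
      f"{statistics.mean(treatment) - statistics.mean(control):.2f}")
```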
We found that firms awarded Creative Credits reported very high levels of additional innovation at the point at which the creative projects were completed; but our latest findings suggest that in many cases these impacts had dissipated within twelve months of project completion. This demonstrates how policymakers who track short-term impacts alone may draw severely biased inferences. We will be publishing these findings, and exploring what they mean for the scheme's net impacts, in a new research report on the Creative Credits project this spring.
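The danger is easy to see in one more toy simulation (the decay profile below is hypothetical - mine, not the report's): an effect measured only at project completion looks far larger than the effect that survives a year on:

```python
import random
import statistics

random.seed(7)

def estimated_impact(effect, n=150):
    """Difference in mean innovation scores, treated minus control."""
    treated = [random.gauss(10, 2) + effect for _ in range(n)]
    control = [random.gauss(10, 2) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# Hypothetical decay: a big effect at completion, mostly gone a year later.
for label, effect in [("at project completion", 2.0),
                      ("twelve months later", 0.2)]:
    print(f"{label}: estimated impact {estimated_impact(effect):+.2f}")
```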
RCTs and innovation policy through experimentation
A further strength of the RCT method is that, by design, it involves the collection of data that is fit for the purpose of answering our research questions. That is especially important in policy areas like my own - the creative and digital industries - which are so poorly served by official data, and where there is a suspicion that even if the official data could be made more relevant, the burning research questions will have moved on before the official data have had time to adapt.
However, when it comes to innovation policy, the RCT should be seen as a dynamic tool, not a static one. In many policy areas, the RCT is implicitly viewed as a tool for identifying the preferred intervention: once an effective intervention has been identified through piloting, the policy - it is argued - should be rolled out widely.
In more stable areas of policy, this picture of how evidence-based policy works probably makes sense. But it feels uncomfortable in dynamic settings like innovation, where policymakers are dealing with extreme levels of uncertainty.
Here, as discussed in my State of Uncertainty provocation with Alan Freeman and Jason Potts, the policymaker's challenge is better characterized as one of continuous learning. There is a need for strong policy feedback loops, including continuous controlled experiments to show what features of an intervention work better than others, the insights from which are fed back rapidly into policy design.
Politics and the time inconsistency of evaluating policies
So if controlled experiments are so desirable, and we have concrete examples of how they can be used to improve policies, why do we not see much greater use of them in policy development?
To answer this question, we need to consider the incentives facing decision-makers. Ministers and senior civil servants are under constant pressure to deliver 'good news', and often stay in post for only a couple of years before moving on. It may therefore be more attractive for them to highlight short-term measures of success than to rigorously test the long-term effects of policy, even if those short-term measures say nothing about sustained, additional improvements.
Seeing Alan Budd, a founding member of the Bank of England's Monetary Policy Committee and ex-chairman of the Office for Budget Responsibility, in the audience reminded me how in monetary and fiscal policy, institutions such as independent central banks and fiscal watchdogs have evolved to address this problem of 'time inconsistency'. Perhaps there is a case for equivalent institutions - such as independent evaluation offices or policy observatories - which can address the problem of time inconsistency in evaluating policy?
Comments

04 Mar 12, 5:28pm
How do we know what works?
On my blog on the Technology Strategy Board site I have looked at a number of inventors, innovators and founders of the major global giants, both from the past and among those who have recently risen to fame & fortune. It's titled "Why _connect is not succeeding": https://connect.innovateuk.org/web/engineertony/blogs
The very obvious conclusion is that none of these people started as a result of a government "initiative" or of policies developed from deep analysis or randomised controlled trials; very few had any contact with university academics; and all had problems raising capital. In over one hundred years very little has changed.
From my own experience of over 50 years in heavy engineering and production, I can see that the inventors, innovators & creative thinkers are there, but they cannot be distinguished from the more educated & articulate time-wasters already entrenched in positions of influence. Consequently, the real long-term innovators are blocked from accessing funding and from ever putting their skills & experience to use.