How does experimentation work?

We often don’t know how innovations will work out, so experimentation treats any solution as a work in progress that tweaking and tinkering can improve. Experiments allow organisations to explore new solutions while reducing the time and resources wasted on initiatives that do not work.

There are several ways to experiment, depending on the context. Sometimes it can involve working closely with the people who will use the end solution to see how an innovation works in real life – as in prototyping. Other times, it can involve using more robust evaluation methods – such as Randomised Controlled Trials (RCTs) – to test an idea and create evidence to support it.

Experiments should always be designed in the way that best answers the question being asked or tests the hypothesis that has been set.

Broadly speaking, there are four types of experiments we can use to test and learn about innovation – these are set out below.

1. Randomised evaluations

Randomised controlled trials (RCTs) are often used to estimate the impact of an intervention or programme. They work a bit like science experiments but can be carried out in the real world, to test social, health or innovation policies. RCTs give us the greatest certainty about whether what we’re doing works. They use a control group to compare people who have received an intervention with similar people who didn’t. To make sure the comparison isn’t biased, participants are randomly assigned to the two groups.
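
To make the comparison logic concrete, here is a minimal sketch in Python using simulated data – the sample size, outcome scale and +2.0 treatment effect are illustrative assumptions, not figures from any real trial:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# A hypothetical trial with 1,000 participants.
participants = list(range(1000))
random.shuffle(participants)          # random assignment removes selection bias
treatment = set(participants[:500])   # half receive the intervention

def outcome(person: int) -> float:
    """Simulated outcome: noisy baseline plus an assumed +2.0 effect
    for treated participants (purely illustrative numbers)."""
    return random.gauss(50, 10) + (2.0 if person in treatment else 0.0)

treated = [outcome(p) for p in participants if p in treatment]
control = [outcome(p) for p in participants if p not in treatment]

# Because assignment was random, the two groups are comparable, and the
# difference in mean outcomes estimates the intervention's average effect.
print(f"Estimated effect: {statistics.mean(treated) - statistics.mean(control):.2f}")
```

In a real trial the outcomes would be measured rather than simulated; the point of the sketch is that randomisation is what licenses the simple comparison of group means.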

Other methods – called quasi-experimental designs (QEDs) – replicate this model where real randomisation isn’t possible, by using statistical techniques to construct a comparison group. These designs are traditionally used in impact evaluation – J-PAL, for example, has run 922 randomised evaluations in 81 countries to find out what works in fighting poverty. RCTs are also used by the Behavioural Insights Team (BIT) to understand the behaviour of individuals and firms. In the UK, RCTs and QEDs are building a robust evidence base that is making government and policy more experimental.

2. A/B tests

A/B tests use RCTs to test small tweaks in design or implementation, often online. An A/B test is a simple way of finding out what users prefer. It randomly assigns a particular design or communications strategy – the wording of a letter, say, or the header of a website – to different groups of users, so that response rates can be compared to see which version was most effective. A/B tests are used in business all the time, and as ‘nudges’ to improve the design and delivery of public services. They illustrate the range of questions that randomised experiments can answer.
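
As an illustration of the underlying arithmetic – the letter versions and response counts below are invented – choosing between two variants comes down to testing whether the gap in response rates is larger than chance alone would produce:

```python
import math

# Hypothetical results: versions A and B of a letter, sent at random.
sent_a, responded_a = 5000, 420   # 8.4% response rate (illustrative)
sent_b, responded_b = 5000, 480   # 9.6% response rate (illustrative)

rate_a = responded_a / sent_a
rate_b = responded_b / sent_b

# Two-proportion z-test: is the gap bigger than random noise?
pooled = (responded_a + responded_b) / (sent_a + sent_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
z = (rate_b - rate_a) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"Version A: {rate_a:.1%}, Version B: {rate_b:.1%}")
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

A small p-value here would suggest the better-performing wording really is more effective, rather than just lucky in this sample.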

3. Rapid-cycle experiments

Rapid-cycle experiments are a new way of improving how innovations are designed and implemented. They aim to set up better feedback loops between real-time learning from delivery and the top-down implementation of programmes. In the US, the Frontiers of Innovation platform at the Center on the Developing Child at Harvard University is developing science-based innovations through rapid-cycle iterations. Rapid experiments enable teams and organisations to collaborate on designing, implementing and testing small changes in short loops that provide data on whether innovations are producing better results than ‘business-as-usual’. In the UK, the Dartington Service Design Unit is trialling a new method for ‘Accelerated Design and Programme Testing’ in its work with the Family Nurse Partnership.

Different ways of learning from rapid-cycle experiments give us different degrees of certainty about their effects. Quantitative monitoring data and RCTs can be used to estimate impact on key outcomes, while qualitative methods provide crucial feedback from staff and stakeholders. Using a comparison group helps increase confidence in the results of rapid experiments, and some evaluators are now using ‘nimble’ randomised trials to test rapid-cycle changes more rigorously.
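
A rough sketch of what such a loop might look like, assuming a team can randomise a small batch and measure one outcome per cycle – all numbers and the decision rule are illustrative, not a prescribed method:

```python
import random
import statistics

random.seed(7)  # reproducible illustration

def run_cycle(tweak_effect: float, batch_size: int = 60) -> float:
    """One short cycle: randomise a small batch between the tweak and
    business-as-usual, return the estimated difference in outcomes.
    (Outcomes are simulated here; in practice they come from delivery data.)"""
    tweak = [random.gauss(50 + tweak_effect, 10) for _ in range(batch_size // 2)]
    usual = [random.gauss(50, 10) for _ in range(batch_size // 2)]
    return statistics.mean(tweak) - statistics.mean(usual)

# Several short learning loops: keep the change only if it keeps beating
# business-as-usual (a deliberately crude rule, for illustration only).
estimates = [run_cycle(tweak_effect=3.0) for _ in range(5)]
print("Per-cycle estimates:", [f"{e:.1f}" for e in estimates])
print("Keep the tweak?", statistics.mean(estimates) > 0)
```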

4. Design exploration and experimentation

Design exploration and experimentation takes an experimental approach to developing or testing innovations, often in their earlier stages. At the earliest stages, when new ideas are being explored, trying out new and different frames helps develop ideas and recast them as testable hypotheses. Tools like speculative design and horizon scanning can be used in this ideation phase to adopt an ‘experimental mindset’.

Once an idea has been developed, it can be tested experimentally. For example, in a prototyping experiment a model of an innovation is tested with users, and qualitative or data-led methods are used to learn from the results. Unanimous AI has used experiments with AI systems to test the power of collective intelligence predictions, and Demos Helsinki is using a human-centred approach to strengthen government’s experimentation capacity – experiments now have an official role in Finnish policy design. At any stage of innovation, working with an experimental mindset encourages us to test, learn and refine as we go.

Nesta’s work on experiments

Nesta was an early promoter of experiments in government and public policy. Today, we work with governments around the world to help them to experiment more effectively.

In 2011, we published State of Uncertainty, a report calling for a more experimental approach to innovation policymaking. The idea was that innovation policy would work better if it were modelled on experimental science and focused on minimising the uncertainty entrepreneurs face.

Then, in 2013, Nesta led one of the first RCTs in business support, assessing the effectiveness of Creative Credits – a programme linking small firms with providers from the creative industries. The resulting report revealed important insights about the policy that the evaluation methods normally used by government would not have shown.

In 2014, Nesta supported the Behavioural Insights Team (BIT) as it spun out of government, entering into a partnership with BIT’s employees and the Cabinet Office. Also known as the government’s ‘nudge unit’, BIT has become one of the UK government’s most successful innovation labs, focusing on using behavioural insights to improve public policy. Since becoming an independent organisation, BIT has continued to grow, expanding to the US, Singapore and Australia.

In the same year, Nesta launched the Innovation Growth Lab (IGL). This global partnership brings together governments, foundations and researchers to develop and test different approaches to support innovation, entrepreneurship and growth. It aims to make policy in this area more experimental and evidence-based, while also building the capacity of people and institutions to conduct their own RCTs.

Since it was founded, IGL has funded over 30 RCTs in innovation, entrepreneurship and business growth, and has partnered with more than a dozen organisations to promote the use of RCTs in the field. IGL also hosts an annual global conference, bringing together senior policymakers, practitioners and researchers to explore future innovation and entrepreneurship policies. In parallel with this work, Nesta’s Innovation Skills team promotes an experimental culture within governments. Most recently, it initiated States of Change – a programme working with progressive governments and public innovation practitioners around the world to strengthen their capacity to experiment.

Case study

Creative Credits: A randomised controlled industrial policy experiment is a Nesta study that used an RCT to see whether a novel business support scheme – one connecting small businesses with creative providers to boost innovation – was effective.

The pilot study, which began in Manchester in 2009, was structured so that vouchers, or ‘Creative Credits’, were randomly allocated among the small and medium-sized businesses applying to invest in creative projects – such as website development, video production and creative marketing campaigns – making it possible to measure whether the credits had a real effect on innovation.

The research found that the firms that were awarded Creative Credits enjoyed a short-term boost in innovation and sales growth in the six months following completion of their creative projects. However, the positive effects were not sustained: after 12 months there was no longer a statistically significant difference between the firms that received the credits and those that did not.

Nesta published a report on the Creative Credits study, arguing that these results would have remained hidden under the evaluation methods normally used by government.

Further resources