This section contains all the resources that local authorities need to run their own randomised controlled trials to increase childcare take-up.

Experimentation is very common in the private sector. Companies such as Google and Amazon regularly run online A/B tests to identify what works best on their platforms. It is not as straightforward to do this in government as it is on a website, but it is possible – especially when it comes to testing communications. Local authorities send out lots of letters and other communications, and we think they should aim to adopt a culture of regular experimentation: trying out different versions and seeing what works best.

In the first part of this module, we explain why it is important to evaluate policy initiatives at all. The second section goes through the steps of how to do this in practice.

Randomised controlled trials (RCTs)

The most reliable way to know whether or not a policy initiative is working is to conduct a randomised controlled trial (RCT). RCTs have long been common in medicine and have become more popular in recent years as a way to evaluate government policies.

Historically, the most common way to evaluate a policy was just to implement it and then observe what happened. For example, you might introduce a new programme for unemployed job seekers and then measure how quickly they found work.

This approach has serious problems which mean that, at the end of the evaluation, you are very unlikely to know whether or not the programme worked.

Problem 1: Factors outside your control affect your outcomes. For example, the number of people finding jobs is affected by the state of the wider economy. A sudden recession might make it seem like your policy is ineffective – even if it actually works well.

Problem 2: Selection bias. If you introduce an optional job training programme and compare outcomes for those who choose to use it to those who choose not to, you will get biased data. The people who use the programme are likely to be more motivated and engaged and they might have found jobs more quickly anyway.

The value of RCTs

Randomised controlled trials get around these problems altogether. In a trial, we take a group of people (to stick with the example above, unemployed job seekers) and randomly divide them into groups. One group gets the new job training programme and the other does not. At the end, we compare outcomes between the groups to estimate the impact of the programme. Because the groups are randomly assigned, we can attribute any differences between them at the end to the programme, rather than to another factor.
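The steps above – randomise into two groups, deliver the programme to one, then compare outcomes – can be sketched in a few lines of code. This is a minimal illustration with entirely made-up numbers: the participant IDs, the outcome measure (weeks until finding work) and the assumed 3-week effect are all hypothetical, not data from any real trial.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical job seekers, identified only by number.
participants = list(range(100))

# Step 1: randomly assign participants to two equal-sized groups.
random.shuffle(participants)
treatment = set(participants[:50])  # offered the new programme
control = set(participants[50:])    # not offered the programme

# Step 2: measure an outcome for everyone, e.g. weeks until finding work.
# Here the outcomes are simulated, assuming the programme cuts 3 weeks.
weeks_to_job = {
    p: random.gauss(20, 5) - (3 if p in treatment else 0)
    for p in participants
}

# Step 3: estimate the programme's impact as the difference in group means.
treatment_mean = statistics.mean(weeks_to_job[p] for p in treatment)
control_mean = statistics.mean(weeks_to_job[p] for p in control)
effect = treatment_mean - control_mean
print(f"Estimated effect: {effect:.1f} weeks")
```

Because assignment is random, the two groups are alike on average in every other respect – motivation, local economy, everything – so the difference in means is an unbiased estimate of the programme's effect. A real trial would also report a measure of statistical uncertainty around this estimate.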

Case study: Scared Straight

Scared Straight is a programme designed to deter young people from crime. Participants saw what prison was like by meeting serious offenders serving time. The hope was that, after this, the idea of a criminal lifestyle wouldn’t seem as appealing.

Several early studies of Scared Straight suggested positive results. But none of these studies had a control group showing what would have happened to participants if they hadn’t taken part.

When researchers began to conduct more rigorous randomised controlled trials, they found that Scared Straight wasn’t just ineffective – it actually increased crime compared to no intervention at all.

Authors

Louise Bazalgette

Deputy Director, fairer start mission

Louise works as part of a multi-disciplinary innovation team focused on narrowing the outcome gap for disadvantaged children.

Dave Wilson

Advisor

Dave is an Advisor in the Education team at the Behavioural Insights Team (BIT) with a focus on early years projects.

Fionnuala O’Reilly

Lead Behavioural Scientist, fairer start mission

Fionnuala is the lead behavioural scientist in the fairer start mission and is currently seconded from the Behavioural Insights Team (BIT) until March 2023.
