Public servants have tried to reduce inequality for decades, but it persists, and there's still a lack of robust evidence about what works.
By testing new ideas, gathering information on what works, and looking closely at the impact of interventions, an experimental approach can help to change this: improving equality, diversity and inclusion, creating more diverse workplaces, and designing policies that benefit communities equally.
At the Innovation Growth Lab, one of Nesta's specialist enterprises, we work with policymakers to design and test approaches to innovation. We gather research and insights from across the world to share and replicate promising policy ideas.
Experimentation doesn’t fix every problem public servants face. But here’s some inspiration for how to bring better evidence to decision-making in government, and to proposed solutions to improve equality, diversity and inclusion.
Experimentation is a tool to explore the unknown. We use experiments to try out a new idea and gather information on how it works and what effect it has. As a result, we need to plan and structure experiments to give us an opportunity to learn. They will often involve some form of control: we keep parts of a process constant so that we can isolate the impact of the change we make. Experiments can be used to evaluate the overall impact of a policy or intervention — or test particular elements of it to find out what works, when and for whom.
A key concern for public servants wanting to use experimentation is how to remove bias. That's why experimenters value randomised controlled trials (RCTs) so highly. These experiments remove selection bias, so we don't confuse the impact of our programme with pre-existing differences between those who choose to take part in it and those who do not. Using RCTs makes it more difficult to cherry-pick information that supports an existing belief about the effectiveness of a policy or intervention. Questions of equality, diversity and inclusion are particularly susceptible to unconscious bias, which makes this kind of experiment especially useful.
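To see why randomisation removes selection bias, consider a minimal sketch (the participant numbers and effect size below are entirely hypothetical, using only Python's standard library): when participants are assigned to treatment and control at random, pre-existing differences average out across the two groups, so the difference in average outcomes isolates the programme's effect.

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical participants: each has a baseline outcome that varies
# for reasons unrelated to our programme (pre-existing differences).
participants = [{"baseline": random.gauss(50, 10)} for _ in range(1000)]

# Random assignment: every participant is equally likely to be treated,
# regardless of their baseline -- this is what removes selection bias.
for p in participants:
    p["treated"] = random.random() < 0.5

# Simulate outcomes with an assumed true programme effect of +5 points.
TRUE_EFFECT = 5
for p in participants:
    p["outcome"] = p["baseline"] + (TRUE_EFFECT if p["treated"] else 0)

treated = [p["outcome"] for p in participants if p["treated"]]
control = [p["outcome"] for p in participants if not p["treated"]]

# Because assignment was random, baselines are balanced across groups,
# so the difference in means estimates the causal effect.
estimated_effect = mean(treated) - mean(control)
print(f"Estimated effect: {estimated_effect:.1f}")
```

If participants had instead self-selected into the programme, the two groups would differ at baseline and the simple comparison of means would no longer recover the true effect.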
When done well, experiments help us improve every stage of designing and delivering policies and interventions.
Before designing policies or interventions, we need to understand the problem we're trying to fix, or the solution being considered. For example, a recent experiment looked into whether foster care agencies discriminate against gay couples. Researchers sent emails about becoming a foster carer from fictitious same-sex and heterosexual couples. They found that while all applications received a response, male same-sex couples received shorter responses over a longer timeframe.
This experiment established that in this instance, discrimination occurred specifically against male same-sex couples, not against all gay couples wanting to foster children. An intervention aimed at reducing bias against male same-sex couples would therefore be more effective than one tackling bias against gay couples in general.
Once we have established that a problem exists, it’s tempting to jump straight into applying a solution. Instead, we can use experiments to explore the mechanisms that have created the problem. We can expose any unspoken assumptions behind the proposed solution. For example, the Canadian federal government has tried to reduce discrimination in hiring through sponsored wage subsidies. To explore whether firms responded to the subsidies, researchers randomly sent fictitious applications to private firms, indicating that the applicant was a wheelchair user on some applications and not on others. Some applications mentioned the government subsidy for wheelchair users, while others did not.
This study found that the fictitious wheelchair user applicants were called back at just half the rate of applicants who did not disclose a disability. Mentioning the government subsidy did nothing to close this gap. This experiment helped disprove the assumption that firms were not hiring wheelchair users because they were unaware of the existing subsidy. Therefore, it’s likely that the subsidy scheme would need to be redesigned or complemented with other interventions to eliminate discrimination against people with disabilities.
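The logic of such a correspondence study can be sketched as a simple two-by-two design (all callback rates below are illustrative placeholders, not the study's actual figures): each fictitious application randomly varies both whether a disability is disclosed and whether the subsidy is mentioned, so comparing cells isolates each factor's effect.

```python
import random

random.seed(1)

# Hypothetical callback rates for each combination of the two factors.
# These numbers are made up for illustration -- in this sketch the
# subsidy mention changes nothing, mirroring the study's finding.
CALLBACK_RATE = {
    ("no_disability", "no_subsidy"): 0.30,
    ("no_disability", "subsidy"): 0.30,
    ("wheelchair", "no_subsidy"): 0.15,
    ("wheelchair", "subsidy"): 0.15,
}

# Each fictitious application is randomly assigned to one of the
# four cells, and a callback is simulated at that cell's rate.
cells = list(CALLBACK_RATE)
outcomes = {cell: [] for cell in cells}
for _ in range(8000):
    cell = random.choice(cells)
    outcomes[cell].append(random.random() < CALLBACK_RATE[cell])

# Comparing rows reveals the disability gap; comparing columns shows
# whether mentioning the subsidy narrows it.
for cell in cells:
    rate = sum(outcomes[cell]) / len(outcomes[cell])
    print(cell, f"{rate:.2f}")
```

Because the only systematic difference between otherwise-identical applications is the randomised attribute, any gap in callback rates can be attributed to that attribute rather than to applicant quality.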
Once we are confident that our proposed intervention addresses the underlying causes of inequality, there are still many design decisions to be made before we can implement it. Experiments allow us to hone and tweak the details of our programme, trying out different versions of essentially the same intervention to see which one works best. This ‘A/B testing’ approach is very popular in the private sector. It’s also led to impressive results at low cost in public policy, for example in boosting tax compliance. However, we need to be aware of the limitations of this approach.
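The mechanics of an A/B test are straightforward: split recipients at random between two versions of a message and compare response rates. Here's a minimal sketch with invented response rates (standard library only), including a rough two-proportion z-score to gauge whether an observed gap is larger than chance alone would produce.

```python
import random
from math import sqrt

random.seed(0)

# Hypothetical true response rates for two message variants.
TRUE_RATE = {"A": 0.10, "B": 0.13}

# Randomly assign each recipient a variant, then simulate a response.
results = {"A": [], "B": []}
for _ in range(5000):
    variant = random.choice(["A", "B"])
    results[variant].append(random.random() < TRUE_RATE[variant])

# Compare observed response rates.
p_a = sum(results["A"]) / len(results["A"])
p_b = sum(results["B"]) / len(results["B"])

# Rough two-proportion z-score: how many standard errors apart
# the two observed rates are, under a pooled response rate.
pooled = (sum(results["A"]) + sum(results["B"])) / 5000
se = sqrt(pooled * (1 - pooled) * (1 / len(results["A"]) + 1 / len(results["B"])))
z = (p_b - p_a) / se
print(f"A: {p_a:.3f}  B: {p_b:.3f}  z = {z:.2f}")
```

In practice, sample sizes should be decided before the test begins so that a gap of the size that matters for policy would be detectable.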
A cautionary tale comes from recent information campaign experiments aimed at increasing the take-up of the Earned Income Tax Credit, a federal programme aimed at low- to moderate-income working Americans, particularly those with children. The experiments varied almost every aspect of the messages sent to eligible families — the content, design, messenger and mode. None of the experimental messages led to a meaningful increase in the share of households claiming the credit. This was a powerful reminder that serving hard-to-reach communities often requires hard work to simplify underlying government systems, rather than tweaks to implementation and delivery.
Experiments are commonly used to measure the impact of a programme — Nobel prize-winning economists Abhijit Banerjee, Esther Duflo and Michael Kremer have given some famous examples. But public servants who want to improve equality, diversity and inclusion could find inspiration in the impact evaluation of an awareness-raising campaign aimed at reducing discrimination in student evaluations.
Previous research showed that female teachers receive lower evaluation scores despite performing as well as their male counterparts. But there wasn't enough evidence to know whether training to raise awareness of gender bias could help tackle this issue. An experiment at a French university tested the effectiveness of two types of email campaigns: one asking students not to discriminate when evaluating their teachers, the other sharing results from studies on gender biases in evaluations. The experiment revealed that only the second, informational intervention was successful. This provided university administrators with a simple but powerful tool to create fairer evaluation processes.
Experiments can help public servants to tackle inequality because they support them to understand the problem, design and improve solutions with the support of evidence, and evaluate the real impact of interventions. All experimentation requires forward planning and a commitment to use the findings to inform decision-making. RCTs are a particularly valuable form of evidence-gathering for public servants who work on inequality issues and are concerned about biases. But RCTs in particular require a robust structure designed early on – they can't be added on at the end of a policy's design or implementation.
Learning from what others have already tried and tested is essential to improving equality, diversity and inclusion. This helps us address different dimensions of the challenge and avoid doing the same things over and over.
Experiments are just one tool of many that public servants will need to make the world a fairer place. But when used correctly, they edge us closer to that goal and deliver better results. We hope you're inspired to use them in your work and get in touch with us for more information on running experiments in public service.
The Innovation Growth Lab (IGL) is one of Nesta’s specialist enterprises. Led by Nesta in collaboration with government partners, IGL works globally to promote experimentation and ensure that innovation and growth policy is informed by new ideas and robust evidence.
Nesta recently published a new Equity, Diversity and Inclusion (ED&I) strategy, which features nine goals to advance ED&I in every dimension of what we do. Read the strategy.