Policymakers often face immense pressure, navigating limited time and endless, fragmented evidence. Meanwhile, recent advances in AI have raised the possibility of fast and reliable evidence synthesis.
What if there were an AI-powered tool that allowed policymakers to rapidly identify the most effective policies for any given social challenge, based on the latest evidence available from academic and grey literature?
This is what we are trying to achieve with Policy Atlas. Leveraging these advances, the tool automates complex analyses to enable policymakers to seamlessly find policy ideas backed by robust evidence.
In this project update, we share the key insights from our user research process, and explain how these are shaping the future development of Policy Atlas.
To get a clear picture of the typical tasks and challenges that policymakers face, we conducted semi-structured interviews and demo sessions with individuals experienced in policy and analytical roles across central and devolved governments. This included the Foreign, Commonwealth and Development Office, Department for Science, Innovation and Technology, and the No 10 Policy Unit.
Through this process, we identified two distinct user personas for the Policy Atlas tool:
1. The policymaker, who is often time-poor and wants quick, top-level results.
“[As a user, you] would want to know the answer to my question quickly, rather than having to open [the source] and get into it yourself”
Former special adviser to the Prime Minister on education
“You can imagine a world where you have a meeting in two hours, and you want to bring it [the key evidence] together in a logical way”
Former special adviser to the Prime Minister on education
2. The analyst, who wants to drill deeper into specific results.
“The use of this tool could be on bigger questions where there’s more space to develop an understanding. For example, when you have a couple of weeks to look at what’s been done, what different countries are doing, and have some commissions about the evidence base.”
Senior economist, Foreign, Commonwealth and Development Office
Across both user groups, we also identified several common needs and challenges that the tool could support users to overcome. For example:
Users, regardless of their technical ability, require some support in formulating their search query to get the quality of information they need.
“Part of the barrier is just formulating the search question for what you want to ask.”
Nesta mission analyst
Knowing what interventions or approaches other governments or organisations are trialling isn’t enough to support effective policymaking. The tool must enable policymakers to quickly understand how well a policy works in practice, how well evidenced it is, what it might cost, and what the resulting recommendations are, so that they can apply the search results to achieve their policy goals.
“I want to know the top thing the report found, why it happened, and the literature beneath it. I want to know the precise outcome [of the intervention]”.
Former special adviser to the Prime Minister
“Policymakers are often having to translate evidence into the ‘so what’ very fast. So constructing it [the tool] in a way such that they can already give three top lines is really helpful. Translating from detailed evidence to telling the story at pace is where it [the tool] could help.”
Former special adviser to the Prime Minister in No 10 Policy Unit (previously Cabinet Office)
Policymakers want to know how exhaustive the available evidence is, and in which contexts it was gathered, so they can judge how applicable it is to their own.
“The challenge is comparability when looking across countries - that’s just difficult.”
Former special adviser to the Prime Minister
“A key challenge is around finding the thing you wanted to know and then relating it to the UK context.”
Former special adviser to the Prime Minister
Finally, as with all AI-powered tools, policymakers want to know with certainty that the evidence they are using to inform important decisions is robust and dependable.
Many users of Policy Atlas are likely to cover aspects of both the ‘policymaker’ and ‘analyst’ personas, largely depending on the project they are working on and its specific time constraints. This is certainly true for internal users of the tool at Nesta. However, the crucial takeaway is that both user profiles share the same core need - they want a ‘blueprint’ for solving any given social challenge, one that identifies key interventions and then ranks them in terms of evidence strength and impact. As such, Policy Atlas will be useful both for quick (and reliable) insights through explainable AI and for deeper research conducted over a longer period of time.
Comparable to Nesta's blueprint for halving obesity and the EEF’s Teaching and Learning toolkit, Policy Atlas will generate a succinct table of interventions for each evidence search. For each intervention, it will determine the associated evidence strength and reported or predicted impact of the intervention. In the near future, we also hope to develop an approach to estimate cost-effectiveness of each intervention.
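To make the idea of the output concrete, the intervention table described above can be sketched as a simple ranked data structure. This is purely illustrative: the field names, rating scale, and example interventions below are assumptions for the sketch, not the actual Policy Atlas schema or its results.

```python
# Illustrative sketch only - field names, the three-level rating scale,
# and the example interventions are assumptions, not Policy Atlas output.
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    evidence_strength: str  # assumed scale: "high" / "medium" / "low"
    impact: str             # reported or predicted impact, summarised
    # cost_effectiveness would be added here in a future version

results = [
    Intervention("Targeted tutoring", "medium", "large positive"),
    Intervention("School breakfast clubs", "high", "moderate positive"),
]

# Rank interventions so the most strongly evidenced appear first,
# mirroring the "succinct table" a search would return.
order = {"high": 0, "medium": 1, "low": 2}
results.sort(key=lambda i: order[i.evidence_strength])
```

After sorting, the most strongly evidenced intervention sits at the top of the table, which is the ordering both user personas said they wanted at a glance.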
While it is possible to generate an initial list of interventions in a matter of minutes, suited to the ‘policymaker’ use case, the tool will also enable users to delve further into specific evidence. We aim to add support for deeper searches that run for longer and draw on much larger bodies of evidence (eg, thousands of papers, as opposed to tens or hundreds), and that allow users to steer and collaborate on the evidence synthesis process, catering to the ‘analyst’ use case.
We have translated the key needs explored above to the following features, which are under development:
We’re also carrying out a series of evaluations to test the efficacy, accuracy and inclusiveness of Policy Atlas’s results. For example, we’re replicating some of the systematic evidence reviews we have completed for previous Nesta projects (such as the scaling parenting interventions project) using the Policy Atlas tool, and doing a direct comparison of our results. We will conduct these evaluations in the open and share them with our users.
View an updated ‘alpha’ version of Policy Atlas below, incorporating some of the user feedback. We will continue to iterate and improve the tool.
Updated ‘alpha’ version of Policy Atlas
The positive feedback we received throughout this process has confirmed the strong appetite for a tool like Policy Atlas. Our user research has also helped us to further understand the wide range of potential applications for such a tool - from supporting specific tasks to fundamentally changing the way policymakers interact with evidence. In the immediate future, we’re particularly excited about developing the capability to identify which interventions work - helping us to achieve Nesta’s missions and also benefit policymakers.
“I can think of tons of scenarios it’d be helpful for. Firstly, at the problem definition stage… it would be helpful to sense check assumptions”
Former special adviser to the Prime Minister in No 10 Policy Unit (previously Cabinet Office)
Another user suggested running a search query live in meetings, to “boost political confidence and boost minister’s confidence”.
Former special adviser to the Prime Minister
Most strikingly, one participant observed: “Anything that helps people become more critical users [of evidence] will become game-changing in the civil service”.
Former special adviser to the Prime Minister
While this is promising, it also presents a challenge in determining the tool's most effective use cases. Even with our two distinct user personas clarified, there are still many nuances between users within these two broad groups.
We’re focusing on finishing an alpha release of the tool and testing it again with a core group of users. This is to ensure that we deliver its core value to our two key user groups: rapidly identifying effective, cost-efficient and strongly-evidenced policies for social challenges. We will also present a demo to a wider range of policymakers at Policy Live.
We are publishing our technical work-in-progress on GitHub. If you would like to collaborate with us, please get in touch!
Our aim was to understand policymakers’ most common workflows and identify the most promising opportunities for an AI-powered tool like Policy Atlas to positively disrupt and enhance their work.
We followed a three-step process:
We iterated on our interview structure as the research progressed. For example, we quickly realised that saving time was a consistent need and desire for all prospective users. Rather than repeating this learning, we reframed our interview template to ask: “What would need to be true of the outputs for the tool to actually save you time in practice?” and “What would encourage you to turn to this tool over existing ones?”.
View an early version of the Policy Atlas that we used for the user interviews