Public agencies in the UK are increasingly encouraged to take advantage of AI to improve public services, while also acting as role models for using AI responsibly and meeting public expectations around equity, fairness and transparency.
Despite the recognition that public trust is fundamental to the successful adoption and use of AI in the public sector, evidence from public polling suggests there is still a long way to go: the public is pessimistic about AI’s impact on society, particularly in developed markets.
In this project, we piloted a novel process for AI assurance: involving the public in assessing the social acceptability of the risks posed by specific AI tools. This fills a critical gap in the current landscape of services being developed to test and evaluate AI systems.
Taking a deliberative polling approach, we asked people to weigh up the benefits and risks of specific AI tools and provide guidance on what uses are socially acceptable. Using these insights, we created an easy-to-understand Social Readiness Advisory Label that can be used by public sector staff as they navigate decisions about AI.
We wanted to involve members of the public in decision-making about how AI is used in the public sector, and to give public sector staff confidence that they are using AI responsibly, with a public mandate.
Ultimately, we hope this approach will help to build public trust in how AI is being used in public services and move us closer to a future where AI works in the public interest.
We think that AI tools should be developed with input from a diverse range of stakeholders. This helps ensure they are more accurate and appropriate for the task at hand, and that the worst harms are avoided as far as possible. We call this approach participatory AI.
Our previous work on participatory AI focussed on applying collective intelligence methods at the early stages of the AI development pipeline. For example, crowdsourcing more diverse datasets for training AI models and using participatory design with frontline users to decide which problems AI tools should focus on.
With the emergence of foundation models, also called general-purpose AI, it’s become more difficult to influence these earlier phases of AI technology development. A handful of companies are developing the AI models that the majority of other AI tools are built on. Our people-centred approach to AI governance helps to bring diverse voices to the table during the deployment of AI systems.
This is particularly important in a public sector context, where AI tools that are inaccurate, biased or simply not used as intended have the potential to cause significant harm.
We hope our methodology can help the UK public sector harness the potential of AI tools to improve the efficiency and quality of public services while ensuring this is in line with public values and builds trust, transparency and accountability.
In 2025, we developed a proof of concept for an AI Social Readiness Assessment and Advisory Label. We piloted this process with two specific tools: Consult, a tool developed by the government’s AI Incubator for analysing public consultation responses, and Magic Notes, a tool by the social enterprise Beam designed to reduce the administrative burden on social workers.
Overall, we ran a series of 36 deliberative workshops across the UK, engaging a total of 281 members of the public and social care service users in a "Public AI Task Force." Participants used our digital platform, Zeitgeist, to move through a structured "mission" that included immersive educational videos and expert-led deliberation. Each pilot culminated in the creation of an AI Social Readiness Advisory Label, providing a transparent, easy-to-understand summary of public confidence and recommended safeguards:
- AI Social Readiness Assessment and Label for Magic Notes
- AI Social Readiness Assessment and Label for Consult
Ultimately, both tool developers accepted all of the recommendations from the public. We also found that taking part in the process led to an increase in trust and confidence in public sector use of AI: people value having a say as part of the AI Social Readiness Assessment.
Our ambition is for the AI Social Readiness process and label to become a gold standard for bringing in the voice of the public when assessing whether new AI tools are ready and acceptable for use in public services.
If you have an AI tool intended for use, or already in use, in UK public services and you would like it to go through the AI Social Readiness Advisory process, or to explore other ways of engaging the public in AI development and oversight, please get in touch by emailing [email protected].
This project was funded through a grant from the Future of Life Institute.
We are grateful for the support of our advisory group, which includes Greg Ashton (DWP), Rachel Astal (Beam), Vikie Bew (i.AI), Marcial Boo (Institute of Regulation), Joe Cuddeford (EPSRC), Cecily Morrison (Microsoft Research), Ben Morrin (NHS England) and Max Scantlebury (Milltown Partners).