AI is at the heart of government strategy to enhance public service delivery, but its successful adoption requires a clear understanding of public opinion. For public sector organisations, building and maintaining public trust is key to deploying new AI tools responsibly.
Our new polling, carried out in partnership with Opinium, surveyed 2,050 members of the public. The findings highlight five critical areas of concern and support that should inform how public sector organisations approach AI adoption.
A significant portion of the UK public is not yet sold on the potential of AI in the public sector, even when forced to choose between two contrasting views.
This concern is compounded by a lack of trust: just 23% agree that public sector organisations will use AI responsibly.
Simply advocating for the ‘AI opportunity’ is unlikely to change many minds, and may even turn people off. It is critical to demonstrate, and communicate, how an AI tool has been developed responsibly and how the risks of its use will be minimised.
The public's primary fears about AI focus on fundamental issues of quality and control.
Public sector managers must tell the public about the steps taken to address these risks. Our deliberative research shows that people will adjust their levels of concern once they know what has been done. But, be warned: they’ll probably also see through any bulls**t.
Public support for AI varies significantly depending on the specific public service.
The public services where AI receives the greatest support are:
A plurality of respondents oppose the use of AI in:
This demonstrates why a context-specific approach to public engagement is required. Learning more about a specific tool can also help people come to a more nuanced view. For instance, our deliberative research found that 83% felt positively about social workers using a note-taking tool like Magic Notes.
A majority of the public feel they must be consulted before AI is introduced in the public sector.
Respondents also want the consultation to be timely: 27% believe it should come while the tool is being tested and another 28% believe it should happen after the trial but before the full rollout. This highlights the need to find the right time to consult the public, who will also ask questions about the evidence, cost, and reallocation of any savings.
Public sector procurement teams and finance directors might not like this, but when evaluating new AI tools, the public clearly prioritises a democratic mandate and legitimacy over financial considerations.
Nearly half of respondents (46%) believe public support should be the more important criterion for evaluating AI tools, compared to just 18% who prioritise financial considerations.
This finding underscores the need to demonstrate that genuine engagement has taken place, securing a clear ‘social licence’ or mandate before committing to a tool.
Nesta's AI Social Readiness advisory label process has been designed to meet these challenges by providing independent social assurance. It uses deliberative polling to involve over 100 representative members of the public in assessing the social acceptability of specific AI tools for public services.
Find out more about two AI tools that have already been through this process:
If you’d like to know more, get in touch on [email protected].
Note: We worked with Opinium to poll 2,050 UK adults from their Political Omnibus panel. The sample was weighted to nationally representative criteria. Fieldwork took place 19-21 November 2025.