Research suggests AI could boost the UK economy by up to £400 billion by 2030. You may be sceptical of such grand promises, but this much is certain: any economic gains from AI will depend on widespread workforce adoption.
BIT’s recent AI & human behaviour report details how and why behaviour will determine AI adoption. A key insight is that adoption isn't binary, but rather a continuum from non-use to deep integration. The challenge isn't simply rolling out tools, but helping people and organisations progress along that journey.
At Nesta and BIT, we help partners across the public, private and third sectors tackle the challenges they face, including AI adoption. But we're also navigating this ourselves. Since ChatGPT's release in November 2022, we’ve gone from ‘zero’ GenAI to ‘deep’ adoption across our own organisations.
While we’re still progressing, a recent survey of over 150 staff across BIT and Nesta found that 9 in 10 of our people already use AI weekly for work (compared with a UK white-collar average of 68%), with over half of our staff using it multiple times a day.
Comparison of GenAI usage: UK white-collar workers vs Nesta and BIT staff
Below are some of the lessons we learned on our internal journey to high rates of AI adoption. We're sharing them so that others can draw on them to help their own teams use AI effectively.
From day one, BIT and Nesta embraced GenAI experimentation. This wasn't the obvious move it might seem in hindsight. In early 2023, many large companies banned ChatGPT over data security fears. We took a different approach: establish guardrails, then encourage exploration.
We quickly rolled out an AI policy setting clear boundaries on what could be entered into GenAI tools, while keeping staff accountable for the quality and accuracy of their work. Within those limits, we pushed staff to test AI for every possible use case and share their wins and failures widely.
An internal Slack channel for sharing AI use cases launched around ChatGPT's release, and quickly became our busiest. We also ran an internal hackathon in early 2023, giving staff dedicated time to create time-saving and innovative use cases. Early examples included AI-assisted literature reviews, synthetic polling to simulate research participants, prototype AI tools for early years professionals, and using AI image generation for real-time co-creation with participants (see below).
AI image generation has come a long way since January 2023. Prompt used: ‘A young man doing solar panel maintenance’
This grassroots approach was critical at a time when AI's value to our work was still uncertain, and top-down directives wouldn't have driven the innovation we needed.
While staff-led experimentation drove early success, data protection concerns remained a barrier to AI use. Despite policies on how to use AI tools, worries about making mistakes held people back. In a survey earlier this year, over 40% of respondents named data security as a key barrier to using GenAI in their work.
Central to addressing this was rolling out Gemini Enterprise access to all staff. While staff remained free to use other tools, the rollout provided an approved option with robust data governance, removing the burden of risk assessment for those new to using AI at work. Our latest survey shows it worked: six months later, only 10% of staff remain concerned about data security when using approved GenAI tools.
A secondary barrier identified in our earlier survey was the lack of clear use cases. Early experimentation revealed high-impact applications, but these hadn't spread widely through the organisation. To address this, we formed an AI sprint team of our keenest AI champions across all seniority levels, giving them dedicated time to scale what was working.
The team accompanied the rollout of our enterprise AI tool with an internal communications campaign highlighting concrete examples staff could try immediately, and joined team meetings to share proven use cases directly. Our most recent survey showed the share of staff citing lack of use cases as a barrier had been cut in half.
Beyond rollout, the team also explored ambitious new ways to integrate AI into our work and strengthen our research across the wider organisation. Notable recent successes include: an AI data analysis workflow that saved time on routine quantitative tasks; a survey chatbot that collected richer and broader feedback in our staff survey; and free-text categorisation of 113k anonymised school incident reports, which saved 30 days of staff time by auto-categorising all but the edge cases.
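The incident-report example works because the model only auto-labels the cases it is confident about, routing the rest to people. A minimal sketch of that triage logic is below; the keyword matcher is a stand-in for the actual classifier, which the post doesn't detail, and the category names and threshold are illustrative assumptions.

```python
# Sketch: auto-categorise free-text reports, routing low-confidence
# "edge cases" to human review. The keyword matcher is a stand-in
# for a real model; categories and threshold are hypothetical.

CATEGORIES = {
    "playground": "Playground injury",
    "bullying": "Bullying",
    "allergy": "Medical - allergy",
}

def classify(report: str) -> tuple[str, float]:
    """Return (category, confidence). Stand-in for a real classifier."""
    text = report.lower()
    hits = [label for kw, label in CATEGORIES.items() if kw in text]
    if len(hits) == 1:
        return hits[0], 0.9   # one clear match: high confidence
    if hits:
        return hits[0], 0.5   # multiple matches: ambiguous
    return "Uncategorised", 0.2

def triage(reports: list[str], threshold: float = 0.8):
    """Split reports into auto-labelled and human-review queues."""
    auto, manual = [], []
    for r in reports:
        label, conf = classify(r)
        (auto if conf >= threshold else manual).append((r, label))
    return auto, manual

auto, manual = triage([
    "Child fell from playground equipment",
    "Allergy reaction after lunch; bullying also reported",
    "Unclear note left by staff member",
])
print(len(auto), len(manual))  # 1 auto-labelled, 2 routed to review
```

The design choice that saves staff time is the confidence threshold: raising it shrinks the auto-labelled pile but reduces the risk of silent misclassification, so the right setting depends on how costly an error is in your domain.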
Beyond research and mission-led use cases, we launched the ‘AI Sandbox’ initiative, exploring how our Corporate Service teams could experiment with GenAI to transform their most common workflows. We’re now trialling an array of Gemini ‘Gems’ (similar to Custom GPTs) and agentic AI applications to support our People and Executive Assistant teams to speed up manual routine processes like staff onboarding, scheduling meetings and action tracking.
Pre- and post-training surveys showed the intended impact. Participants gained confidence using AI to complete simple tasks more quickly (like scanning relevant literature) and to tackle more complex tasks they wouldn't have attempted before (like developing first drafts of research methodologies for expert review).
The hands-on practice also fostered a more balanced view of AI risks. Whilst participants maintained appropriate concern about verifying factual accuracy of outputs after the training, their broader worries about the quality of outputs produced collaboratively with AI decreased. In short, they became more likely to trust AI to enhance their work, not compromise it.
While 75% of our staff report improvements in productivity, we recognise that there’s a difference between feeling more productive and consistently translating that into project outcomes. Putting on our empirical hats, we’re thinking about how to measure the true ROI of deep AI adoption now that we’re hitting a ceiling of baseline usage.
We’re also launching a training course through BIT Academy to help leaders apply behavioural science to drive meaningful, deeper AI adoption in their organisations. ‘Accelerating AI Adoption with behavioural science’ brings together behavioural and organisational science, the latest AI research and practical experience, including concrete lessons from our own AI journey. You can find out more and sign up here.
And if you’re interested in hearing more about the four part ‘BotCamp’ course we delivered internally or would like us to adapt it for your organisation, just get in touch.