About Nesta

Nesta is a research and innovation foundation. We apply our deep expertise in applied methods to design, test and scale solutions to some of the biggest challenges of our time, working across the innovation lifecycle.

Testing AI-powered interviewers with vulnerable groups

AI-powered interviewers have the potential to reach larger and more diverse groups of participants. The nature of AI means these interviews can be quick, accessible and flexible - and therefore easier to fit around busy lives than a standard interview. It can mean hearing from those who often don’t take part in research because they have limited time, because they face language barriers, or because they feel more comfortable sharing sensitive experiences with an AI than with a human interviewer.

But there are caveats. Because AI-powered interviews are still relatively new, it’s not yet clear whether the participants reached through this medium will differ from those reached through traditional interviews, or whether they will share more - or less - with an AI than with a human researcher. The most urgent questions are around ethics, and specifically how to conduct AI interviews safely and appropriately, especially with vulnerable groups.

As part of the fairer start mission, we recently trialled AI interviews with parents of young children in London to understand both the approach’s potential and its limitations. We’re hopeful that it could enable us to reach many more parents than traditional methods (something like 200 rather than 20-30), and give us richer insight into families’ diverse experiences and needs.

How does it work?

The AI-powered interviewer functions like a guided conversation: it asks respondents a series of open and closed questions, adapting follow-ups based on their responses. Families can reply by text or voice note, and they can choose which language to be interviewed in, with automatic translation built in. Safeguarding features and escalation triggers ensure that any concerning responses are flagged for review by a human researcher.
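To make the flow concrete, here is a minimal sketch of that kind of guided-conversation loop. This is purely illustrative - the class and function names (`Question`, `run_interview`, `needs_review`) and the keyword list are assumptions, not Nesta's actual system, and a real interviewer would generate adaptive follow-ups and handle translation.

```python
# Illustrative sketch only: a guided-conversation loop in which each answer
# is checked for safeguarding concerns and flagged for human review.
from dataclasses import dataclass


@dataclass
class Question:
    prompt: str
    open_ended: bool = True  # open vs closed question


@dataclass
class Turn:
    question: str
    answer: str
    flagged: bool = False  # True -> a human researcher reviews this turn


# Crude stand-in for a real safeguarding check (assumed keyword list).
SAFEGUARDING_TERMS = {"self-harm", "hurt myself", "domestic violence"}


def needs_review(answer: str) -> bool:
    """Return True if the answer contains any high-risk term."""
    text = answer.lower()
    return any(term in text for term in SAFEGUARDING_TERMS)


def run_interview(questions, get_answer):
    """Ask each question in turn and flag concerning answers.

    A production system would also generate adaptive follow-ups here;
    this sketch keeps the script fixed for clarity.
    """
    transcript = []
    for q in questions:
        answer = get_answer(q.prompt)
        transcript.append(Turn(q.prompt, answer, flagged=needs_review(answer)))
    return transcript
```

The key design point mirrored from the text is that the AI never acts on a concerning response itself: it only marks the turn so a human can step in.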

Nesta has used AI-powered interviewers before, for example, to explore the experiences of people who had heat pumps installed. But this was the first time we tested the approach on more sensitive topics, with a potentially vulnerable audience.

The appeal of an AI-powered interviewer is that it makes research more accessible and scalable. Families can take part at a time and pace that suits them, in their own language, and without the pressure of speaking directly to a researcher. This flexibility allows us to hear from many more families than traditional interviews would, while possibly helping parents feel more comfortable sharing personal or stressful experiences with a non-judgmental AI.

That said, using AI in such a sensitive context raises important questions about trust, timing, ethics, and emotional safety - this is especially the case if we are interviewing new parents who may have experienced birth trauma, postnatal depression, or other challenges. Our central focus therefore became not just whether families would engage, but whether the AI could do so responsibly.

What is the process of testing?

Before speaking to families, we confirmed with our partners and academic collaborators that it was appropriate to test this AI-powered interviewer in a sensitive context. We then recruited five participants who matched the demographic we wanted to reach, using existing contacts eager to share their views and help shape the AI-powered interviewer. Each took part in a 45-minute online conversation and received a gift voucher as a thank you. Our goal was to understand whether it could be a trusted feedback channel, what would build confidence in it, and how families expected it to work.

From previous projects using AI-powered interviewers, we knew the technology’s strengths (empathetic, dynamic replies) and its limitations (giving short, surface-level answers when prompts were not precise, and struggling with vague or confrontational responses). Safety measures were already in place, including pause, abort and redirect controls (for both parents and the AI-powered interviewer), real-time safeguarding flags, and the ability to share trusted resources (eg, phone numbers for relevant helplines) mid-chat.

What we did not yet know was how comfortable families would feel answering sensitive questions, or what support or reassurance they would need to engage openly.

To test this, we developed a prototype AI-powered interviewer that gradually increased in emotional intensity. It began with simple factual questions (“How old are your children?”), progressed to reflective prompts (“What’s something you’ve done with your child recently that made you smile?”), and finally explored more sensitive topics, such as stress or birth experiences. For this initial test, participants were not asked to share their own experiences. Instead, we explored how they felt about an AI-powered interviewer asking these questions and how comfortable they might be engaging with it.
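The staged structure described above can be sketched as a simple ordered script. The stage names, intensity values and third-stage wording below are illustrative assumptions (in the initial test, parents were asked how they felt about such questions, not for their own experiences).

```python
# Illustrative sketch: an interview script ordered by emotional intensity,
# mirroring the factual -> reflective -> sensitive progression described above.
INTERVIEW_STAGES = [
    {"stage": "factual", "intensity": 1,
     "questions": ["How old are your children?"]},
    {"stage": "reflective", "intensity": 2,
     "questions": ["What's something you've done with your child recently "
                   "that made you smile?"]},
    # Assumed wording: the test asked about comfort, not the experiences themselves.
    {"stage": "sensitive", "intensity": 3,
     "questions": ["How would you feel about an AI asking you about stress "
                   "or birth experiences?"]},
]


def ordered_prompts(stages):
    """Yield prompts in increasing emotional intensity."""
    for stage in sorted(stages, key=lambda s: s["intensity"]):
        yield from stage["questions"]
```

Keeping the escalation explicit in the script (rather than leaving it to the model) makes it easy to review with safeguarding partners before any parent sees it.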

The five parents generally saw the AI-powered interviewer as a safe, non-judgmental space for sharing personal experiences. But there is a risk that participants misunderstand the AI-powered interviewer’s capabilities, or assume that it could provide emotional or therapeutic support via the interview. This is a key ethical consideration: it’s important to communicate clearly what the AI-powered interviewer can and cannot do, set clear safeguarding boundaries, and ensure processes are in place to respond in real time if a participant discloses something requiring immediate attention or support.

In parallel, we ran a Doteveryone-inspired workshop with the project team and project partners to systematically map out all the potential intended and unintended consequences of using the AI-powered interviewer with our intended audience. The team identified over 50 possible consequences, both positive and negative, and together, we decided on the highest-risk areas that we need to address before moving forward. These included: 

  • emotional disclosures made without appropriate follow-up, leading to distress, particularly if the participant believes the bot will take action in response
  • limited capacity, compared to a human interviewer, to identify potential safeguarding risks or respond with empathy and appropriate follow-up
  • respondents misunderstanding the AI-powered interviewer’s role, mistaking it for a support service or a direct line to our implementing partners’ services, rather than a research AI-powered interviewer
  • participants rushing through responses to receive an incentive, potentially reducing depth, honesty, or the usefulness of the feedback

What did we learn?

While there are risks to using AI-powered interviewers, many of these can be addressed through design and development. For us, there were four key takeaways:

1. Establish ongoing processes to monitor risk

The AI-powered interviewer flags anything that raises safeguarding concerns, and these responses are then reviewed by a human. We tracked the proportion of flagged responses that were reviewed during our content audits, and set a monthly audit cycle to ensure issues can be caught and addressed promptly.
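A minimal sketch of the audit metric described above - the share of safeguarding-flagged responses that a human has actually reviewed. The field names are assumptions, not Nesta's data model.

```python
# Illustrative audit metric: what proportion of flagged responses
# have been reviewed by a human? (Field names are assumed.)
def review_coverage(responses):
    """responses: iterable of dicts with boolean 'flagged' and 'reviewed' keys.

    Returns the share of flagged responses already reviewed,
    or 1.0 when nothing was flagged (full coverage by definition).
    """
    flagged = [r for r in responses if r["flagged"]]
    if not flagged:
        return 1.0
    reviewed = sum(1 for r in flagged if r["reviewed"])
    return reviewed / len(flagged)
```

Tracking this number each audit cycle makes it obvious when flagged responses are piling up unreviewed.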

2. Clearly communicate with participants about the use of AI and their data 

Transparency is important: throughout the interview, participants should have clear expectations about what the AI-powered interviewer can and can’t do. Building explanations of this into the interview scripts themselves could help to address some risks. However, there are likely to be additional high-risk consequences and other dimensions that will need to be explored and tested in any specific context where an AI-powered interviewer is being used.

3. Thoroughly test the AI-powered interviewer 

On the back of our initial interviews, we’ll be creating a second prototype with improved transparency and safeguarding. We will rigorously test and quality assure this new prototype, working in partnership with an academic ethical review board. The testing will include running a series of mock interviews designed to uncover potential risks. We’ll walk through every possible response path, simulate interviews to assess tone, pacing and conversational flow, and run safeguarding scenario drills to ensure any concerning disclosures would reliably trigger escalation to a human researcher.

We’ll also stress-test edge cases, such as repetitive, unclear, or silent inputs, and review for inconsistencies or loops (eg, the AI-powered interviewer getting ‘stuck’) in the AI-powered interviewer’s behaviour. Each round of testing will help us refine prompts, tighten fallback messages, and strengthen escalation logic.
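One of those checks - spotting the interviewer getting ‘stuck’ in a loop - can be sketched as a simple transcript test. This is a hypothetical check, not Nesta's test harness; the threshold is an assumption.

```python
# Illustrative loop detector: flag a transcript if the interviewer sends
# the same (whitespace- and case-normalised) message too many times.
def detect_loop(interviewer_messages, max_repeats=2):
    """Return True if any message appears more than max_repeats times."""
    counts = {}
    for msg in interviewer_messages:
        key = " ".join(msg.lower().split())  # normalise case and spacing
        counts[key] = counts.get(key, 0) + 1
        if counts[key] > max_repeats:
            return True
    return False
```

Run against simulated transcripts, a check like this catches the repetitive fallback behaviour before a participant ever experiences it.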

4. Ensure safeguarding concerns are escalated to a human

Even with mitigations, AI-powered interviewers might still miss some serious issues, so high-risk language (such as references to self-harm or domestic violence) should be flagged for immediate human review. We will use a simple text-scan mechanism to flag these messages.
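A text scan of this kind can be as simple as pattern matching. The sketch below is illustrative only: the pattern list is an assumption, and any real list would be developed and maintained with safeguarding experts.

```python
# Illustrative version of a simple text scan for high-risk language.
# The patterns here are placeholders, not a vetted safeguarding list.
import re

HIGH_RISK_PATTERNS = [
    r"\bself[- ]?harm\b",
    r"\bhurt (myself|my child)\b",
    r"\bdomestic (violence|abuse)\b",
]


def flag_for_human_review(message: str) -> bool:
    """Return True if the message matches any high-risk pattern."""
    return any(re.search(p, message, flags=re.IGNORECASE)
               for p in HIGH_RISK_PATTERNS)
```

The deliberate trade-off is to over-flag: a false positive costs a human a minute of review, while a false negative could mean a serious disclosure goes unseen.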

Exploring responsible AI

While AI-powered interviewers are a promising approach to reaching more participants and finding out about the experiences of more diverse groups, there are still risks and limitations to be explored. Before any AI-powered interviewer is used with participants, it needs to be rigorously quality assured; any risks mapped and analysed; and prototypes tested. If these steps are followed, there is the potential for AI to safely guide participants through emotionally complex conversations, without overstepping its remit and with safeguarding concerns automatically flagged to a human reviewer.

We are increasingly confident that we can use AI in ways that empower, not exploit, and that extend reach without compromising duty of care and ethical research practices.

If you are exploring similar questions or building responsible AI tools for research and social good, we’d love to hear from you.

Get in touch at [email protected] or follow our journey as we continue testing AI interviews with new parents.

Author

Max Blore


Designer, Design & Technology

He/Him

Max is a designer for the fairer start mission.