Why we are considering the impact of generative AI on the early-years sector

We’re publishing rough-and-ready updates on our project exploring generative AI and our mission work. We want to work in the open, so it’s easy to see what we’re doing and to share your ideas for improving it; the ideas in this piece are still under development. At this early stage, we thought it would be interesting to look at what makes the early-years sector distinct from others when it comes to applying generative AI.

In exploring the potential impacts of generative AI, we have decided first to focus on one of Nesta’s three missions: a fairer start. Over the last few weeks we’ve been speaking to colleagues in Nesta’s fairer start team and experts in the early-years sector to understand how generative AI might support their work to improve outcomes and reduce inequality in the first years of children’s lives. Of the three missions, we felt this one held the most potential for impact, particularly in generating new content to use with children, analysing written information and acting as an easily accessible source of advice.

Our interviews with colleagues and experts so far are surfacing lots of opportunities (which we’ll write more about soon).

How is the early-years sector different from others?

Because young children’s brains are still forming, and their cognitive and emotional functions are still developing, decisions about their care and learning can have consequences that shape the rest of their lives. Children also cannot make informed decisions about their own education or give consent to participate in the way that adults can. As a result, the success or failure of generative AI applications in the early years could have far greater and longer-lasting consequences than in other areas.

Technology companies are already creating products designed to be used directly by young children, but there is a legitimate debate to be had about whether the technology is safe for children to use at this stage. Generative algorithms can respond in unexpected ways and are trained on information from a wide range of sources that often haven’t been thoroughly vetted. We’ll be looking mostly at applications used by carers, so that an adult has the opportunity to check any generated content and decide whether it is appropriate.

Why should we experiment with generative AI in early years?

There is a significant shortage of workers in the early-years sector, driven partly by low wages, high workloads and immigration challenges that the sector says have worsened since Brexit. On top of this, far more workers will be needed to fulfil the UK government’s promised increase in free childcare hours, with demand likely to grow over the next few years. Workers are also having their time squeezed by a rising administrative load and greater expectations from parents for reporting on their children’s progress.

Generative AI could help to streamline administrative tasks such as reporting to parents and checking funding eligibility, freeing workers to spend more time with children and easing the pressure of workforce shortages. We’ll explore applications that can help early-years practitioners achieve this.
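As a flavour of what that might look like in practice, here is a minimal sketch of using a large language model to draft a parent update from a practitioner’s rough notes. The prompt wording, example notes and model choice are our assumptions for illustration, not a description of an existing product.

```python
# Illustrative sketch only: drafting a parent progress update from a
# practitioner's rough observation notes, for the practitioner to
# review and edit before sending. Prompt wording and model choice are
# assumptions, not part of an existing prototype.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

notes = "built a block tower, counted to 8, shared snack with a friend"

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            "Turn these nursery observation notes into a short, warm "
            "progress update for a parent:\n" + notes
        ),
    }],
)

# A practitioner checks the draft before anything reaches a parent.
print(completion.choices[0].message.content)
```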

How can we apply generative AI?

Workers in the early-years sector vary widely in technical aptitude, with some struggling to use computers at all, according to our interviewees. This means that any technological approach to improving the way early-years settings work will need to be very straightforward to use, with familiar forms of interaction.

Luckily, the conversational interface of most generative AI platforms lends itself to creating accessible prototypes, in contrast with other technologies that require extensive training. As part of this work, we’re building a handful of prototypes to bring our ideas to life for people in the early-years sector. We’ll experiment with making our prototypes easy to access through apps that early-years practitioners already use, such as WhatsApp.
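To illustrate how little plumbing that could involve, here is a minimal sketch of a WhatsApp-accessible assistant, assuming a Twilio WhatsApp number configured to POST incoming messages to a small Flask webhook. The route, system prompt and model are illustrative assumptions rather than our actual prototype.

```python
# Minimal sketch of a WhatsApp-accessible assistant. Assumptions (not
# our actual prototype): a Twilio WhatsApp number configured to POST
# incoming messages to this Flask webhook, and an OpenAI API key in
# the environment.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()

SYSTEM_PROMPT = (
    "You are a friendly assistant for early-years practitioners. "
    "Answer briefly and practically."
)

@app.route("/whatsapp", methods=["POST"])
def whatsapp_reply():
    # Twilio delivers the incoming message text in the 'Body' form field.
    incoming = request.form.get("Body", "")
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": incoming},
        ],
    )
    # Wrap the reply in TwiML so Twilio relays it back over WhatsApp.
    reply = MessagingResponse()
    reply.message(completion.choices[0].message.content)
    return str(reply)
```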

Supporting curiosity – our first prototype?

In building our first prototype, we have decided to leverage the simple fact that generative AI is good at “making stuff up”. When working with young children, the simplest statement or question can be the starting point for a whole day of learning and fun. “I saw a snail in my garden!” can be the inspiration for games, reflection and physical movement.

The practice of responding to a child’s interests with questions and activities, known as contingent talk, is vital in supporting language development. Children from socially disadvantaged backgrounds hear less contingent talk, which can affect their speech development and ability to learn.

Despite the importance of this skill, early-years practitioners receive little training in how to do contingent talk well, according to Liz Hodgman at the Local Government Association. We spoke to Liz about how generative AI tools such as ChatGPT could support early-years learning, and it quickly became clear that responding to children’s ideas and questions is one of the key challenges in these settings.

“I often hear from early-years practitioners who feel burnt out from the workload and like they’ve completely run out of ideas for activities. They’re leaving the sector because they’ve reached the end of their creativity and energy,” Liz explained.

Generative AI could support practitioners in coming up with creative ideas for contingent talk and playful activities. These ideas could be presented in a fun way that lets practitioners use their own skills to decide how to implement them in the early-years setting: for example, as flashcards with images representing each activity. The tool could also remember previous activities and create learning pathways for children, or even combine the interests of multiple children to keep them engaged.
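As a hedged sketch of how such a tool might work, the function below turns a child’s remark into flashcard-style activity suggestions while avoiding recent repeats. The function name, prompt wording and model choice are our illustrative assumptions, not a finished design.

```python
# Illustrative sketch only: turning a child's remark into
# contingent-talk activity ideas. Prompt wording, function name and
# model are assumptions, not a finished design.
from openai import OpenAI

client = OpenAI()

def suggest_activities(child_remark: str, previous_activities: list[str]) -> str:
    # Remind the model of recent activities so suggestions stay fresh.
    recent = ", ".join(previous_activities) or "none"
    prompt = (
        f"A young child just said: '{child_remark}'.\n"
        "Suggest three playful follow-up activities an early-years "
        "practitioner could try. Present each as a short flashcard: "
        "a title, one sentence of instructions, and a question to ask "
        "the child.\n"
        f"Avoid repeating these recent activities: {recent}."
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

# Example from the article: a remark that could spark a day of learning.
print(suggest_activities("I saw a snail in my garden!", ["leaf rubbing"]))
```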

There are a few risks in this approach that we’ll need to explore. First, users will need some training to cope with unexpected results from the algorithm, such as hurtful speech or inappropriate content. There’s also a risk of confidently presented but incorrect information, which would be damaging for children’s learning. And we’ll have to watch out for normative biases baked into the algorithm’s training data, such as gendered or racialised stereotypes, that could influence the results. These risks underline the critical need for a competent ‘user in the loop’ – generative AI tools will still need well-trained carers to use them, but they can hopefully ease workload and increase job satisfaction.

We’ll follow up soon on how our work develops and what other use cases we will consider. Until then, get in touch if you’d like to chat about this work.

Author

Louis Stupple-Harris

Foresight Engagement Lead, Discovery Hub

Louis is the Foresight Engagement Lead within Nesta’s Discovery Hub, which aims to create a link between Nesta’s current portfolio and our pipeline of future work.
