How might the early-years sector use generative AI?

What are the different ways that generative AI could be used in the early-years sector? As part of Nesta’s project on how generative AI will affect our missions and work, we’ve explored this question and identified a few use cases that we are now working on in greater detail.

Emerging potential for generative AI in the early years

Generative AI could have a wide-ranging impact on the early-years education sector. At its best, it might make education and educational content more personalised and engaging, while easing the workload for caregivers and educators. At the same time, we need to be mindful of the risks and challenges associated with using this technology.

From our desk research and insights garnered from domain experts, including early-years professionals, educational app developers, and technology specialists, we've identified multiple potential use cases categorised into four major domains: teaching, operations, family support, and content. The use cases are summarised in the diagram below.

Our focus remains on large language models (LLMs), which can generate high-quality text in response to a user's prompt. However, we occasionally also consider models that can create other types of content, such as audio, images and even video.

The full list of use cases can be found here. In the following, we describe each of the four major domains in turn.

Teaching: Creative tools for supporting teaching

LLMs could become useful teaching assistants. In our interviews with early-years practitioners, we heard about the challenge of remaining creative and energised in the face of a substantial workload. LLMs could help to reduce some of the workload and provide creative support for practitioners.

These models can, in principle, produce lesson plans, generate activity ideas, design games or weave stories – all aligned to an individual child's interests and developmental level. Generative AI could support, for example, the practice of responding to a child's interests with questions and activities (known as contingent talk), which is vital for language development. Some early-years educators are already exploring such use cases.

Early-years experts who we interviewed also raised important questions about the challenges of ensuring that outputs generated by LLMs are safe and have sufficient educational quality. Generative AI models are trained on vast amounts of data from the internet, which means they can sometimes reflect and perpetuate biases present in those datasets. There’s also a risk of the LLMs ‘hallucinating’ (ie, producing incorrect information), which might have a negative effect on children’s learning.

In our project, we are exploring approaches such as retrieval-augmented generation to ground LLMs in best practice and official curricula such as the Development Matters guidance for England to reduce hallucinations. Nonetheless, using teaching tools based on generative AI will require a competent ‘practitioner in the loop’ who is capable of evaluating the quality of the generated output.
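As a rough illustration of how retrieval-augmented generation grounds an LLM, the sketch below retrieves the most relevant passage from a small store of guidance snippets and prepends it to the prompt. The passages and the word-overlap scoring are illustrative stand-ins: a production system would use embedding similarity over real excerpts from guidance such as Development Matters.

```python
# Minimal retrieval-augmented generation (RAG) sketch:
# fetch the guidance passage most relevant to a request, then
# prepend it to the prompt so the model's answer is grounded in
# trusted text rather than its training data alone.
# These passages are hypothetical stand-ins for official guidance.
GUIDANCE = [
    "Communication and language: respond to children's interests with open questions.",
    "Mathematics: encourage counting everyday objects during play.",
    "Physical development: provide opportunities for climbing and balancing.",
]

def retrieve(query: str, passages: list[str]) -> str:
    """Return the passage sharing the most words with the query.
    Real systems would use embedding similarity instead."""
    q = set(query.lower().split())
    return max(passages, key=lambda p: len(q & set(p.lower().split())))

def grounded_prompt(query: str) -> str:
    # Build a prompt that instructs the model to rely on the
    # retrieved guidance only, which helps reduce hallucinations.
    context = retrieve(query, GUIDANCE)
    return (
        "Using only the guidance below, answer the practitioner's question.\n"
        f"Guidance: {context}\n"
        f"Question: {query}"
    )

prompt = grounded_prompt("How can I support a child's counting during play?")
```

The grounded prompt would then be sent to the LLM; the retrieval step is what ties the output back to a trusted source that a practitioner can check.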

Operations: Easing the administrative burden

Streamlining administrative tasks is another promise of generative AI. Early-years educators have described as 'daunting' the amount of paperwork involved in lesson planning, teacher-caregiver communication and record-keeping. A 2016 survey by the Professional Association for Childcare and Early Years found that the burden of paperwork was the biggest issue for a quarter of early-years professionals.

Emerging research suggests that generative AI, and LLMs in particular, might be able to ease this burden. For example, a randomised controlled trial from the Massachusetts Institute of Technology showed that LLM-based tools such as ChatGPT can speed up a wide range of business writing tasks by 37%. Moreover, in the same study, using a large language model not only improved operational efficiency but also raised the quality of the written outputs and increased participants' job satisfaction. It is therefore foreseeable that LLMs could support early-years practitioners in writing messages to parents, creating newsletters, generating enrolment forms, and drafting meeting agendas and parent surveys.

With the increasing usage of digital platforms for managing early-years settings, developers of these platforms will likely also include generative AI in their offer, thus paving the way for a widespread adoption of this new technology. For example, the Sidekick product developed by the company Famly, the biggest early-years software provider in the UK, is an AI-powered assistant for writing newsfeed posts, observations, assessments and two-year checks.

Family support: A caregiving co-pilot?

Generative AI could also bolster family involvement in children's learning. As with the teaching use cases described above, creative support such as coming up with personalised activity ideas could also be useful for parents, helping them to identify new ways to engage their child.

There is also emerging anecdotal evidence of some parents using tools like ChatGPT to support them in household tasks such as planning meals or scheduling daily activities. The private sector is developing tools like the instant messaging-based Milo, which uses the GPT-4 large language model and aims to streamline household tasks. If such tools can significantly reduce caregivers’ mental load and free up time, they might indirectly contribute to a better home learning environment.

More generally, supporting parents and family members with evidence-based advice through chatbots is an active area of research and experimentation. For example, researchers at the University of Oxford in collaboration with other organisations are developing a chatbot to promote playful parenting and prevent violence against children. Researchers based in Argentina and the US have trialed a micro-intervention using a chatbot to teach parents how to praise their children. The popularity of ChatGPT, Claude and other general-purpose chatbots based on LLMs suggests that generative AI could potentially enhance the development and user experience of parenting chatbots as well. LLMs could make it easier and more cost-effective to adapt evidence-based caregiving advice to different audiences in various contexts, as well as facilitate a more natural, conversational style of texting. The challenge of grounding the large language model in trusted information and reducing hallucinations will become critical in this context.

Content: Facilitating the production of engaging and interactive content

Generative AI can speed up the creation of engaging and individualised image, video and audio content, reducing the associated costs and effort. While the use of generative AI in creative applications has drawn criticism over output quality, copyright and the impact on creative professionals, early applications have already appeared: crafting children's books, creating personalised stories with your child as the main character, and generating voice narrations with ease.

As generative AI simplifies software development and the creation of video games, we also foresee an upsurge in innovative, interactive educational tools. Nesta's research in 2022 showed that there are already at least 900 educational and entertainment apps for young children. With advances in speech recognition for children, such apps will become more interactive and engaging. While this might improve the quality of our toddlers' screen time, the growing volume of content will make it harder to evaluate these technological innovations and ensure they have a positive impact on learning outcomes.

Next steps: Learning by doing and making prototypes

Having mapped the landscape of potential use cases, we are now working on four prototypes using LLMs. These prototypes are motivated by our interviews with early-years practitioners and as such focus on the teaching and family support use cases. They could help educators and caregivers provide more personalised and creative support to young children.

Explain-Like-I’m-3: a web app for early-years educators and caregivers to help come up with age-appropriate explanations of any complex concept to children (eg, “Why is the sky blue?”). This app could be helpful with contingent talk, and is the simplest of our prototypes, as it only requires crafting a large language model prompt and creating a simple user interface.
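To give a flavour of why this prototype is the simplest, a hypothetical version of its prompt template is sketched below: the only moving parts are the child's age and the question, which are slotted into fixed instructions before the prompt is sent to an LLM. The exact wording is an illustrative assumption, not our production prompt.

```python
# Hypothetical prompt template for an Explain-Like-I'm-3 app.
# The template fixes the persona and style constraints; only the
# question and the child's age vary per request.
def eli3_prompt(question: str, age: int = 3) -> str:
    return (
        "You are a friendly early-years educator. Explain the answer to "
        f"the question below to a {age}-year-old child, using short "
        "sentences, familiar objects and no jargon.\n"
        f"Question: {question}"
    )

prompt = eli3_prompt("Why is the sky blue?")
```

The returned string would be passed to a chat model; the remaining work is a simple user interface around this call.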

Personalised activity generator: a web app for early-years educators and caregivers to generate ad-hoc activity plans personalised to the child’s interests and age group, while being grounded in best practice of early-years learning. Here, we are leveraging the official government guidance for England as a trusted knowledge base that the large language model can draw upon.

Activity recommendation engine: a web app for early-years teachers that could ‘read’ an early-years assessment form and generate or suggest age-appropriate activities based on the assessment. The recommendation engine could primarily draw upon an already established collection of early-years activities such as the BBC Tiny Happy People website. In this way, observations made in the assessment could be made actionable more easily. The main challenge for this use case lies in accessing and working with sensitive data about children, which is why our main focus here will be on the technical feasibility of surfacing appropriate recommendations rather than working with actual data about children.
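The core recommendation step might look something like the sketch below: score a small activity collection against an assessment observation and return the best match. The activities are invented examples (not actual BBC Tiny Happy People content), and the bag-of-words cosine similarity is a stand-in for the embedding-based retrieval a real system would use.

```python
# Illustrative recommendation step: rank a hypothetical activity
# collection against an assessment observation using bag-of-words
# cosine similarity. A production version would use LLM embeddings
# over a real activity library.
from collections import Counter
from math import sqrt

ACTIVITIES = {
    "Shape hunt": "find and name shapes around the house",
    "Story sounds": "make animal sounds while reading a picture book",
    "Sock pairs": "match and count pairs of socks together",
}

def cosine(a: str, b: str) -> float:
    # Cosine similarity between two texts' word-count vectors.
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def recommend(observation: str) -> str:
    # Return the activity whose description best matches the observation.
    return max(ACTIVITIES, key=lambda name: cosine(observation, ACTIVITIES[name]))

best = recommend("Enjoys matching and counting pairs of objects")
```

Because the ranking only needs activity descriptions and a free-text observation, this step can be prototyped without touching any real data about children.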

Early-years chatbot: finally, we will consider a more complex and speculative prototype for caregivers that can provide wide-ranging information and advice about education and parenting, personalised to the child’s age, interests and background. The complexity lies in selecting the appropriate knowledge base for grounding the large language model, considering the wide range of topics that are relevant to caregivers, and dealing with the risk of hallucinations and misinformation.
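One way to manage the hallucination risk such a chatbot faces is to answer only when the knowledge base actually covers the question, and defer otherwise. The sketch below illustrates this guardrail under loudly stated assumptions: the knowledge base, the overlap scoring and the threshold are all invented for illustration.

```python
# Hedged sketch of a hallucination guardrail for a parenting chatbot:
# only produce a grounded answer when the knowledge base contains a
# passage sufficiently similar to the question; otherwise defer.
# Knowledge base, scoring and threshold are illustrative assumptions.
KNOWLEDGE_BASE = [
    "Praise specific behaviours, for example thank the child for tidying their toys.",
    "Reading together daily supports early language development.",
]

def overlap_score(question: str, passage: str) -> float:
    # Fraction of question words that appear in the passage.
    q, p = set(question.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q) if q else 0.0

def answer_or_defer(question: str, threshold: float = 0.2) -> str:
    best = max(KNOWLEDGE_BASE, key=lambda p: overlap_score(question, p))
    if overlap_score(question, best) < threshold:
        # Out of scope for the knowledge base: defer rather than guess.
        return "I'm not sure - please check with your health visitor or GP."
    return f"Based on our guidance: {best}"
```

In a full system the grounded branch would feed the retrieved passage into an LLM prompt, but the key design choice is the same: refusing to answer beyond the knowledge base is what keeps a wide-ranging chatbot trustworthy.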

Our work on these prototype ideas is ongoing: we occasionally publish details about the technical aspects of this work, and we’ll summarise our learnings from prototyping later this autumn.

The main goal of this exploration is to better understand the potential of generative AI to support early-years settings. By the end of this project, we will be able to showcase our prototypes and discuss their limitations, with the hope that this can ignite further interest and exploration in this area.

We will also be interested in opportunities to partner with public or private sector organisations to test and scale our prototypes, so please get in touch.

Authors

Karlis Kanders

Senior Data Foresight Lead, Discovery Hub

Karlis is a Senior Data Foresight Lead working in Nesta’s Discovery team.

Louis Stupple-Harris

Foresight Engagement Lead, Discovery Hub

Louis is the Foresight Engagement Lead within Nesta’s Discovery Hub, which aims to create a link between Nesta’s current portfolio and our pipeline of future work.
