The right kind of AI in education

Artificial intelligence (AI) seems to be rarely out of the media – whether it’s talk about some extraordinary achievement (such as interpreting medical scans more effectively than humans) or the potential for a dystopian future (perhaps beginning with the automatic generation of fake news).

In reality, we are currently experiencing what has been called the “age of AI implementation” (Lee, 2018), with decades-old AI techniques being applied in a growing array of domains – such as finance, health, climate change, and education.

The application of AI in education – my particular interest – is growing rapidly, with many multi-million-dollar start-ups selling their AI ‘solutions’ to schools and governments worldwide (Holmes et al., 2019; see also Nesta’s ‘Educ-AI-tion Rebooted’).

By far the most common applications of AI in education are the student-facing, so-called ‘intelligent tutoring systems’. Working at individual computers, students are presented with some information, a learning activity and possibly a quiz. Their responses determine the next set of information, activity and quiz that the system provides, so that each student follows a pathway personalised to their individual strengths and weaknesses. However, although perhaps ‘efficient’ in a narrow sense, these systems might be criticised for adopting an instructionist, spoon-fed pedagogy that minimises engagement with teachers and peers.
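To make that adaptive loop concrete, here is a minimal sketch in Python of the branching logic such a system might use. Everything in it – the units, the mastery threshold, the simulated quiz – is an illustrative assumption, not a description of any particular product.

```python
import random

# A minimal sketch of the present-test-branch cycle behind a typical
# intelligent tutoring system. The content, threshold and simulated
# quiz are all illustrative assumptions, not any real product's design.

MASTERY_THRESHOLD = 0.6  # hypothetical pass mark for advancing

# Each unit offers the same concept at two levels of support.
UNITS = [
    {"topic": "fractions", "standard": "Intro to fractions",
     "remedial": "Fractions with visual models"},
    {"topic": "decimals", "standard": "Intro to decimals",
     "remedial": "Decimals on a number line"},
]


def run_quiz(topic: str) -> float:
    """Stand-in for a real quiz: returns a score in [0, 1]."""
    return random.random()


def tutor(units):
    """Present each unit, then branch on the quiz score: a student who
    falls below the threshold repeats the topic with remedial material,
    so each student follows an individually adapted pathway."""
    for unit in units:
        material = unit["standard"]
        while True:
            print(f"Presenting: {material}")
            score = run_quiz(unit["topic"])
            print(f"  quiz score: {score:.2f}")
            if score >= MASTERY_THRESHOLD:
                break  # mastery demonstrated; move to the next unit
            material = unit["remedial"]  # adapt: re-teach with more support


random.seed(1)
tutor(UNITS)
```

Real systems replace the simulated quiz with actual assessment and maintain far richer student models, but the underlying present-test-branch cycle is much the same – which is precisely what attracts the ‘spoon-fed pedagogy’ criticism above.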

Other student-facing AI in education tools include dialogue-based tutoring systems, exploratory learning environments, virtual agents, learning network orchestrators, and automated writing evaluation (the last of which is currently being explored by Ofqual). ‘Educ-AI-tion Rebooted’ also identified some ‘system-facing’ AI in education tools (such as automated timetabling systems and AI-powered admission systems), and ‘teacher-facing’ AI, of which currently there are few examples (Holmes et al., 2019).

All of these developments raise two sets of questions. At a basic level, are such AI-driven technologies ‘effective’ (do they do what they say on the tin)? Currently there is little evidence either way – a gap that Nesta and the Department for Education’s (DfE) ‘EdTech Innovation Testbed’ is beginning to address.

However, perhaps more importantly: are the AI technologies being introduced in schools and other educational settings addressing the right educational tasks? Are they enhancing learning as an essentially human and social activity, or aiming to make learning ‘more efficient’? Are they designed to support, or to replace, teachers? Are they personalising learning pathways to prespecified learning content, mainly preparing students for exams, or supporting personalised learning outcomes, enabling students to achieve their individual aims and potential?

There is a danger that the predictions made by certain applications of AIEd could lock them into undesirable future educational, employment or life pathways.

Educ-AI-tion Rebooted

In short, are current applications the “right kind of AI” in education?

The current tendency is to develop AI in the direction of further automation, but this might mean missing out on the promise of the “right” kind of AI with better social outcomes

Acemoglu and Restrepo, 2019

Social and emotional learning

It is these types of questions that several institutions worldwide are beginning to contemplate. For example, earlier this month, the ‘Mahatma Gandhi Institute for Education, Peace and Sustainable Development’ (MGIEP), UNESCO’s Asia Pacific Category 1 Research Institute, held a short conference on the use of AI to support social and emotional learning (SEL):

Research suggests that SEL not only promotes prosocial behaviour in learners, but it also positively impacts both academic performance and behavioral outcomes. The primary challenge is to rethink education systems to go beyond merely applying AI to reinforce existing pedagogical practices of the transmission-model of education. Instead we must explore transformative pedagogical approaches that apply AI to augment Digital Pedagogies to build both cognitive and emotional intelligence of learners

Future of Education, AI for Social and Emotional Learning, MGIEP, 2020

Social and emotional learning is widely considered to be essential for child development, and for the world’s sustainable development. That’s why Nesta’s ‘Future Ready Fund’ (which is currently supporting ten UK projects to develop social and emotional skills in 11 to 18-year-olds) is focusing on the non-cognitive skills specified in the Education Endowment Foundation’s SPECTRUM framework:

  • Emotional intelligence (how to build good relationships and collaborate effectively);
  • Social and emotional competences (particularly in relation to collaboration);
  • Resilience and coping;
  • Perceptions of self (self-confidence and self-efficacy); and
  • Motivation, goal orientation and perseverance.

The MGIEP conference began with a live demonstration of the Institute’s flagship AI for SEL application, known as FramerSpace. This novel learning platform (in essence, a ‘learning network orchestrator’) has been designed to help educators encourage dialogue between students around the world on critical global issues, such as the lives of refugees, climate change, citizenship, identity, and sustainability. For each of these topics and more, MGIEP has collated a comprehensive set of resources, including videos, images, games and texts. The role of the AI is to translate and monitor the platform’s large numbers of student forum posts, to identify common themes and emotions, and to help the participants connect ideas and develop their SEL skills.
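MGIEP has not published FramerSpace’s internals, but the theme-spotting step described above can be illustrated with standard natural language processing techniques. The sketch below (which assumes scikit-learn is installed) clusters a few invented forum posts into themes using TF-IDF vectors and k-means; the posts, the choice of two clusters and the keyword summaries are all hypothetical.

```python
# An illustrative sketch, not FramerSpace's actual implementation:
# grouping forum posts into common themes with TF-IDF and k-means.
# The posts and the choice of two clusters are invented for demonstration.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "Refugee families in our town need better access to schools.",
    "Rising sea levels are already displacing coastal communities.",
    "School places for refugee children should be a priority.",
    "Cutting emissions now would limit future climate displacement.",
]

# Represent each post as a TF-IDF vector over the shared vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform(posts)

# Group the posts into two candidate themes.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Summarise each theme by the terms closest to its cluster centre,
# then list the posts assigned to it.
terms = vectorizer.get_feature_names_out()
for cluster in range(2):
    top_terms = kmeans.cluster_centers_[cluster].argsort()[::-1][:3]
    print(f"Theme {cluster}: " + ", ".join(terms[i] for i in top_terms))
    for post, label in zip(posts, kmeans.labels_):
        if label == cluster:
            print(f"  - {post}")
```

A production system would add emotion classification, translation and multilingual support, but the clustering step above captures the core idea of surfacing shared themes from large numbers of contributions.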

The remainder of MGIEP’s conference comprised three panel discussions, with contributors from the US, France, India and the UK. The panels considered the impact of AI on human cognition (concluding that, at a literal level, AI’s impact on cognition is currently limited!), how AI might augment SEL, and the ethical and practical concerns around implementing AI for education.

In each of the discussions, there was broad agreement that we do need to work together to avoid AI technologies that are “fundamentally dubious” (Narayanan, 2019), specifically those that aim to predict social outcomes (such as criminal recidivism, job performance and learning). There was also considerable scepticism from the panellists that AI could contribute effectively to SEL – maybe this is one area in which AI should not involve itself at all? For example, it is not obvious how an AI-powered machine might enhance human relationships. However, using AI to support collaborative learning – helping to identify and make connections between thousands of student forum contributions, as exemplified by MGIEP’s FramerSpace – is perhaps one positive application of AI in SEL that might usefully be explored further.

In any case, there were calls from many of the panellists to challenge the hype surrounding AI’s capabilities (to identify what it genuinely can do, and what it probably cannot, even in the foreseeable future), and arguments for a robust focus on the ethics of AI in education (which goes beyond the necessary but insufficient focus on the ethics of data and algorithms to include the ethics of pedagogical practices). Finally, there was the suggestion that a global regulatory approach should be adopted, along the lines of the not-for-profit ICANN model of Internet regulation, which is managed by a multi-stakeholder international community of volunteers (an alternative approach, which has been explored by Nesta, might involve ‘anticipatory regulation’).

Whether educators and students welcome it or not, artificial intelligence is increasingly being applied in educational contexts, including for SEL. If we are to ensure that this wide and potentially intrusive use of AI genuinely leads to better social outcomes, it is critical that all strata of society (especially educators and students) engage with AI scientists and engineers to ensure that AI focuses on what it does best (such as identifying patterns in large amounts of data) while supporting humans to do what we do best (centred on, but not limited to, human-to-human engagement). Having said that, this is probably just one of a range of complex issues that we ought to consider before using AI in education.

In other words, if we want to help students of all ages develop robust cognitive, social and emotional skills, so that they can make effective and productive contributions to society and sustainable development, perhaps we now need to identify and ensure the “right kind of AI” in education.

References

Acemoglu, D. and Restrepo, P. (2019) The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand, Cambridge, MA, National Bureau of Economic Research [Online]. DOI: 10.3386/w25682 (Accessed 19 February 2020).

Holmes, W., Bialik, M. and Fadel, C. (2019) Artificial Intelligence in Education: Promises and Implications for Teaching and Learning, Boston, MA, Center for Curriculum Redesign.

Lee, K.-F. (2018) AI Superpowers: China, Silicon Valley and the New World Order, Houghton Mifflin Harcourt Publishing Company.

Narayanan, A. (2019) ‘How to recognize AI snake oil’, talk given at MIT [Online]. Available at https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf.

Author

Wayne Holmes

Principal Researcher, Education

Wayne led on developing Nesta’s education research agenda and producing research on how technology can be used to reimagine teaching and learning.
