The impact of school closures on the educational disadvantage gap is alarming.
Research from organisations including the EEF, NFER and The Sutton Trust points to a widening gap between disadvantaged pupils and other learners. While it is thanks to EdTech that we can even consider continued learning while schools are physically shut, the digital divide has led to markedly different experiences of remote learning depending on a family's circumstances. Now, more than ever, our focus must be on making EdTech work for everyone.
Of course, for any progress to be made with EdTech, as a first priority there must be full accessibility for all students. The Government has rightly committed to this and it should be seen as an essential foundation for the current and future education systems. This blog sets out five ways technology can play a lead role in our efforts to reverse the widening disadvantage gap: experimentation, evidence, data, support for teachers, and assessment.
One of the great advantages of EdTech platforms is the relative ease with which small changes can be deployed and tested to drive continuous improvement in learning. For example, we conducted four rapid experiments with the HegartyMaths platform that showed how small changes can have outsized effects across millions of learners. In one experiment, tweaks to the prompts learners saw after incorrect responses led to increased use of video tutorials; scaled across the whole platform, the best-performing prompt would have led to two million more videos being watched and seven million more correct second attempts. Experimentation also shone a light on what didn’t work: students who could build up ‘streaks’ - designed to reward continued learning - actually spent less time on the site and answered fewer questions correctly.
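To make the mechanics of such a rapid experiment concrete, here is a minimal sketch of how two prompt variants could be compared with a two-proportion z-test. The counts are entirely hypothetical and not taken from the HegartyMaths experiments; the point is only to show how little analysis machinery a platform needs to test a small change.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Compare click-through rates of two prompt variants (A/B test)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: learners who clicked through to a video tutorial
# after seeing each prompt variant.
z, p = two_proportion_ztest(success_a=1200, n_a=10000,   # control prompt
                            success_b=1350, n_b=10000)   # new prompt
print(f"z = {z:.2f}, p = {p:.4f}")
```

With results like these in hand, a platform can roll out the winning prompt to everyone, which is how a small per-learner effect compounds into millions of extra videos watched.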
There is also a huge amount of natural experimentation underway, with teachers trying different combinations of chat, video and EdTech products every day. In one small innovation, many teachers are now routinely recording their lessons. This makes it easier to share lessons, learn from peers, ask for feedback, and incrementally improve. These practices of testing, experimentation and continuous reflection are things we've been trying to build into the system for years, but we lack the means to formalise learning from them or to explore the new opportunities they could unlock.
With greater support from the Department for Education (DfE), these could be translated into pilots which genuinely change how education is delivered. For instance, the new practice of recording lessons could help improve job sharing between teachers. A small pilot to explore how this might work could have huge benefits in addressing staff shortages if scaled up across the system.
The EdTech market is full of promise but lacking in evidence. This matters because many schools remain in the dark about the impact of EdTech and the merits of different products: what works, for whom, and why?
So what would a systematic approach to evidence look like? At its core should be government funding for high-quality, independent evaluation of technology tools, on a scale similar to the EEF's. But how and when technology is used is just as important as which technology is used, so this research effort should also consider variations in implementation, training, professional development, and the school contexts in which technology is used. The end results should be collated in a way that makes the insights easy for school audiences to use (building on the work of organisations like EdTech Impact).
The rise of EdTech means we have more data on learners and learning than ever before and yet, we have no systematic way to use it to inform practice at the classroom level (e.g. how are different pupils engaging with classwork?), the school level (e.g. which behaviour policies are more or less effective?), or the national level (e.g. are particular types of school benefiting more or less from a technology tool?).
First, there is a question of whether individual providers are capturing useful data. For example, privacy concerns and time constraints often mean companies do not collect data on pupil premium or free-school-meals status, indicators that would be helpful for targeting support where it is needed most. The Government should produce clearer standards and guidance to help EdTech companies gather such data safely; guidance on effective and ethical data anonymisation approaches, for example, could go a long way. In addition, specific government funding to overhaul current data architectures could be provided to EdTech organisations that cannot afford to retrofit their platforms.
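As an illustration of what such anonymisation guidance might cover, here is a minimal sketch of pseudonymisation using a keyed hash. The pupil identifier and record fields are invented for the example; the technique (HMAC rather than a plain hash, so that the small space of pupil IDs cannot be brute-forced without the key) is one plausible approach, not a prescribed standard.

```python
import hashlib
import hmac
import secrets

# A per-dataset secret key; in practice this would be stored securely
# and never shared alongside the data.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymise(pupil_id: str) -> str:
    """Replace a pupil identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, a keyed hash cannot be reversed by enumerating
    the (small) space of possible pupil IDs without the key.
    """
    return hmac.new(SECRET_KEY, pupil_id.encode(), hashlib.sha256).hexdigest()

# A usage record can then carry disadvantage indicators without any
# directly identifying field (the pupil ID shown here is hypothetical).
record = {
    "pupil": pseudonymise("UPN-A123456789"),
    "pupil_premium": True,     # eligibility flag, no direct identifier
    "videos_watched": 14,
}
```

Because the same pupil always maps to the same pseudonym within a dataset, engagement can still be tracked over time and broken down by pupil premium status without the provider holding raw identifiers in its analytics.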
Second, with the right data being gathered, there would be significant benefit in a national data-sharing hub for EdTech providers. This would allow system-level improvements to be driven by insights from user-level data gathered across millions of students every day. It could also help shore up the quality of the EdTech market by showing which companies use their data to improve their product. A smaller-scale project we carried out with School Dash and four EdTech platforms gives an idea of what this could look like, matching platform usage data with publicly available data about school characteristics to identify trends. With benchmarking and national/regional data, and links to historical attainment data, insights can be made actionable when considering product improvements.
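The matching step at the heart of that project can be sketched very simply: join platform usage records to published school characteristics on a shared school identifier. All the records below are hypothetical, and the field names (URN, FSM rate, region) stand in for whatever identifiers and characteristics the real datasets use.

```python
# Hypothetical platform usage logs, keyed by each school's URN
# (the unique reference number used in official school datasets).
usage = [
    {"urn": 100001, "weekly_active_pupils": 240},
    {"urn": 100002, "weekly_active_pupils": 95},
]

# Publicly available school characteristics (e.g. from DfE releases).
schools = {
    100001: {"region": "North West", "fsm_rate": 0.31},
    100002: {"region": "London", "fsm_rate": 0.12},
}

# Join usage to characteristics so engagement can be broken down
# by region or level of disadvantage.
matched = [
    {**row, **schools[row["urn"]]}
    for row in usage
    if row["urn"] in schools
]

# Example benchmark: average weekly active pupils in higher-FSM schools.
high_fsm = [m["weekly_active_pupils"] for m in matched if m["fsm_rate"] > 0.2]
avg_high_fsm = sum(high_fsm) / len(high_fsm)
```

A national data-sharing hub would essentially run this kind of join at scale, across many providers, so that trends invisible to any single platform become visible to the system.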
The next round of the National Tutoring Programme (NTP) is also an opportunity to create a better collective data infrastructure for provider organisations to facilitate sharing and analysis. The lessons from daily EdTech data use suggest NTP organisations could see similar benefits. Investment in better data infrastructure would allow tutoring offers to be calibrated and refined against a number of variables, such as where pupils live, which parts of the curriculum they have missed, or which topics they are struggling with.
As schools return to (near) normal in the future we should be smart about which aspects of education-by-EdTech we keep.
The quality of teaching is the most powerful lever within schools for improving academic outcomes, particularly for the most disadvantaged students. Technology approaches that can provide high-quality teaching resources, help teachers explain concepts clearly, automate distracting bureaucratic tasks, and provide low-effort ways to set assessments and provide feedback can improve teaching quality.
The ‘mutant algorithms’ exams fiasco in England last summer has not helped make the case for technology in our assessment system. But that should not distract us from the fact this is a rare opportunity to consider how we improve our deeply flawed assessment system.
Currently, our accountability system in England relies on Ofsted ratings and exam results. This leads to negative and unintended consequences - from teaching to the test and a narrowing curriculum in many schools, to prioritising whole-school results over the needs of individual students and removing or excluding poorly-performing pupils entirely. Imagine instead an accountability system where one-off examinations are complemented by more frequent, lower-stakes assessments. Beyond offsetting some of the existing flaws, phased assessment would give more young people a fighting chance at improving, since results can be interpreted and acted on in real time by teachers themselves.
The pandemic has brought EdTech from the sidelines to the centre of education practice and debate. Looking forward to a post-pandemic world, it seems very unlikely we'll go back to the ‘old’ normal. EdTech is here to stay. The most important lesson we should learn from the last year is that not all children experience EdTech equally. But that does not mean we don’t have grounds for optimism. This blog has tried to show that there are things we can do to combat the digital divide and its risks to more disadvantaged pupils. Some are more radical (e.g. rethinking assessment), others more practical (e.g. collecting better data), but all need to be considered now.
Research estimates that while on average students lost three months of learning between March and September 2020, over half of pupils in the most disadvantaged schools lost four months (compared with 15 per cent of pupils in the most affluent schools). The EEF estimates that school closures will, in all likelihood, have reversed any gains made in closing the disadvantage gap at primary school.