We spent last week at the biggest education trade show in the world, BETT. In this final blog in our BETT series, we ask what evidence we should expect of edtech companies.
Every year at BETT I play a little game: I go up to stalls at random and ask how they know whether their product actually makes a difference to what or how well children learn. In the past, I have been met with blank stares or told that ‘children just love our product’. This year was different. I was impressed by how many companies could talk about how existing research had informed the design of their product and how they had been taking baseline measures and looking for changes after children had used their product. Some even talked about plans for more rigorous research.
Helped by initiatives like EDUCATE and by savvier, more cash-strapped customers, evidence in edtech is gaining traction. It may not be quite mainstream yet (other visitors to BETT were not impressed by the evidence on offer), but from what I saw, things are definitely changing (or maybe I have just learned to avoid the robots).
For an evidence geek like me, this is great news. Simply recognising that evidence should be part of the design and decision-making process is a great step forward.
But behind the scenes, debate rages on the quality of the evidence being produced and used by edtech companies. The Education Endowment Foundation has set a high standard for rigorous, independent research into ‘what works’ in education. Next to this benchmark, much of the effectiveness research used by edtech companies is methodologically flawed and biased.
The debate around evidence in education technology ping-pongs between two uncomfortable truths:
Improving learning is hard and it is difficult to be sure that an approach ‘works’ without very rigorous research.
It is practically impossible to expect the thousands of edtech products out there to conduct rigorous impact evaluations of academic quality.
So if we want to make good, evidence-based decisions about including technology in teaching and learning, how do we move forward? What can reasonably be expected of an edtech business that takes evidence seriously? How can we support this growing enthusiasm for evidence and raise expectations without asking businesses to do the impossible?
The first step is being clear on what different types of evidence tell us and what they don’t.
Rigorous experimental quantitative research, such as a randomised controlled trial, is a powerful tool that can indicate whether a particular approach has ‘worked’.
If well designed and accompanied by high-quality qualitative research, trials like this can also reveal why an approach has worked or not, who it works for and how it works. For example, a trial of the Parent Engagement Project, which uses text messages to get parents more involved in their children’s schooling, not only demonstrated positive results but also helped us understand where, why and how this sort of intervention is most useful.
As experiments accumulate over time, an evidence base builds up that, at its best, can reveal ‘design principles’ (or core components). For example, analysis of the literature on computer-assisted learning shows that it is most effective when used as an in-class tool or as mandatory homework support. These design principles can help both developers and purchasers of edtech pursue strategies that are more likely to have an impact.
This approach contrasts with a more common model whereby a specific intervention from a specific organisation is tested and, if a positive effect is found, it is kite-marked as ‘effective’ and the organisation is encouraged to scale up. There are a few reasons why, for edtech in particular, this approach alone will not solve our evidence problem.
Design principles can come not just from experimental research but also from the vast pedagogical literature that tells us how children learn. The EDUCATE programme, led by UCL’s Institute of Education in partnership with Nesta, BESA and F6S, is helping cohorts of edtech startups to use the existing evidence as they test and adapt their products, as well as helping them to collect new evidence.
Rapid cycle testing and other lighter touch evaluation approaches are probably the most common and immediately practical form of evidence used during product development by edtech companies. New ideas are tested on small samples in short time frames to inform the next stage of design. The results from these tests do not tell us that impact has occurred, but that is not their purpose. Their purpose is to improve each small step of a design process. If these tests are done thoughtfully, involving educators, then they increase the chance that the final design will indeed have impact.
I come last to probably the most common form of evidence in edtech (and arguably the least rigorous): teacher feedback.
As with rapid cycle testing, teacher feedback cannot be considered evidence of impact and should not be used as if it were, but it does have two very important uses.
The first is to help companies improve their products; the second is to help teachers navigate the incredibly confusing array of products on offer.
Edtech, more than most education projects, risks not working simply because it isn’t used, or isn’t used well. Kit stays wrapped in its original boxes in cupboards or is used only once; teachers don’t receive adequate training; no one knows what to do when the kit goes wrong. Teacher feedback allows us to determine whether that first step towards impact can be taken. Have others in your situation found a way to integrate a particular technology into their teaching in a way that works for them? A ‘yes’ does not guarantee impact, but a ‘no’ pretty much rules it out.
So, to return to the question I asked at the beginning of this blog: what should we expect from edtech companies that claim to be serious about evidence? We propose five expectations:
In our experience, there are many edtech companies that aspire to or meet these expectations. They engage enthusiastically with programmes like EDUCATE and submit applications to the Education Endowment Foundation. But more support is needed. Research needs to be made digestible and accessible to companies, just as the Education Endowment Foundation and EDUCATE make research accessible to teachers.
A clear research agenda is needed, one that shows where the consensus lies and where the biggest gaps in the evidence are. The Education Endowment Foundation plans to update its review of the edtech literature this year, which is a promising starting point.
The evidence agenda has finally reached edtech. This is a critical time when attitudes and expectations can be shaped. Please get in touch with your ideas and suggestions for supporting an edtech sector that delivers real impact for students based on evidence.