Addressing the 'evaluation deficit' in schools: part one

Helping teachers and schools untangle what works - and what might not - in the messy and complicated school environment is tough.

Identifying reliable conclusions from the heaps of data schools generate is not easy. And when we start to look at how students develop wider skills and competencies (like resilience or social skills, which some of our Future Ready Fund grantees focus on), rather than exam results, it becomes a whole lot more complicated.

We’ve supported ImpactEd, a digital platform that aims to make monitoring and evaluation much easier for schools - and help teachers understand which interventions are making a difference, and which might not be the best use of time and money.

In this two-part blog series, ImpactEd’s Managing Director, Owen Carter, explores the ‘evaluation deficit’ challenge in schools - and what some of the solutions might be.

The 'evaluation deficit'

In the UK education system, we have a problem with action – there’s too much of it, poorly applied.

Under pressure to make a difference and close stubborn attainment gaps, schools and charities do more and more: after-school clubs, one-to-one tutoring, curriculum boosters. The average secondary school teacher reports spending more than five hours a week on pupil counselling, supervision and tuition.

As Becky Allen points out, ‘short-term, interventionist behaviours’ are unlikely to make a difference in the long run. And the evidence increasingly suggests they don’t: ImpactEd’s own small-scale research found that only three per cent of school leaders were confident in their ability to evaluate the impact of the work they were doing.

This evaluation deficit is particularly acute when it comes to non-academic outcomes. The Department for Education’s advice on activities teachers and parents should promote was widely criticised for presenting a simplistic view of things children do already – climb trees, go on walks, watch the sun set. But it does reflect a real issue: according to the Sutton Trust, only one in five pupils say the school curriculum helps them ‘a lot’ with developing life skills such as confidence, motivation and resilience. At the same time, there is significant research suggesting that these ‘non-cognitive skills’ are particularly important in supporting educational achievement for pupils from lower-income backgrounds.

Why measurement matters

At ImpactEd our approach has generally been that in education we’re better off doing less, but spending more time thinking about the difference we are making.

This might seem to run counter to current trends. There’s a general perception in the education system that we already spend too much time on measurement. Schools have results published in league tables, senior leaders spend days poring over Progress 8 and Attainment 8 figures, and Ofsted itself has recognised that there has historically been an excessive preoccupation with data and outcomes.

Much of this criticism is fair. Many of the data practices in schools are neither valid nor reliable, and summative assessment is conducted far too frequently in many settings. We want teachers to spend their time thinking about how best to teach the children in their care, not doing administrative tasks.

On the other hand, I would argue, there are two main blind spots in the use of data in the current system:

  • The use of low-stakes assessment to guide ongoing action. There are huge benefits, for example, to informal quizzing to identify obvious gaps in knowledge and adapt instruction accordingly. And testing itself can have powerful benefits for learning.
  • The analysis of existing data in new ways to better assess what is working. We spend too much time generating data and not enough drawing insight from it. Say a school conducts standardised assessments twice a year. If the results are stored in a way that is consistent and easy to analyse, they can serve multiple purposes: understanding individual-level outcomes and providing support; evaluating specific initiatives or interventions; identifying trends; comparing relative performance; and so on (see the sketch after this list).
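
To make the ‘one dataset, many purposes’ point concrete, here is a minimal sketch in Python. Everything in it – the pupil IDs, scores and intervention flag – is hypothetical; the point is simply that results stored once, in a consistent format, can be re-queried both for individual progress and for evaluating an intervention, without collecting anything new.

```python
import pandas as pd

# Hypothetical tidy table of standardised assessment results:
# one row per pupil per assessment window, consistent column names.
results = pd.DataFrame({
    "pupil_id":     [1, 1, 2, 2, 3, 3],
    "window":       ["autumn", "summer"] * 3,
    "score":        [95, 102, 88, 91, 110, 108],
    "intervention": [False, False, True, True, False, False],
})

# Purpose 1 - individual-level outcomes: each pupil's progress across the year.
progress = (results.pivot(index="pupil_id", columns="window", values="score")
                   .assign(progress=lambda d: d["summer"] - d["autumn"]))

# Purpose 2 - evaluating an initiative: mean progress by intervention group.
by_group = (progress
            .join(results.groupby("pupil_id")["intervention"].first())
            .groupby("intervention")["progress"].mean())

print(progress)
print(by_group)
```

The same table could equally feed trend analyses or cohort comparisons; the library used here is incidental – what matters is the consistent storage.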

To paraphrase Francis Bacon, assessment is a wonderful servant but a terrible master. Done well, good measurement can support meaningful school improvement and ultimately lead to better outcomes for children. Done badly, it can do the opposite.

Why measurement matters for skills

Where this argument gets really complex is in the domain of skills that are typically considered outside the scope of the academic curriculum. Nesta has invested heavily in building the evidence base in this area, for instance by projecting employer demand for interpersonal skills and complex problem solving, or funding projects developing social and emotional skills and resilience.

In school settings, however, measurement and evaluation of these outcomes is typically confined to anecdotal evidence or pupil voice activities. Resources like the EEF’s SPECTRUM database have been useful in drawing attention to the tools that exist in this area, but remain underused compared to the Teaching and Learning Toolkit, for instance.

Good-quality measures do exist. For example, Angela Duckworth and Martin Seligman used multiple measures of self-control with a group of Year 9 students, including pupil, teacher and parent questionnaires. Aggregating these measures proved a significantly better predictor of end-of-year academic outcomes than IQ tests, a standard measure of cognitive ability. One commonly used pupil self-report measure, the Big Five Inventory, has proved a strong predictor of outcomes such as lifetime earnings.
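
As a purely illustrative sketch of what ‘aggregating measures’ can look like (this is not Duckworth and Seligman’s actual procedure, and all the ratings below are made up), a composite is often built by standardising each measure and averaging:

```python
import pandas as pd

# Hypothetical self-control ratings for five pupils from three sources.
# The scales differ, so each measure is z-scored before averaging -
# an illustrative composite, not the original study's method.
measures = pd.DataFrame({
    "pupil":          ["A", "B", "C", "D", "E"],
    "self_report":    [3.2, 4.1, 2.8, 3.9, 3.5],
    "teacher_rating": [2.9, 4.4, 2.5, 4.0, 3.3],
    "parent_rating":  [3.0, 4.2, 2.7, 3.8, 3.6],
}).set_index("pupil")

# Standardise each column, then average across sources per pupil.
zscores = (measures - measures.mean()) / measures.std()
measures["composite"] = zscores.mean(axis=1)

print(measures.sort_values("composite", ascending=False))
```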

But measurement in this area typically comes with a number of limitations. High-quality skills measures are still very much evolving, and it is unlikely they will ever be able to supply the same objectivity as standardised tests of academic achievement. Some psychological domains – e.g. well-being, self-efficacy, intrinsic motivation – are likely to be significantly easier to measure than more abstract ideas like creativity and problem solving, which doesn’t necessarily mean the latter are less important.

In addition, precisely because most skills measures are questionnaire-based, they are easy to manipulate – if you want to get a good score on them, it’s fairly easy to work out what you should say. As such, they generally can’t be used for high-stakes assessment and are unsuitable for assessing teacher performance.

All the better! If we accept that measurement in this area will be primarily formative rather than summative – used to look at progression and impact, but not in a high-stakes, high-accountability climate – that is probably quite helpful.

At the same time, there are a number of things that those of us involved in research and evaluation in this space could be doing more of. For suggestions on how, read on in Part Two.

Author

Owen Carter

Owen is Managing Director of ImpactEd and leads its development. He’s a winner of the Teach First Innovation Award and part of PwC’s Tomorrow’s Business Leaders programme.

Jed Cinnamon

Senior Programme Manager, A Fairer Start

Jed was a senior programme manager in Nesta’s A Fairer Start mission.
