Some innovations may require more complex evaluations, either because of the nature of the work, the context of the field they operate in, or because they are scaling to work with many more people.

A programme in this category may want to build on existing evidence to fill any gaps, or may be looking to use more complex methodologies to draw out richer detail or to measure impact objectively. Or, having looked at impact before, it may be seeking to understand in more depth which elements of its process are or aren't working. More complex evaluations tend to take longer and be more expensive. A good evaluation at this stage is therefore one with a well-thought-out design that builds on existing evidence, takes account of previous challenges and has a clear plan for overcoming potential future issues.

Top tips when evaluating larger or more complex programmes

If you’re aiming to commission a more complex evaluation that measures impact by establishing a counterfactual, then you need to ensure that your evaluation team has done similar evaluations in the past. The skills needed to carry out a randomised controlled trial (where participants are randomly assigned to the intervention or a control group) or a quasi-experimental design (where a comparison group is established in other ways and there is no randomisation) are very specific, and these sorts of evaluations should only be undertaken by social scientists with considerable experience who are familiar not only with the necessary techniques but also with the potential pitfalls and how best to avoid them.
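
To make the distinction concrete, the core of a randomised controlled trial is the random allocation step itself. The short Python sketch below is purely illustrative and is not part of Nesta's guidance or any evaluator's actual tooling; the participant identifiers and group labels are hypothetical.

    import random

    def randomise(participants, seed=42):
        """Randomly assign participants to an intervention or control group.

        Illustrative only: identifiers and group labels are hypothetical.
        """
        rng = random.Random(seed)    # a fixed seed makes the allocation reproducible and auditable
        shuffled = list(participants)
        rng.shuffle(shuffled)
        midpoint = len(shuffled) // 2
        return {
            "intervention": shuffled[:midpoint],
            "control": shuffled[midpoint:],
        }

    groups = randomise(["P001", "P002", "P003", "P004", "P005", "P006"])
    print(groups["intervention"], groups["control"])

In a quasi-experimental design there is no equivalent of this step: the comparison group has to be constructed from people who were not randomly allocated, which is exactly why specialist experience is needed to judge whether the two groups are genuinely comparable.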

All evaluations require a degree of project management by the staff team commissioning them, but this need grows with the size and complexity of the evaluation. If you’re planning to commission a large evaluation, it is worth considering formally allocating part of a staff member’s role to managing it. For more complex evaluations it may also be worth convening a small advisory group, ideally including people with research and evaluation skills. This can not only help ensure that you get the most out of the evaluation, but can also play a role in linking the data from your evaluation to the wider evidence base.

The evaluation of ‘Kinship Connected’ for Grandparents Plus illustrates just how much project management and engagement from the grantee can be required. Early in the process, the evaluator led workshop sessions with the project workers who would ultimately be gathering the evaluation data. These workshops focused on the evaluation approach and on which questions would be asked and how. The project team at Grandparents Plus also invested in a new customer relationship management (CRM) tool and employed someone to develop it and monitor the data to ensure that the evaluator had what was needed. They also set up regular meetings with their evaluator and the Nesta programme manager, running throughout the programme, to ensure that the evaluation remained on track.

Thinking about who will use the findings applies to all types of evaluation but is particularly salient when a more complex evaluation has been commissioned. If more complex evaluation methods have been used, their write-up may require more detailed explanation, which can be off-putting to some audiences. It can therefore be helpful to decide early on who the evaluation outputs will be aimed at and to consider the format they should take. In some cases it may make sense to have more than one output aimed at different audiences, and deciding this upfront will enable the evaluation team to focus on how the final data will be presented. For the evaluation of In2ScienceUK, for example, the evaluation team produced a detailed impact report. The In2ScienceUK team then produced their own summary of the report and made both available on their impact page, which includes a short infographic video summarising their impact over the last year.

Challenges and successes

1. More complex evaluations can mean reporting is more technical and harder for some audiences to understand

Challenge: The methodologies used for more complex or detailed evaluations can be more technical and demand more social science language when written up. This is often necessary to ensure that the various elements of the evaluation are explained in a thorough and transparent way; however, it can be very off-putting for readers, even those with a good knowledge of social research.

Success: The evaluation of the Grandmentors programme included some more complex methodologies and resulted in a long and relatively technical report. To address this, the evaluators also produced a clear standalone executive summary covering the main elements of the evaluation and its key findings, making them more easily digestible.

2. Inconsistencies between sites

Challenge: Larger programmes may have scaled to operate in more than one location. While this can present a lot of opportunities for evaluation, it can also be challenging if there are inconsistencies between locations, as these can make direct comparison difficult.

Success: The Empowering Parents, Empowering Communities (EPEC) programme from South London and Maudsley NHS Trust had previously undergone a number of impact evaluations and, since it was rolling out multiple local hubs, it chose to commission a process evaluation to explore how the variations across the teams affected their impact.

3. Creating a useful control group

Challenge: In order to attribute impact to a programme, it is necessary to compare its results to a counterfactual: what would have happened if the innovation had not taken place. This can be done by creating a control group, but doing so can be hugely challenging, especially if evaluators are trying to determine the impact of a programme that has already taken place.

Success: For the evaluation of the In2ScienceUK project, the evaluation team needed to explore the impact of the project on two different cohorts of young people, from 2018 and 2019. For the 2019 group, they were able to collect data from a comparison group composed of project applicants who were interviewed but not selected to participate, due to a lack of sufficient placements or scheduling conflicts. In 2018, however, the project did not collect data from a comparison group at either baseline or follow-up. The staff team therefore created an artificial comparison group for that cohort by surveying those who had applied but did not take part, after the programme had finished. This is a complex process with a number of limitations, but it is a useful way of understanding potential impact.
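
In spirit, the comparison the evaluators were making boils down to contrasting outcomes for participants with outcomes for the comparison group. The sketch below is a deliberately simplified, hypothetical illustration of that idea; it uses made-up follow-up scores and is not the evaluators' actual analysis, which would also need to check baseline comparability and statistical significance.

    from statistics import mean

    # Hypothetical follow-up scores (e.g. self-reported confidence on a 1-10 scale).
    participants = [7.2, 6.8, 8.1, 7.5, 6.9, 7.8]   # took part in the programme
    comparison = [6.1, 6.4, 5.9, 6.7, 6.2, 6.0]     # applied but did not take part

    # A raw difference in means is the simplest possible estimate of impact.
    difference = mean(participants) - mean(comparison)
    print(f"Estimated difference in mean scores: {difference:.2f}")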

“The external evaluation was tremendously important for the charity. It validated our impact measurements, enabled us to compare the impact of our beneficiaries against a control group and spend substantial time really reflecting on impact and how we improve this in our organisation.”

The In2ScienceUK team

Case study findings from larger or more complex evaluations

  • The EPEC programme from South London and Maudsley NHS Trust is a parent-led parenting programme designed to offer parents support to improve a range of outcomes for children and families. The EPEC team undertook both an internal programme review of the national scaling programme and an external process evaluation building on existing impact evaluations. The internal review found that the scaling programme was a robust and successful test of the capability to deliver EPEC at scale, and that Being a Parent courses were consistently highly effective, with clear impact on child, parent and family outcomes. The external process evaluation identified how well the variations across the teams worked and drew out some positive findings in relation to team effectiveness.
  • The In2ScienceUK programme aims to tackle the issue that fewer young people from the lowest income backgrounds progress to university, pursue a STEM career or become economically stable than their more affluent peers. It does this by leveraging the expertise and passion of local science, engineering, technology and maths professionals through work placements and mentoring, workshops and skills days. The evaluation found that participation in the In2ScienceUK programme primarily increased students’ confidence in their abilities, improved their understanding of career routes into STEM and provided them with contacts who could offer advice in the university application process.
  • The Grandmentors programme from Volunteering Matters delivers intergenerational mentoring projects for young people transitioning from care. Its evaluation found that young people who participate in the programme see positive changes in their lives in terms of improved education, employment and training (EET) outcomes.
  • The Grandparents Plus programme, Kinship Connected, provides support to kinship carers. The evaluation found that Kinship Connected had a range of positive impacts: isolation was reduced, and concerns about children eased as many carers gained a better understanding of their children’s behaviour. Through peer-to-peer support, kinship carers felt a sense of connectedness with others, and from this a real resilience to cope and pride in their caring role. The cost-benefit analysis also estimated that for every £1 invested in the programme, £1.20 of benefits was generated, equating to a 20% rate of return (a simple worked version of this calculation follows this list).
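
As a quick, purely arithmetical illustration of the Kinship Connected cost-benefit figure above (the only inputs are the £1 and £1.20 quoted in the evaluation):

    # £1.20 of estimated benefit for every £1 invested is a 20% rate of return.
    investment = 1.00            # £ invested in the programme
    benefit = 1.20               # £ of estimated benefit generated

    rate_of_return = (benefit - investment) / investment
    print(f"Rate of return: {rate_of_return:.0%}")   # prints "Rate of return: 20%"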

Useful resources

  • For more examples of how evaluation can help organisations, take a look at Evidence for Good from the Alliance for Useful Evidence.
  • For guidance on developing a theory of change take a look at New Philanthropy Capital (NPC)’s theory of change in ten steps.
  • For advice on building an evaluation framework and approach, NPC’s Understanding Impact toolkit may be useful.
  • For an overview of how to approach proportionate evaluation, this guide from NPC outlines some of the considerations.
  • For more guidance on evaluation techniques see these helpful guides from Inspiring Impact.
  • For a more detailed overview of evaluation, including experimental and quasi-experimental impact evaluations, the Treasury’s Magenta Book is a useful guide to all aspects of the evaluation process.

For general guidance on using research evidence see this guide from The Alliance for Useful Evidence. The Alliance, a network hosted by Nesta which champions the smarter use of evidence in social policy, also produces a range of reports and resources to support learning and evaluation.

Authors

Sarah Mcloughlin
Senior Programme Manager
Sarah was a Senior Programme Manager.

Carrie Deacon
Director of Government and Community Innovation
Carrie was Director of Government and Community Innovation at Nesta, leading our work on social action and people-powered public services.

Annette Holman
Programme Manager, Government Innovation Team
Annette worked in the Government Innovation Team at Nesta focusing on social action priorities, specifically on the Connected Communities and Second Half Fund.

Naomi Jones

Naomi is an independent social research consultant who works with organisations to support their delivery of research and evaluation to help them to use evidence effectively for change.