Innovations can be considered to be scaling when they are working to reach more people and achieve greater impact.

As an innovation grows and works with more people, there is a greater imperative for its leaders to know that it is working and making the biggest difference possible to the aims of the organisation. It is essential to take time to check that the work is not accidentally doing harm, in spite of our best intentions.

A good evaluation for a larger or scaling programme is one that builds squarely on previous evidence and research, has clear objectives and research questions and is transparent about what is and isn’t possible to achieve.

Top tips from our funds when evaluating scaling programmes

It’s worth taking some time to look at what evidence already exists within your sector or in an adjacent field. Seek out the lessons that any similar scaling projects have learned and look closely at any data you have already collected. Consider what it tells you, where the gaps are, and how you might collect evidence (including monitoring data) better. The evaluation of Christians Against Poverty’s Life Skills programme explicitly included a review of previous research on the monetary and social impact of Christians Against Poverty’s wider services. This enabled the evaluators both to better understand the programme and to build on the existing research to ensure that the evaluation addressed remaining gaps.

If you’re scaling your programme, then you may well want to explore how your programme is operating, how well beneficiaries are being reached and the extent to which actual delivery is aligning with what was planned. This would suggest that a process evaluation would be most useful. However, you may also want to know about the impact of your programme and the extent to which it has achieved its desired outcomes. While it is possible to include elements of both process and impact in an evaluation, this will usually involve compromises to the depth or quality of the evidence that can be collected, so it’s worth taking time to be really clear about your key priorities and keep them focused.

Transforming Lives for Good’s (TLG) Early Intervention programme is designed to improve the behaviour of children who are struggling at school in order to raise attainment and reduce the risk of truancy and exclusion. The staff and evaluation teams decided to focus the evaluation on impact. Because the research relied heavily on responses from parents and children, there were challenges with questionnaires not being completed and missing data. If the evaluation team had focused on process too, this may well have exacerbated the challenges they faced in collecting data from parents and potentially compromised the quality of the impact data.

The gold standard for impact evaluation usually aims to attribute change to an intervention or programme by establishing some kind of control or counterfactual, enabling the evaluators to understand what would have happened if the programme had not existed. However, evaluations that include a control tend to be very complex and much more expensive, so it’s important to be realistic about what is possible. It is still possible to understand impact without a control by taking a more realist approach: exploring the extent to which people involved with a programme perceive and experience there to be impacts, rather than aiming to measure the impacts more objectively.

The evaluation of St Joseph Hospice’s ‘Compassionate Neighbours’, where local people are trained and supported in their efforts to connect with people in their neighbourhood who are at risk of loneliness and isolation, particularly those with a life-limiting illness, focused on exploring the experiences of both the Compassionate Neighbours and the community involved in the programme. The team commissioned this evaluation to build on previous research, such as a PhD study and a national evaluation of similar programmes, and to enhance their insights into the programme. While taking this approach meant that they could not objectively attribute impact to the programme, the team was able to explore the different types of value that the programme created and generate useful evidence about the perceived impact.

To develop and implement a good evaluation, think about the time and resources it will require. Even the best evaluators will need someone from your programme to manage them and make sure that the evaluation is going the way you planned. And many learning and evaluation activities will require staff teams to get involved - to track progress, to hand over existing data, to review research materials and other outputs, or to contact beneficiaries or stakeholders to ask them to take part in the research. Since the internal resources required can be considerable, Nesta sometimes provides funding for grantees to have internal roles to manage learning and evaluation activities, so it’s always worth having that conversation if you’re concerned about the internal resources that an evaluation will require.

Challenges and successes

Some of the challenges that our scaling innovations faced are outlined below, along with examples of the approaches they took to mitigate them:

1. Accessing participants

Challenge Programmes of all types can struggle to access the people involved with them in order to research their views. However, this challenge is magnified when a programme is working with a wide range of people. There are many ethical considerations to work through, and researchers are often reliant on ‘gatekeepers’ to put them in touch with potential participants.

Success The evaluation team for Compassionate Neighbours sought the views of local community members receiving hospice care who had been matched with a Compassionate Neighbour. They found that gaining access to and collecting information from community members was difficult as some hospices were reluctant to refer community members or their carers to be interviewed. The evaluation team got around this to some degree by using peer evaluators (volunteers, many of whom were Compassionate Neighbours) who helped to access and collect data from community members and support data entry across the hospices.

2. Engaging staff and volunteers with the evaluation

Challenge It can be hard to convince staff and volunteers of the benefits of evaluation, particularly as some are not evident until later in the process when data has been collected. Staff and volunteers often have a fundamental role in the data collection process and, since this sits on top of their other duties delivering the programme, it can be hard for evaluation activities to be given priority.

Success Aesop tackled this challenge head on during the delivery of Dance to Health by ensuring that monitoring was built into the programme from the beginning. In its recruitment of dance artists to lead the groups, Aesop ensured that responsibility for data collection was included in their job descriptions. The evaluators had set out the key questions they needed participants to answer at the start of the programme, so these were asked at the beginning of each group session before the dancing began, and it became the norm for everyone involved.

3. Low response rates among participants

Challenge In order to have robust and reliable evaluation data it is important to try to get a good response to surveys and other requests for research participation. However, this can be very difficult to achieve, especially when people are facing challenges of their own or feeling over-researched.

Success The team evaluating TLG's Early Intervention programme, which was designed to improve the behaviour of children who are struggling at school, had difficulty getting survey responses from parents, which is a common problem. While they were not able to solve this challenge directly, they ensured that the report was very clear about the impact that the missing data may have had on the evaluation results and clearly identified it as a research gap that should be a key focus of any future research on the programme.

Scaling innovations case study findings

  • The evaluation of TLG’s Early Intervention programme, which was designed to improve the behaviour and therefore attainment of children struggling at school, found that the large majority of both parents (79%) and teachers (80%) reported that the children’s difficulties were better at the end of the intervention. It also found that the consistency between teacher and parent reports suggested that behaviour was improving across multiple contexts – both in school and at home. The findings in relation to children’s subjective wellbeing and happiness suggested a small but significant trend towards increased wellbeing at the end of the programme.
  • St Joseph Hospice’s Compassionate Neighbours programme involves local people being trained and supported in their efforts to connect with people in their neighbourhood who are at risk of loneliness and isolation, and then being matched with a community member who is receiving hospice care. The evaluation identified positive outcomes from the programme for community members, their carers and the Compassionate Neighbours themselves. Benefits for the hospices were also identified.
  • Aesop’s Dance to Health was a nationwide falls-prevention dance programme for older people. The programme was designed with the intention of reducing older people’s falls and aimed to deliver health, artistic and social benefits plus savings to the health system. The evaluation found that Dance to Health was helping older people in danger of falling to experience improved confidence and independence and decreased isolation. It also found a notable reduction in the number of falls among older people involved in the programme, positive improvements in participants' physical and mental wellbeing and a reduced fear of falling among participants.
  • Citizens UK's Parents and Communities Together programme ran two projects that were evaluated: ‘MumSpace’ and ‘Book Sharing’. The book-sharing course aims to equip the parent or carer with the knowledge of how to engage the child in interactive book-based play, while the weekly MumSpace groups for parents with babies and toddlers are parent-led peer support groups. The evaluation of MumSpace found that the greatest benefit was amongst mums on the programme with the most severe anxiety and depressive symptoms. Increased parenting confidence was also found to be a positive impact of the programme. The evaluation of Book Sharing found that children across the book-sharing course showed significant improvement in language acquisition and understanding of vocabulary. Mums also self-reported an improvement in their parent-child relationship as a result of the book-sharing course.
  • Christians Against Poverty’s Life Skills programme was designed to help people live well on a low income. The evaluation focused on measuring the impact of the programme and on developing improved measurement tools to build Christians Against Poverty’s evaluation capacity. Findings included that those running the centres generally felt well prepared and were able to implement the programme as intended. The programme also appeared to be reaching its intended audience of vulnerable people living on low incomes in deprived areas. The evaluation also suggested that members have a positive experience on the programme.

Useful resources

  • For more examples of how evaluation can help organisations, take a look at Evidence for Good from the Alliance for Useful Evidence.
  • For guidance on developing a theory of change take a look at New Philanthropy Capital's theory of change in ten steps.
  • For advice on building an evaluation framework and approach, NPC’s Understanding Impact toolkit may be useful.
  • For an overview of how to approach proportionate evaluation, this guide from NPC outlines some of the considerations.
  • For more guidance on evaluation techniques see these helpful guides from Inspiring Impact.
  • For a more detailed overview of evaluation, including experimental and quasi-experimental impact evaluations, the Treasury’s Magenta Book is a useful guide to all aspects of the evaluation process.

For general guidance on using research evidence see this guide from The Alliance for Useful Evidence. The Alliance, a network hosted by Nesta which champions the smarter use of evidence in social policy, also produces a range of reports and resources to support learning and evaluation.

Authors

Sarah Mcloughlin

Senior Programme Manager

Sarah was a Senior Programme Manager.

Carrie Deacon

Director of Government and Community Innovation

Carrie was Director of Government and Community Innovation at Nesta, leading our work on social action and people-powered public services.

Annette Holman

Programme Manager, Government Innovation Team

Annette worked in the Government Innovation Team at Nesta focusing on social action priorities, specifically on the Connected Communities and Second Half Fund.

Naomi Jones

Naomi is an independent social research consultant who works with organisations to support their delivery of research and evaluation to help them to use evidence effectively for change.