Highlighting the experiences and approaches to evaluation and learning across the past three years of the Centre for Social Action Innovation Fund in partnership with the Department for Digital, Culture, Media & Sport (DCMS).

When working to create lasting social change, capturing learning and understanding impact are critical: they enable us to prioritise what we do and to account for the progress we make towards our goals. At Nesta, we place a great deal of importance on ensuring that we, and the people we support, base our work on evidence and take the time to evaluate its impact, alongside learning about what did and didn’t work.

In this publication, we share findings and reflections across three categories of innovation programme evaluation:

Early stage

Scaling stage

Larger or more complex evaluations

What is the Centre for Social Action Innovation Fund?

Over the last seven years, Nesta has led work to test and scale initiatives that draw from people powered solutions to help address some of the biggest challenges of our time. In partnership with the Department for Digital, Culture, Media and Sport (DCMS) we have delivered a series of funds, under the banner of the Centre for Social Action Innovation Fund (CSAIF), that harness and embed the power of people helping people alongside public services.

These funds have included:

  • Second Half Fund, supporting the growth of innovations that mobilise the time and talents of people in the second half of their lives to help others.
  • Early Years Social Action Fund, scaling innovations that help children to achieve developmental milestones by directly supporting parents.
  • Savers Support Fund, scaling innovations to improve money management skills and reduce debt for individuals and families.
  • Click Connect Learn, supporting innovations that use digital technology to enable volunteers to tutor pupils from disadvantaged backgrounds to improve their grades at school.
  • Connected Communities Innovation Fund, supporting innovations that mobilise many more people throughout the lifecourse to support people and places to thrive.

We believe that in the future, the best public services will be people powered – designed to be more open, creating connections through each interaction, deliberately enabling creative and active citizenship, and bringing together professionals and the time and talents of local people to change communities and lives. And importantly, this will not be for 10 people, or 100 with specific needs, but embedded across the system as simply the way in which we operate.


The value and benefits of learning and evaluation

In 2016 we shared the impact of the innovations supported in the first three years of our partnership with DCMS. Now, after a further three years of work supporting an additional 64 innovative organisations with over £10 million of funding and support, we have brought together further insights and learning from programmes that cut across policy areas, from supporting families in the early years of a child's life to educational attainment and better health outcomes.

In sharing their impact and evaluative journey, we hope to not only provide a valuable repository of evidence about people powered programmes, but also give an insight into the value and benefits of learning and evaluation for projects as they test and scale new ideas and approaches.

Evaluation is, in essence, a planned, systematic approach to learning about what does or doesn’t work; how change happens, why, for whom and in what way; and the impact that a project or innovation is having. All grantees that we have supported through the first and second stages of the Centre for Social Action Innovation Fund in partnership with DCMS were allocated funds to carry out some element of learning and evaluation activity. While the size, scale and purpose of this activity varied between programmes, the importance that Nesta placed on evaluation activity was consistent. In general, Nesta sees the benefits of learning and evaluation as enabling grantees to make the most of their programmes and to generate learning that will help them shape and refine what they do. However, the specific benefits of evaluation activity vary depending on the focus, stage and size of a programme.

Working with our grantees to develop an evidence base has been a five-stage process:

  1. Working with each grantee to develop a theory of change. A theory of change is a simple roadmap that identifies and links the needs, activities and desired outcomes and impact of a programme (a simple illustrative sketch of how these elements link together follows this list). This is an important process to work through because it helps us to understand our grantees’ aims and processes in more depth. It also provides grantees with a solid starting point to guide an effective evaluation.
  2. Assessing existing evidence. We work with our grantees to assess the evidence that they already have or that exists elsewhere. This helps us to understand our grantees’ evidence journey and to identify evidence gaps. It also helps us to understand the organisation's confidence in its programmes and where the work sits on our Standards of Evidence. The Standards of Evidence are designed to assess the strength of the available evidence in relation to the impact that a project, programme or intervention is having.
  3. Developing an evaluation plan. We support our grantees to develop an evaluation plan that will help them generate a useful evidence base. This is done by building on the theory of change, taking the grantees’ practical constraints into account and exploring what is most useful for them at their current stage of development.
  4. Selecting an evaluator. We work with our grantees to identify and commission a suitable evaluator. The type of evaluator that each grantee needs is different, depending on the scope and scale of their evaluation. Some require a fully independent evaluator to carry out the majority of the evaluation activity. Others look for an evaluation partner who can support their own evaluation activity and help them to build monitoring and evaluation capacity and skills or to utilise the skills and expertise already available in house. Each programme’s grant includes a specified amount for evaluation.
  5. Monitoring the evaluation process. At this stage we hand over the delivery of the evaluation to the appointed evaluators but continue to play a role as a critical friend.
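
To make the theory of change step more concrete, the sketch below (written in Python and purely illustrative) shows one way the elements could be written down as a linked structure running from needs through activities and outcomes to impact. The programme and every entry in it are hypothetical; the point is only to show how the pieces connect, not to prescribe a format.

    # A minimal, hypothetical theory of change expressed as a linked structure:
    # need -> activities -> outcomes -> impact.
    theory_of_change = {
        "need": "Older people in the area report feeling isolated",
        "activities": [
            "Recruit and train volunteer befrienders",
            "Match volunteers with older people for weekly visits",
        ],
        "outcomes": [
            "Older people report more social contact",
            "Volunteers report increased confidence and skills",
        ],
        "impact": "Reduced loneliness and improved wellbeing in the community",
    }

    # Reading the roadmap from need to impact makes the assumed chain of change
    # explicit, which is the starting point for deciding what to evaluate.
    for step in ("need", "activities", "outcomes", "impact"):
        value = theory_of_change[step]
        print(f"{step.title()}: {value if isinstance(value, str) else '; '.join(value)}")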

We have published a selection of the finished evaluation reports, chosen to demonstrate a range of approaches from across the funds.

There is no set approach to learning and evaluation and we supported our grantees to commission an evaluation approach that best suited their needs, capacity and budget.

  • For smaller or early stage innovations, it wasn’t necessarily appropriate to undertake a more complex evaluation, so we supported them to find an evaluator who would work with them to focus on specific outcomes or areas of their project and to build their internal evaluation capacity and understanding. These evaluations often sought to explore both process (how) and impact (so what), but drew on approaches that focused on how the people engaging with the project perceived and experienced the benefits, rather than seeking more objective measures. This included using surveys and qualitative research to explore the opinions of project beneficiaries as well as the views of wider stakeholders connected to the project.
  • For more established programmes that were scaling, there was often a higher evaluation budget and a greater need to demonstrate impact. Our grantees’ evaluators employed a range of methodologies to measure this, including pre- and post-programme surveys with key beneficiaries or stakeholders. In some cases the evaluators used validated measures (questions that have been tested to ensure more reliable and accurate results); standalone surveys were also used. Qualitative approaches such as depth interviews, focus groups and observation were commonly used to help understand the views underpinning specific outcomes and to draw out more detail around process and impact. Although these evaluations included an element of impact measurement, they did not typically try to ascertain whether this impact could be objectively attributed to the programme in question.
  • Some of the grantees were further along on their evaluation journey. In some cases this meant that they had carried out evaluations before and so had a larger evidence base to build on. In others it meant that the evaluations they commissioned were larger or more complex and drew on a wider range of methodologies to measure impact. In two examples, the evaluators used a quasi-experimental design, in which a comparison group is created to help understand whether any impacts can be attributed to the programme (a simple illustrative sketch of this logic follows this list). Another, more complex, methodology was Qualitative Comparative Analysis (QCA), used to measure impact and to identify which combinations of conditions were most likely to generate positive outcomes.
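
To illustrate the comparison-group logic mentioned above, the sketch below shows, in Python, how a simple quasi-experimental estimate can be summarised: participants and a comparison group are both measured before and after a programme, and the change seen in the comparison group is subtracted from the change seen among participants. The scores, groups and wellbeing measure are all hypothetical; a real evaluation would involve careful matching, validated measures and statistical testing, so this is only a sketch of the underlying reasoning, not a description of any grantee’s evaluation.

    from statistics import mean

    # Hypothetical wellbeing scores (0-10) before and after a programme.
    participants = {"pre": [4, 5, 3, 6, 4], "post": [7, 7, 6, 8, 6]}
    comparison = {"pre": [4, 5, 4, 5, 3], "post": [5, 5, 4, 6, 4]}

    def average_change(group):
        """Mean post-programme score minus mean pre-programme score."""
        return mean(group["post"]) - mean(group["pre"])

    participant_change = average_change(participants)
    comparison_change = average_change(comparison)

    # The comparison group suggests what change might have happened anyway;
    # subtracting it gives a rough estimate of the programme's contribution.
    estimated_impact = participant_change - comparison_change

    print(f"Change among participants:      {participant_change:+.2f}")
    print(f"Change in the comparison group: {comparison_change:+.2f}")
    print(f"Estimated programme impact:     {estimated_impact:+.2f}")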

Nesta actively encourages its grantees to include a focus on learning and evaluation because it has multiple benefits. These benefits fall into three categories.

1. Evidence helps social innovators to refine and develop their approach by:

  • Helping them to be clearer about what their project or programme’s focus and objectives should be.
  • Helping them to understand more about which elements of a project or programme are working and why.
  • Unpicking the impact that a project or programme is having on its intended beneficiaries and why.
  • Identifying which specific elements of a project or programme are facilitating change.
  • Helping them to understand what isn’t working so well and why.
  • Understanding whether a project or programme has any unintended consequences (positive or negative).
  • Helping to understand where to direct limited resources in the future so that the social innovator can make the most efficient and impactful choices.
  • Helping to build the skills and knowledge of staff or volunteers around evaluation.

2. Evidence helps social innovators to gain vital funding and support by:

  • Providing a compelling narrative for potential funders about why a project or programme is important, what about it works best and how it makes an impact.
  • Demonstrating to funders that they care about and understand evidence.
  • Allowing funders to compare projects and programmes to each other more effectively to make informed decisions.
  • Allowing existing funders to see the impact that their funding is making.

3. Evaluation helps social innovators to promote and disseminate their work by:

  • Generating evidence that helps people to understand the reach and impact of a project or programme.
  • Providing evidence and findings that can be tailored for different audiences.
  • Helping to tell their story more effectively.

If you’re considering undertaking an evaluation, then to make the most of it, it’s important to be clear about what you want to get from it. These questions may be helpful to consider:

  • Why are you doing an evaluation? What are the key drivers for your evaluation activity? How is it going to help you? How will you ultimately use the data that are collected? Knowing this will help refine your approach; if your primary driver is to demonstrate impact in order to seek increased funding, then trying to focus on process too may be unrealistic. If you’re carrying out evaluation because you want to refine your project or programme, then taking a more formative approach may be beneficial. In some cases social innovators commission an evaluation because their funder has requested it. However, while this might be the primary driver for an evaluation, it is helpful to consider what your own aspirations are for it and how you think it could benefit your project or programme.
  • What do you want to know? You may have a long list of things you want to find out, but it’s worth being realistic about what you can achieve with the resources available and to try and focus on a few key objectives so that you can get more meaningful information. Getting your objectives and research questions right takes time but is fundamental to designing the right approach to evaluation.
  • How much time and resource do you have available to give to learning and evaluation? Even if evaluation activity is carried out by independent evaluators it will be time consuming for you and your team. Keeping the design simple and focused will often reduce the amount of time you need to give to it. However, it’s important to think about who will manage the evaluation internally and to consider how this responsibility will sit with their existing role.
  • Who are the audiences for the evaluation outputs? An evaluation will help you to refine your project or programme and understand your impact. But you may also want to share your evaluation findings with others. If possible, it’s worth thinking upfront about who else is likely to use the evaluation data and what they’re likely to want or need to know. What sorts of outputs might you need from your evaluation to communicate it to these audiences effectively and is there the potential for findings to be adapted into different outputs for different audiences?
  • What are the chances that you’ll be able to do more evaluation in the future? If there is the potential for you to do more evaluation later on that might help to shape what you do now. Does it make sense for example to explore how well processes are working at this point and look at impact later? Or could you design this evaluation to create a baseline for future activity and work with an evaluator to create measures that could be returned to at a future date to understand progress?

In the following chapters, we provide guidance on how to approach evaluation no matter what stage you’re at on your evidence journey. The first page looks at evaluation for early stage innovations (those that have a strong idea and the culture and ambition to achieve impact), the second explores evaluation for scaling programmes (those that are being supported to reach more people and fulfil their potential and so achieve greater impact) and the third looks at evaluations for programmes that are further along on their evidence journey and have more evidence to build on or plan to attempt more complex evaluation. We have included examples and tips from across the Centre for Social Action Innovation Fund phase 2 and links to published reports from our grantees.

Authors

Sarah Mcloughlin

Senior Programme Manager

Sarah was a Senior Programme Manager.

Carrie Deacon

Director of Government and Community Innovation

Carrie was Director of Government and Community Innovation at Nesta, leading our work on social action and people-powered public services.

Annette Holman

Programme Manager, Government Innovation Team

Annette worked in the Government Innovation Team at Nesta focusing on social action priorities, specifically on the Connected Communities and Second Half Fund.


Naomi Jones

Naomi is an independent social research consultant who works with organisations to support their delivery of research and evaluation and to help them use evidence effectively for change.