How to evaluate social innovation to create lasting change

Highlighting the experiences and approaches to evaluation and learning across the past three years of the Centre for Social Action Innovation Fund in partnership with the Department for Digital, Culture, Media & Sport (DCMS).

When focusing on creating lasting social change, capturing learning and understanding impact is critical: it enables us to prioritise what we do and to account for the progress we make towards our goals. At Nesta, we place a great deal of importance on ensuring that we, and the people we support, base our work on evidence and take the time to evaluate its impact alongside developing learning about what did and didn’t work.

In this publication, we share findings and reflections across three categories of evaluating innovation programmes:

Early stage

Scaling stage

Larger or more complex evaluation

What is the Centre for Social Action Innovation Fund?

Over the last seven years, Nesta has led work to test and scale initiatives that draw from people powered solutions to help address some of the biggest challenges of our time. In partnership with the Department for Digital, Culture, Media and Sport (DCMS) we have delivered a series of funds, under the banner of the Centre for Social Action Innovation Fund (CSAIF), that harness and embed the power of people helping people alongside public services.

These funds have included:

  • Second Half Fund: supporting the growth of innovations that mobilise the time and talents of people in the second half of their lives to help others.
  • Early Years Social Action Fund: scaling innovations that help children to achieve developmental milestones by directly supporting parents.
  • Savers Support Fund: scaling innovations to improve money management skills and reduce debt for individuals and families.
  • Click Connect Learn: supporting innovations that use digital technology to enable volunteers to tutor pupils from disadvantaged backgrounds to improve their grades at school.
  • Connected Communities Innovation Fund: supporting innovations that mobilise many more people throughout the lifecourse, to support people and places to thrive.

We believe that in the future, the best public services will be people powered – designed to be more open, where each interaction creates connections, deliberately works to enable creative and active citizenship, and brings together professionals and the time and talents of local people to change communities and lives. And importantly, this will not be for 10 people, or 100 with specific needs, but embedded across the system as simply the way in which we operate.

The value and benefits of learning and evaluation

In 2016 we shared the impact of the innovations supported during the first three years of our partnership with DCMS. Now, after a further three years of work supporting an additional 64 innovative organisations with over £10 million of funding and support, we have brought together further insights and learning from programmes that cut across policy areas, from supporting families in the early years of a child's life to educational attainment and better health outcomes.

In sharing their impact and evaluative journey, we hope to not only provide a valuable repository of evidence about people powered programmes, but also give an insight into the value and benefits of learning and evaluation for projects as they test and scale new ideas and approaches.

Evaluation is, in essence, a planned, systematic approach to learning about what does or doesn’t work, how and why change happens, who it happens for and in what way, and the impact that a project or innovation is having. All grantees that we have supported through the first and second stages of the Centre for Social Action Innovation Fund in partnership with DCMS were allocated funds to carry out some element of learning and evaluation activity. While the size, scale and purpose of this activity varied between programmes, the importance that Nesta placed on evaluation activity was consistent. In general, Nesta sees the benefits of learning and evaluation as enabling grantees to make the most of their programmes and to generate learning that will help them shape and refine what they do. However, the specific benefits of evaluation activity vary depending on the focus, stage and size of a programme.

Working with our grantees to develop an evidence base has been a five-stage process:

  1. Working with each grantee to develop a theory of change. A theory of change is a simple roadmap that identifies and links the needs, activities and desired outcomes and impact of a programme. This is an important process to work through because it helps us to understand our grantees’ aims and processes in more depth. It also provides grantees with a solid starting point to guide an effective evaluation.
  2. Assessing existing evidence. We work with our grantees to assess the evidence that they already have or that exists elsewhere. This helps us to understand our grantees’ evidence journey and to identify evidence gaps. It also helps us to understand the organisation's confidence in its programmes and where the work sits on our Standards of Evidence. The Standards of Evidence are designed to assess the strength of the available evidence in relation to the impact that a project, programme or intervention is having.
  3. Developing an evaluation plan. We support our grantees to develop an evaluation plan that will help them generate a useful evidence base. This is done by building on the Theory of Change, taking the grantees’ practical constraints into account and exploring what is most useful for them at their current stage of development.
  4. Selecting an evaluator. We work with our grantees to identify and commission a suitable evaluator. The type of evaluator that each grantee needs is different, depending on the scope and scale of their evaluation. Some require a fully independent evaluator to carry out the majority of the evaluation activity. Others look for an evaluation partner who can support their own evaluation activity and help them to build monitoring and evaluation capacity and skills or to utilise the skills and expertise already available in house. Each programme’s grant includes a specified amount for evaluation.
  5. Monitoring the evaluation process. At this stage we hand over the delivery of the evaluation to the appointed evaluators but continue to play a role as a critical friend.

Some of the finished evaluation reports can be found here. We have published a selection of the reports that demonstrate a range of approaches from across the funds.

There is no set approach to learning and evaluation and we supported our grantees to commission an evaluation approach that best suited their needs, capacity and budget.

  • For smaller or early stage innovations, it wasn’t necessarily appropriate to undertake a more complex evaluation so we supported them to find an evaluator who worked with them to focus on specific outcomes or areas of their project and to build their internal evaluation capacity and understanding. These evaluations often sought to explore both process (how) and impact (so what), but drew from approaches that focused on how those people engaging with the project perceived and experienced the benefits rather than seeking more objective measures. This included using surveys and qualitative research to explore the opinions of the project beneficiaries as well as the views of wider stakeholders connected to the project.
  • For more established programmes that were scaling, there was often a higher evaluation budget and a greater need to demonstrate impact. Our grantees’ evaluators employed a range of methodologies to measure this, including pre- and post- surveys with key beneficiaries or stakeholders. In some cases the evaluators used validated measures (questions that have been tested to ensure more reliable and accurate results), and standalone surveys were also used (a minimal illustrative sketch of this kind of pre/post analysis follows this list). Qualitative approaches such as depth interviews, focus groups or observation were commonly used to help understand the views underpinning specific outcomes and to draw out more detail around process and impact. Although these evaluations included an element of impact measurement, they did not typically try to ascertain whether this impact could be objectively attributed to the programme in question.
  • Some of the grantees were further along on their evaluation journey. In some cases this meant that they had carried out evaluations before and so had a larger evidence base to build on. In others it meant that the evaluations they commissioned were larger or more complex and drew on a wider range of methodologies to measure impact. In two examples, the evaluators used a quasi-experimental design where a comparison group is created in order to help understand whether any impacts can be attributed to the programme. Another more complex methodology employed was Qualitative Comparative Analysis (QCA) to measure impact and identify which combination of conditions was more likely to generate positive outcomes.
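
To make the idea of pre- and post- measurement more concrete, the short sketch below shows one common way evaluators compare scores on a validated measure collected before and after a programme. It is purely illustrative: the data, column names and choice of a paired t-test are our own assumptions rather than anything drawn from the CSAIF evaluations themselves.

```python
# Illustrative pre/post analysis on invented data (not from any CSAIF evaluation).
import pandas as pd
from scipy import stats

# Hypothetical responses: one row per participant, scores on a validated
# measure before and after taking part in the programme.
df = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5, 6],
    "wellbeing_pre":  [18, 22, 15, 20, 17, 19],
    "wellbeing_post": [21, 24, 16, 23, 20, 22],
})

# Keep only participants who completed both waves - missing data is common.
complete = df.dropna(subset=["wellbeing_pre", "wellbeing_post"])

# Paired t-test: did scores change between the pre and post surveys?
t_stat, p_value = stats.ttest_rel(complete["wellbeing_post"], complete["wellbeing_pre"])
mean_change = (complete["wellbeing_post"] - complete["wellbeing_pre"]).mean()

print(f"Mean change: {mean_change:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
# Note: without a comparison group, any change describes what happened to
# participants over time; it cannot be attributed to the programme itself.
```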

Nesta actively encourages its grantees to include a focus on learning and evaluation because it has multiple benefits. These benefits fall into three categories.

1. Evidence helps social innovators to refine their approach and development by:

  • Helping them to be clearer about what their project or programme’s focus and objectives should be.
  • Helping them to understand more about which elements of a project or programme are working and why.
  • Unpicking the impact that a project or programme is having on its intended beneficiaries and why.
  • Identifying which specific elements of a project or programme are facilitating change.
  • Helping them to understand what isn’t working so well and why.
  • Understanding whether a project or programme has any unintended consequences (positive or negative).
  • Helping to understand where to direct limited resources in the future so that the social innovator can make the most efficient and impactful choices.
  • Helping to build the skills and knowledge of staff or volunteers around evaluation.

2. Evidence helps social innovators to gain vital funding and support by:

  • Providing a compelling narrative for potential funders about why a project or programme is important, what about it works best and how it makes an impact.
  • Demonstrating to funders that they care about and understand evidence.
  • Allowing funders to compare projects and programmes to each other more effectively to make informed decisions.
  • Allowing existing funders to see the impact that their funding is making.

3. Evaluation helps social innovators to promote and disseminate their work by:

  • Generating evidence that helps people to understand the reach and impact of a project or programme.
  • Providing evidence and findings that can be tailored for different audiences.
  • Helping to tell their story more effectively.

If you’re considering undertaking an evaluation, then to make the most of it, it’s important to be clear about what you want to get from it. These questions may be helpful to consider:

  • Why are you doing an evaluation? What are the key drivers for your evaluation activity? How is it going to help you? How will you ultimately use the data that are collected? Knowing this will help refine your approach; if your primary driver is to demonstrate impact in order to seek increased funding, then trying to focus on process too may be unrealistic. If you’re carrying out evaluation because you want to refine your project or programme, then taking a more formative approach may be beneficial. In some cases social innovators commission an evaluation because their funder has requested it. However, while this might be the primary driver for an evaluation, it is helpful to consider what your own aspirations are for it and how you think it could benefit your project or programme.
  • What do you want to know? You may have a long list of things you want to find out, but it’s worth being realistic about what you can achieve with the resources available and to try and focus on a few key objectives so that you can get more meaningful information. Getting your objectives and research questions right takes time but is fundamental to designing the right approach to evaluation.
  • How much time and resource do you have available to give to learning and evaluation? Even if evaluation activity is carried out by independent evaluators it will be time consuming for you and your team. Keeping the design simple and focused will often reduce the amount of time you need to give to it. However, it’s important to think about who will manage the evaluation internally and to consider how this responsibility will sit with their existing role.
  • Who are the audiences for the evaluation outputs? An evaluation will help you to refine your project or programme and understand your impact. But you may also want to share your evaluation findings with others. If possible, it’s worth thinking upfront about who else is likely to use the evaluation data and what they’re likely to want or need to know. What sorts of outputs might you need from your evaluation to communicate it to these audiences effectively and is there the potential for findings to be adapted into different outputs for different audiences?
  • What are the chances that you’ll be able to do more evaluation in the future? If there is the potential for you to do more evaluation later on that might help to shape what you do now. Does it make sense for example to explore how well processes are working at this point and look at impact later? Or could you design this evaluation to create a baseline for future activity and work with an evaluator to create measures that could be returned to at a future date to understand progress?

In the following chapters, we provide guidance on how to approach evaluation no matter what stage you’re at on your evidence journey. The first page looks at evaluation for early stage innovations (those that have a strong idea and the culture and ambition to achieve impact). The second explores evaluation for scaling programmes (those being supported to reach more people and fulfil their potential, and so achieve greater impact). The third looks at evaluations for programmes that are further along on their evidence journey and have more evidence to build on, or that plan to attempt more complex evaluation. We have included examples and tips from across the Centre for Social Action Innovation Fund phase 2 and links to published reports from our grantees.

Early stage innovations evaluation

In the early stages of developing an innovation we often don’t know how plans will progress.

This means it's important to treat the innovation as a work in progress, tweaking and tinkering to improve it as you learn more. This allows organisations to explore new solutions while reducing the time and resources wasted on initiatives that do not work. Whilst thinking about evaluation may not be high up on the list of priorities when first developing your idea, developing a way to learn and understand what is and isn’t working is really worthwhile.

A good early stage evaluation is one that is well defined, has a limited focus and a small number of research questions, does not attempt to be too complex and is transparent about what worked well and what did not, in order to inform future plans, research and evaluation. Ideally, it will also sit alongside the project and formatively feed data in to help the innovation learn, shape and grow effectively.

Top tips from our funds for good practice in early stage evaluation

You may not yet have much data of your own, but there could well be other evidence from similar projects or even from an adjacent field. Take a look here at a Rapid Evidence Review on mentoring that was carried out to help inform the programme design of several of the funded innovations that used mentoring as their main delivery method.

Time and resource are likely to be scarce, so it can be helpful to keep any evaluation activity relatively narrow. For the evaluation of ‘Library of Things’, the staff team worked with their evaluators to identify which elements of their theory of change to focus on for the evaluation. Covering them all would not have been possible and would have spread limited resources too thinly, so they focused on a selection, with a view to potentially revisiting some of their other outcomes in the future.

The team reflected that being focused in this way meant that, whilst they did not yet have a comprehensive impact measurement framework that fully captures impact across all their sites, they have “simplified our theory of change and created an underpinning measurement framework that is informing the design of our new software and borrowing experience”.

It might be tempting to try to explore both the impact that your innovation is having and how well the processes are working, but doing both well is likely to be challenging. The evaluation of Tutorfair On-Demand focused largely on impact and on understanding how well the programme’s tutoring app was or wasn’t delivering the intended benefits. By keeping the focus relatively narrow, the Tutorfair Foundation was able to work with its evaluation team to explore how well the programme was delivering by using a range of approaches.

Things rarely go completely to plan during an evaluation. The important thing is to be clear and transparent about any changes you have had to make and the likely impact they might have on the data. For the evaluation of Neighbourhood Watch Network’s Communities that Care project, the original evaluation approach had included a pre- and post- survey with beneficiaries of the project. However, this relied on volunteers to deliver it and some of them were not comfortable administering the survey, so that element of the evaluation was dropped and greater emphasis was placed on the qualitative elements of the design instead, which resulted in rich and useful data.

All three early stage evaluations that we have published provided grantees with valuable evidence about their innovations. But they also highlighted what else the grantee could do in the future to monitor and evaluate their programme further. An early stage evaluation can build the evidence base considerably but it won’t ever answer all the questions you might have. It can be useful to use an evaluation to help you plan what learning and evaluation you might want to do in the future and to build that into any future funding bids. The team from Neighbourhood Watch’s Communities that Care project reflected on what it learnt from undertaking some initial evaluation activity:

“We have learnt from our mistakes in not collecting more data about our volunteers early on. Similarly, to be clear from the outset about the part that our volunteers will play in the evaluation through their own data collection and make this part of the volunteer role. We have seen the benefits of evaluation in terms of volunteer engagement and being able to demonstrate to them the value of their activity. We have also seen the benefit in ensuring that all those involved; volunteers, stakeholders and beneficiaries, are included in the evaluation to gain a rounded view of the successes of the project and the points we can learn from.”

Challenges and successes

Some of the challenges that our early stage grantees faced are outlined below along with examples of their approaches to mitigate them:

1. Lack of time and resource

Challenge All evaluation activity takes time to plan and set up. This might include getting monitoring arrangements in place, thinking through data protection issues, contacting people to take part in research or briefing staff or volunteers on data collection approaches. For smaller innovations with limited resources this can be hard to manage. Planning evaluation activity ahead of time allows resources to be booked in, and social innovators of all sizes find that they need to make pragmatic choices about which types of evaluation activity to focus on, so that resources are not diverted too far from delivering their main activity.

Success The team at Library of Things found the amount of time that the evaluation took to be really challenging and had expected their learning partners to deliver more of the evaluation activities. However, since the evaluation had a limited budget, they needed to carry out some of the research themselves, which was difficult for a small team trying to develop and deliver their innovation. To add capacity, they appointed a student volunteer to carry out some of the research for them and arranged for their in-house data analyst to work alongside her. The student was able to use the research for her PhD and the staff team got some extra capacity. It was an imperfect solution, as the student was not experienced in research, but it allowed Library of Things to increase both its evidence base and its evaluation skills while keeping the impact on its resources manageable. The Library of Things team concluded that the upshot of taking this approach was that “Library of Things data analyst Mirela has developed new skills in impact analysis and now feels more able to lead on this work”.

2. Having the necessary skills and knowledge to commission or undertake evaluation

Challenge Doing evaluation well takes knowledge about which approaches work in which situations and how to apply them. Evaluation also comes with its own terminology, and a lot of the jargon that surrounds it can be off-putting to early stage innovations that haven’t encountered the evaluation world before. Among the grantees we supported there were varying levels of knowledge of research and evaluation.

Success Nesta’s approach to funding evaluation activity includes the possibility of evaluators acting as ‘learning partners’ in order to build the skills and capacity of smaller innovations, so that they can not only deliver evaluation activity during the term of their grant but also develop the skills to carry out learning and evaluation beyond the funding period. The Communities that Care project run by Neighbourhood Watch was one of the early stage innovations that appointed a learning partner. As part of the contract, the learning partner trained the project team in recruiting for and carrying out depth interviews. The staff team went on to successfully recruit and complete nine depth interviews using their new skills. The Neighbourhood Watch team reflected that:

“Training in in–depth interviewing would not have been something we would previously have thought necessary, but the input from our evaluation partner about the importance of this and the feedback from those who participated in the training, confirmed that the skills they were trained in were necessary to conduct effective qualitative interviews. The training gave our Community Engagement Manager the confidence to conduct the interviews with the beneficiaries and she found this a rewarding experience personally as she was able to gather positive feedback about the impact of the project she had been running as well as some valuable learning for future projects.”

3. Finding the right evaluator for your budget

Challenge Finding an evaluation partner to either carry out the work on your behalf or to support you in carrying out evaluation activities can be challenging. Not only do you need someone with the necessary skills and experience, but they also need to be someone you can work with closely and successfully. When you’re evaluating an early stage innovation, your budget is likely to be lower too, so cost is key.

Success Nesta always supports its grantees to identify and appoint an evaluator. For the evaluation of Tutorfair On-Demand, the staff team initially approached an evaluation team from a leading university. However, a pilot of the evaluation team’s suggested approach identified challenges and the Tutorfair Foundation decided to look for a new evaluator. With Nesta’s help it appointed The Social Innovation Partnership (TSIP) who were able to develop an approach and a final report that Tutorfair was happy with. While the Tutorfair team felt that the process of identifying and working with an evaluator had been time consuming, they felt it was worthwhile.

“Although it was quite time-intensive, it was undoubtedly less time-intensive than evaluating the project ourselves, and we felt it was beneficial in several significant ways to collaborate closely with an external partner throughout the grant cycle.”

The Tutorfair team

Early stage innovations case study findings

  • The evaluation of the Tutorfair Foundation’s Tutorfair On-Demand project, which offered one-to-one maths tuition through an app, found that, in general, the app increases access to GCSE maths tuition. It also found that teachers and experienced tutors were positive about the app and the quality of tuition, and that students were positive about tuition quality too, although not always to the same extent.
  • The evaluation of Library of Things, a lending library which was piloting in South London before rolling out to other areas, found that Library of Things increases access to low cost, high quality items and enables people to develop skills, become more community focused and become more environmentally minded.
  • The evaluation of Neighbourhood Watch Network’s Communities that Care project found that the project demonstrated the effectiveness of a locally led, volunteer driven programme in addressing both older people’s experience of fraud and the anxiety it causes. It made a difference to the communities that it operated within. Volunteers were enabled to deliver effective fraud prevention advice, and the partners and stakeholders recognised the value of the work.

Useful resources

  • For more examples of how evaluation can help organisations, take a look at Evidence for Good from the Alliance for Useful Evidence.
  • For guidance on developing a theory of change take a look at New Philanthropy Capital (NPC)’s theory of change in ten steps.
  • For advice on building an evaluation framework and approach, NPC’s Understanding Impact toolkit may be useful.
  • For an overview of how to approach proportionate evaluation this guide from NPC outlines some of the considerations.
  • For more guidance on evaluation techniques see these helpful guides from Inspiring Impact.
  • For a more detailed overview of evaluation including an overview of experimental and quasi-experimental impact evaluations the Treasury’s Magenta Book is a useful guide to all aspects of the evaluation process.

For general guidance on using research evidence see this guide from The Alliance for Useful Evidence. The Alliance, a network hosted by Nesta which champions the smarter use of evidence in social policy, also produces a range of reports and resources to support learning and evaluation.

Scaling innovations evaluation

Innovations can be considered to be scaling when they are working to reach more people and achieve greater impact.

As an innovation grows and works with more people, there is a greater imperative for its leaders to know that it is working and making the biggest possible difference to the aims of the organisation. It is also essential to take time to check that the work is not accidentally doing harm, in spite of our best intentions.

A good evaluation for a larger or scaling programme is one that builds squarely on previous evidence and research, has clear objectives and research questions and is transparent about what is and isn’t possible to achieve.

Top tips from our funds when evaluating scaling programmes

It’s worth taking some time to look at what evidence already exists within your sector or in an adjacent field. Seek out the lessons that any similar scaling projects have learned and look closely at any data you have already collected. Consider what it tells you and where the gaps are as well as how you might collect any evidence (including monitoring data) better. The evaluation of Christians Against Poverty’s Life Skills programme explicitly included a review of research that had taken place previously on the monetary and social impact of Christians Against Poverty’s wider services. This enabled the evaluators both to better understand the programme and to build on the existing research to ensure that the evaluation addressed remaining gaps.

If you’re scaling your programme, then you may well want to explore how your programme is operating, how well beneficiaries are being reached and the extent to which actual delivery aligns with what was planned. This would suggest that a process evaluation would be most useful. However, you may also want to know about the impact of your programme and the extent to which it has achieved its desired outcomes. While it is possible to include elements of process and impact in one evaluation, this will usually involve compromises to the depth or quality of the evidence that can be collected, so it’s worth taking time to be really clear about your key priorities and keep them focused. Transforming Lives for Good’s (TLG) Early Intervention programme is designed to improve the behaviour of children who are struggling at school in order to raise attainment and reduce the risk of truancy and exclusion. The staff and evaluation teams decided to focus the evaluation on impact. Because the research relied heavily on responses from parents and children, there were challenges with questionnaires not being completed and with missing data. If the evaluation team had focused on process too, this may well have exacerbated the challenges they faced in collecting data from parents and potentially compromised the quality of the impact data.

The gold standard for impact evaluation usually aims to attribute change to an intervention or programme by establishing some kind of control or counterfactual, enabling the evaluators to understand what would have happened if the programme had not existed. However, evaluations that include a control tend to be very complex and much more expensive, so it’s important to be realistic about what is possible. It is still possible to understand impact without a control by taking a more realist approach and exploring the extent to which people involved with a programme perceive and experience impacts, rather than aiming to measure those impacts more objectively. The evaluation of St Joseph Hospice’s ‘Compassionate Neighbours’, where local people are trained and supported in their efforts to connect with people in their neighbourhood who are at risk of loneliness and isolation, particularly those with a life-limiting illness, focused on exploring the experiences of both the Compassionate Neighbours and the community involved in the programme. The team commissioned this evaluation to build on previous work, such as a PhD study and a national evaluation of similar programmes, and to enhance their insights into the programme. While taking this approach meant that they could not objectively attribute impact to the programme, the team was able to explore the different types of value that the programme created and generate useful evidence about the perceived impact.

To develop and implement a good evaluation, think about the time and resources it will require. Even the best evaluators will need someone from your programme to manage them and make sure that the evaluation is going the way that you planned. And a lot of learning and evaluation activities will require staff teams to get involved - to track progress, to hand over existing data, to review research materials and other outputs or to contact beneficiaries or stakeholders to ask them to take part in the research. Since the internal resources required can be considerable, Nesta sometimes provides funding to pay for grantees to have internal roles to manage learning and evaluation activities so it’s always worth having that conversation if you’re concerned about the internal resources that an evaluation will require.

Challenges and successes

Some of the challenges that our scaling innovations faced are outlined below along with examples of their approaches to mitigate them:

1. Accessing participants

Challenge Programmes of all types can struggle to access the people involved with them in order to research their views. However, this challenge is magnified when a programme is working with a wide range of people. There are many ethical considerations to work through, and researchers are often reliant on ‘gatekeepers’ to put them in touch with potential participants.

Success The evaluation team for Compassionate Neighbours sought the views of local community members receiving hospice care who had been matched with a Compassionate Neighbour. They found that gaining access to and collecting information from community members was difficult as some hospices were reluctant to refer community members or their carers to be interviewed. The evaluation team got around this to some degree by using peer evaluators (volunteers, many of whom were Compassionate Neighbours) who helped to access and collect data from community members and support data entry across the hospices.

2. Engaging staff and volunteers with the evaluation

Challenge It can be hard to convince staff and volunteers of the benefits of evaluation, particularly as some are not evident until later in the process when data has been collected. Staff and volunteers often have a fundamental role in the data collection process and since this sits on top of other duties to deliver the programme it can be hard for evaluation activities to be given priority.

Success Aesop tackled this challenge head on during the delivery of Dance to Health by ensuring that monitoring was built into the programme from the beginning. In its recruitment of dance artists to lead the groups, Aesop ensured that responsibility for data collection was included in their job descriptions. The evaluators had set out the questions they needed participants to answer at the start of the programme, so these were asked at the beginning of each group session before the dancing began and it became the norm for everyone involved.

3. Low response rates among participants

Challenge In order to have robust and reliable evaluation data, it is important to try to get a good response to surveys and other requests to take part in research. However, this can be very challenging to achieve, especially when people are facing challenges of their own or feeling over-researched.

Success The team evaluating TLG's Early Intervention programme, which was designed to improve the behaviour of children who are struggling at school, found it difficult to get survey responses from parents, which is a common problem. While they were not able to solve this challenge directly, they ensured that the report was very clear about the impact that the missing data may have had on the evaluation results and clearly identified it as a research gap that should be a key focus of any future research on the programme.

Scaling innovations case study findings

  • The evaluation of TLG’s Early Intervention programme, which was designed to improve the behaviour and therefore attainment of children struggling at school, found that the large majority of both parents (79%) and teachers (80%) reported that the children’s difficulties were better at the end of the intervention. It also found that the consistency between teacher and parent reports suggested that behaviour was improving across multiple contexts – both in school and at home. The findings in relation to children’s subjective wellbeing and happiness suggested a small but significant trend towards increased wellbeing at the end of the programme.
  • St Joseph Hospice’s Compassionate Neighbours programme involves local people being trained and supported in their efforts to connect with people in their neighbourhood at risk of loneliness and isolation and then matched with a community member who is receiving hospice care. The evaluation identified positive outcomes from the programme for community members, their carers and the Compassionate Neighbours themselves. Benefits for the hospices were also identified.
  • Aesop’s Dance to Health programme was a nationwide falls-prevention dance programme for older people. The programme was designed to reduce older people’s falls and aimed to deliver health, artistic and social benefits plus savings to the health system. The evaluation found that Dance to Health is helping older people in danger of falling to experience improved confidence and independence and decreased isolation. It also found a notable reduction in the number of falls among older people involved in the programme, positive improvements in participants' physical and mental wellbeing and a reduced fear of falling among participants.
  • Citizens UK's Parents and Communities Together programme ran two projects that were evaluated: ‘MumSpace’ and ‘Book Sharing’. The book-sharing course aims to equip the parent or carer with the knowledge of how to engage the child in interactive book-based play, while the weekly MumSpace groups for parents with babies and toddlers are parent-led peer support groups. The evaluation of MumSpace found that the greatest beneficial impact was among mums on the programme who had the highest severity of anxiety and depressive symptoms. Increased parenting confidence was also found to be a positive impact of the programme. The evaluation of Book Sharing found that children across the book-sharing course showed significant improvement in language acquisition and understanding of vocabulary. Mums also self-reported an improvement in their parent-child relationship as a result of the book-sharing course.
  • Christians Against Poverty’s Life Skills programme was designed to help people live well on a low income. The evaluation was focused on measuring the impact of the programme and developing improved measurement tools to help build Christians Against Poverty’s evaluation capacity. Findings included that those running the centres generally felt well prepared and were able to implement the programme as intended. The programme also appeared to be reaching its intended audience of vulnerable people living on low incomes in deprived areas, and the evaluation suggested that members have a positive experience on the programme.

Useful resources

  • For more examples of how evaluation can help organisations, take a look at Evidence for Good from the Alliance for Useful Evidence.
  • For guidance on developing a theory of change take a look at New Philanthropy Capital's theory of change in ten steps.
  • For advice on building an evaluation framework and approach, NPC’s Understanding Impact toolkit may be useful.
  • For an overview of how to approach proportionate evaluation this guide from NPC outlines some of the considerations.
  • For more guidance on evaluation techniques see these helpful guides from Inspiring Impact.
  • For a more detailed overview of evaluation including an overview of experimental and quasi-experimental impact evaluations the Treasury’s Magenta Book is a useful guide to all aspects of the evaluation process.

For general guidance on using research evidence see this guide from The Alliance for Useful Evidence. The Alliance, a network hosted by Nesta which champions the smarter use of evidence in social policy, also produces a range of reports and resources to support learning and evaluation.

Larger or more complex evaluation

Some innovations may require more complex evaluations, whether because of the nature of the work, the context of the field they operate in, or because they are scaling to work with many more people.

A programme in this category may want to build on existing evidence to fill any gaps, or may be looking to use more complex methodologies to draw out richer detail or attempt to objectively measure impact. Or having looked at impact before, it may be seeking to understand which elements of its process are or aren't working in more depth. More complex evaluations tend to take longer and be more expensive. Therefore a good evaluation at this stage is one that has a really well thought-out design that builds on existing evidence, takes account of previous challenges and has a clear plan for overcoming potential future issues.

Top tips when evaluating larger or more complex programmes

If you’re aiming to commission a more complex evaluation that measures impact by establishing a counterfactual, then you need to ensure that your evaluation team has done similar evaluations in the past. The skills needed to carry out a randomised controlled trial (where participants are randomly assigned to the intervention or a control group) or a quasi-experimental design (where a comparison group is established in other ways and there is no randomisation) are very specific, and these sorts of evaluations should only be undertaken by social scientists with considerable experience who are familiar not only with the necessary techniques but also with the potential pitfalls and how best to avoid them.
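
As a rough illustration of why that expertise matters, the sketch below shows the core calculation behind one common quasi-experimental approach, difference-in-differences, which compares the change in an outcome for participants against the change for a comparison group over the same period. The data are invented and the example is not drawn from any of the evaluations described here; in a real design, constructing a credible comparison group, checking its assumptions and handling missing data are the genuinely hard parts.

```python
# Illustrative difference-in-differences calculation on invented data.
import pandas as pd

data = pd.DataFrame({
    "group":   ["intervention"] * 4 + ["comparison"] * 4,
    "period":  ["pre", "pre", "post", "post"] * 2,
    "outcome": [10, 12, 18, 20,   # intervention group scores
                11, 13, 14, 16],  # comparison group scores
})

# Average outcome for each group before and after the programme.
means = data.groupby(["group", "period"])["outcome"].mean()

change_intervention = means["intervention", "post"] - means["intervention", "pre"]
change_comparison = means["comparison", "post"] - means["comparison", "pre"]

# The estimate: how much more the intervention group changed than the
# comparison group did over the same period.
did_estimate = change_intervention - change_comparison
print(f"Estimated programme effect: {did_estimate:.1f} points")
```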

All evaluations require a degree of project management by the staff team commissioning them, but this need increases as the size and complexity of the evaluation grow. If you’re planning to commission a large evaluation it is worth considering formally allocating part of a staff member’s role to managing it. For more complex evaluations it may also be worth convening a small advisory group, ideally including people who have research and evaluation skills. This can not only help ensure that you get the most out of the evaluation, but can also play a role in linking the data from your evaluation to the wider evidence base. The evaluation of ‘Kinship Connected’ for Grandparents Plus illustrates just how much project management and engagement from the grantee can be required. Early on in the process, the evaluator led workshop sessions with project workers, who were ultimately the ones who would be gathering the evaluation data. These workshops focused on the approach, the questions that would be used and how they would be asked. The project team at Grandparents Plus also invested in a new Customer Relationship Management tool and employed someone to develop it and monitor the data to ensure that the evaluator had what was needed. They also set up regular meetings with their evaluator and the Nesta programme manager that ran throughout the programme to ensure that the evaluation remained on track.

Thinking early about how findings will be communicated applies to all types of evaluation but is particularly salient when a more complex evaluation has been commissioned. If more complex evaluation methods have been used, their write-up may require more detailed explanation, which can be off-putting to some audiences. So it can be helpful to decide early on who the evaluation outputs will be aimed at and to consider the format they should take. In some cases it may make sense to have more than one output aimed at different audiences, and deciding this upfront will enable the evaluation team to focus on how the final data will be presented. For the evaluation of In2ScienceUK, for example, the evaluation team produced a detailed impact report. The In2ScienceUK team then also produced their own summary of the report and made both reports available on their impact page, which includes a short infographic video summarising their impact over the last year.

Challenges and successes

1. Reporting on more complex evaluations can be technical and harder for some audiences to understand

Challenge The methodologies used for more complex or detailed evaluations can be more technical and demand more social science language when written up. This is often necessary to ensure that the various elements of the evaluation are explained in a thorough and transparent way; however, it can be very off-putting for readers, even those who have a good knowledge of social research.

Success: The evaluation of the Grandmentors programme included some more complex methodologies and resulted in a long and relatively technical report. To get around this, the evaluators also produced a clear standalone executive summary which covered the key elements of the evaluation and its key findings to make them more easily digestible.

2. Inconsistencies between sites

Challenge Larger programmes may have scaled to operate in more than one location. While this can present a lot of opportunities for evaluation, it can also be challenging if there are inconsistencies between locations as that can make direct comparison difficult.

Success: The Empowering Parents, Empowering Communities (EPEC) programme from South London and Maudsley NHS Trust had previously undergone a number of impact evaluations and, since it was rolling out multiple local hubs, it chose to commission a process evaluation in order to explore how the variations across the teams affected their impact.

3. Creating a useful control group

Challenge In order to attribute impact to a project, it is necessary to compare the results of that programme to a counterfactual, to understand what would have happened if the innovation had not taken place. This can be done through the creation of a control group, but doing so can be hugely challenging, especially if evaluators are trying to determine the impact of a programme that has already taken place.

Success: For the evaluation of the In2ScienceUK project, the evaluation team needed to explore the impact of the project on two different cohorts of young people, from 2018 and 2019. For the 2019 group, they were able to collect data from a comparison group composed of project applicants who were interviewed but not selected to participate, due to a lack of sufficient placements or scheduling conflicts. However, in 2018 the project did not collect data from a comparison group at either the baseline or follow-up. Therefore, the staff team created an artificial comparison group for that cohort by surveying, after the programme had taken place, those who applied but did not take part. This is a complex process that has a number of limitations, but it is a useful way of understanding potential impact.

“The external evaluation was tremendously important for the charity. It validated our impact measurements, enabled us to compare the impact of our beneficiaries against a control group and spend substantial time really reflecting on impact and how we improve this in our organisation.”

The In2ScienceUK team

Larger or more complex case study findings

  • The EPEC programme from South London and Maudsley NHS Trust is a parent-led parenting programme designed to offer parents support to improve a range of outcomes for children and families. The EPEC team undertook both an internal programme review of the national scaling programme and an external process evaluation building on existing impact evaluations. The internal review found that the scaling programme was a robust and successful test of the capability to deliver EPEC at scale and that Being a Parent courses were consistently highly effective, with clear impact on child, parent and family outcomes. The external process evaluation identified how well the variations across the teams worked and drew out some positive findings in relation to team effectiveness.
  • The In2ScienceUK programme aims to tackle the issue of fewer young people from the lowest income backgrounds progressing to university, having a STEM career or becoming economically stable than more affluent groups. It does this by leveraging the expertise and passion of local scientists, engineers, and technology and maths professionals through work placements and mentoring, workshops and skills days. The evaluation found that participation in the In2scienceUK programme primarily increased students’ confidence in their abilities, improved their understanding of career routes into STEM and provided them with contacts that could offer them advice in the university application process.
  • The Grandmentors programme from Volunteering Matters delivers intergenerational mentoring projects for young people transitioning from care. Its evaluation found that young people who participate in the programme see positive changes in their lives in terms of improved education, employment and training (EET) outcomes.
  • The Grandparents Plus programme, Kinship Connected, provides support to kinship carers. The evaluation found that Kinship Connected had a range of positive impacts: isolation was reduced, and concerns about children decreased as many carers gained a better understanding of their children’s behaviour. Through peer-to-peer support, kinship carers felt a sense of connectedness with others and, from this, a real resilience to cope and pride in their caring role. The cost-benefit analysis also estimated that for every £1 invested in the programme, £1.20 of benefits was generated, equating to a 20% rate of return (the short sketch after this list restates the arithmetic).
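
As a small aside on the arithmetic behind cost-benefit figures like the one above, the sketch below simply restates how £1.20 of benefits per £1 invested translates into a 20 per cent rate of return; it is illustrative only and not part of the published evaluation.

```python
# Illustrative arithmetic: turning a benefit-cost figure into a rate of return,
# using the headline numbers reported for Kinship Connected (£1 in, £1.20 out).
cost_per_pound = 1.00      # £1 invested in the programme
benefit_per_pound = 1.20   # £1.20 of estimated benefits generated

rate_of_return = (benefit_per_pound - cost_per_pound) / cost_per_pound
print(f"Estimated rate of return: {rate_of_return:.0%}")  # -> 20%
```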

Useful resources

  • For more examples of how evaluation can help organisations, take a look at Evidence for Good from the Alliance for Useful Evidence.
  • For guidance on developing a theory of change take a look at New Philanthropy Capital (NPC)’s theory of change in ten steps.
  • For advice on building an evaluation framework and approach, NPC’s Understanding Impact toolkit may be useful.
  • For an overview of how to approach proportionate evaluation this guide from NPC outlines some of the considerations.
  • For more guidance on evaluation techniques see these helpful guides from Inspiring Impact.
  • For a more detailed overview of evaluation including an overview of experimental and quasi-experimental impact evaluations the Treasury’s Magenta Book is a useful guide to all aspects of the evaluation process.

For general guidance on using research evidence see this guide from The Alliance for Useful Evidence. The Alliance, a network hosted by Nesta which champions the smarter use of evidence in social policy, also produces a range of reports and resources to support learning and evaluation.

Summary evaluation reports

Here you will find the full evaluation reports, each with a summary.

Early stage programme evaluation reports

Library of Things - download report

Neighbourhood Watch Network's Communities That Care - download report

Tutorfair Foundation’s Tutorfair On-Demand - download report

Scaling programmes evaluation reports

Aesop's Dance to Health - download report

Christians Against Poverty's Life Skills - download report

Citizens UK's Parents and Communities Together - download report

St Joseph Hospice’s Compassionate Neighbours - download report

Transforming Lives for Good's Early Intervention - download report

Larger or more complex evaluation reports

In2scienceUK - download report

South London and Maudsley NHS Trust Empowering Parents, Empowering Communities - download report

Volunteering Matters Grandmentors - download report

Grandparents Plus Kinship Connected - download report

Authors

Sarah Mcloughlin

Senior Programme Manager

Sarah was a Senior Programme Manager.

Carrie Deacon

Director of Government and Community Innovation

Carrie was Director of Government and Community Innovation at Nesta, leading our work on social action and people-powered public services.

Annette Holman

Programme Manager, Government Innovation Team

Annette worked in the Government Innovation Team at Nesta focusing on social action priorities, specifically on the Connected Communities and Second Half Fund.

Naomi Jones

Naomi is an independent social research consultant who works with organisations to support their delivery of research and evaluation to help them to use evidence effectively for change.