In the early stages of developing an innovation, we often don’t know how plans will progress.

This means it's important to treat your innovation like a work in progress, tweaking and tinkering to improve it as you learn more. Working this way allows organisations to explore new solutions while reducing the time and resources wasted on initiatives that do not work. Whilst evaluation may not be high on the list of priorities when you first develop your idea, building in a way to learn and understand what is and isn’t working is well worth the effort.

A good early stage evaluation is well defined, keeps a limited focus with a small number of research questions, does not attempt to be too complex, and is transparent about what worked well and what did not, so that it can inform future plans, research and evaluation. Ideally, it will also sit alongside the project, feeding data back formatively to help the innovation learn, adapt and grow effectively.

Top tips from our funds for good practice in early stage evaluation

You may not yet have much data of your own, but there could well be other evidence from similar projects or even from an adjacent field. Take a look at the Rapid Evidence Review on mentoring that was carried out to help inform the programme design of several of the funded innovations that used mentoring as their main delivery method.

Time and resources are likely to be scarce, so it can be helpful to keep any evaluation activity relatively narrow. For the evaluation of ‘Library of Things’, the staff team worked with their evaluators to identify which elements of their theory of change to focus on. Covering them all would not have been possible and would have spread limited resources too thinly, so they focused on a selection, with a view to revisiting some of their other outcomes in the future.

The team reflected that being focused in this way meant that, whilst they did not yet have a comprehensive impact measurement framework that fully captured impact across all their sites, they had “simplified our theory of change and created an underpinning measurement framework that is informing the design of our new software and borrowing experience”.

It might be tempting to explore both the impact your innovation is having and how well its processes are working, but doing both of these things well is likely to be challenging. The evaluation of Tutorfair On-Demand focused largely on impact: understanding how well the programme’s tutoring app was, or wasn’t, delivering the intended benefits. By keeping the focus relatively narrow, the Tutorfair Foundation was able to work with its evaluation team to explore how well the programme was delivering using a range of approaches.

Things rarely go completely to plan during an evaluation. The important thing is to be clear and transparent about any changes you have had to make and the likely impact they might have on the data. For the evaluation of Neighbourhood Watch Network’s Communities that Care project, the original evaluation approach had included a pre- and post-survey with beneficiaries of the project. However, the survey relied on volunteers to deliver it, and some of them were not comfortable administering it, so that element of the evaluation was dropped. Greater emphasis was placed on the qualitative elements of the design instead, which resulted in rich and useful data.

All three early stage evaluations that we have published provided grantees with valuable evidence about their innovations. But they also highlighted what else each grantee could do in the future to monitor and evaluate its programme further. An early stage evaluation can build the evidence base considerably, but it won’t ever answer all the questions you might have. It can be useful to use an evaluation to help you plan what learning and evaluation you want to do next, and to build that into any future funding bids. The team from Neighbourhood Watch’s Communities that Care project reflected on what it learnt from undertaking some initial evaluation activity:

“We have learnt from our mistakes in not collecting more data about our volunteers early on. Similarly, to be clear from the outset about the part that our volunteers will play in the evaluation through their own data collection and make this part of the volunteer role. We have seen the benefits of evaluation in terms of volunteer engagement and being able to demonstrate to them the value of their activity. We have also seen the benefit in ensuring that all those involved; volunteers, stakeholders and beneficiaries, are included in the evaluation to gain a rounded view of the successes of the project and the points we can learn from.”

Challenges and successes

Some of the challenges that our early stage grantees faced are outlined below, along with examples of the approaches they took to mitigate them:

1. Lack of time and resource

Challenge: All evaluation activity takes time to plan and set up. This might include getting monitoring arrangements in place, thinking through data protection issues, contacting people to take part in research or briefing staff or volunteers on data collection approaches. For smaller innovations with limited resources this can be hard to manage. Planning evaluation activity ahead of time allows resources to be booked in. Social innovators of all sizes also find that they need to make pragmatic choices about which types of evaluation activity to focus on, to ensure that resources are not diverted too far from delivering their main activity.

Success: The team at Library of Things found the amount of time that the evaluation took to be really challenging and had expected their learning partners to deliver more of the evaluation activities. However, since the evaluation had a limited budget, they needed to carry out some of the research themselves. This was difficult as they were a small team trying to develop and deliver their innovation. To manage this, they appointed a student volunteer to carry out some of the research for them and made arrangements for their in-house data analyst to work alongside her. The student was able to use the research for her PhD and the staff team got some extra capacity. It was an imperfect solution, as the student was not experienced in research, but it allowed Library of Things to increase both its evidence base and its evaluation skills while keeping the impact on its resources manageable. The Library of Things team concluded that the upshot of taking this approach was that “Library of Things data analyst Mirela has developed new skills in impact analysis and now feels more able to lead on this work”.

2. Having the necessary skills and knowledge to commission or undertake evaluation

Challenge: Doing evaluation well takes knowledge about which approaches work in which situations and how to apply them. Evaluation also comes with its own terminology, and a lot of the jargon that surrounds it can be off-putting to early stage innovations that haven’t encountered the evaluation world before. Among the grantees we supported there were varying levels of knowledge of research and evaluation.

Success: Nesta’s approach to funding evaluation activity includes the possibility of evaluators acting as ‘learning partners’ to build the skills and capacity of smaller innovations, so that they can not only deliver evaluation activity during the term of their grant but also carry out learning and evaluation beyond the funding. The Communities that Care project run by Neighbourhood Watch was one of the early stage innovations that appointed a learning partner. As part of the contract, the learning partner trained the project team in recruiting for and carrying out in-depth interviews. The staff team went on to successfully recruit for and complete nine in-depth interviews using their new skills. The Neighbourhood Watch team reflected that:

“Training in in-depth interviewing would not have been something we would previously have thought necessary, but the input from our evaluation partner about the importance of this and the feedback from those who participated in the training, confirmed that the skills they were trained in were necessary to conduct effective qualitative interviews. The training gave our Community Engagement Manager the confidence to conduct the interviews with the beneficiaries and she found this a rewarding experience personally as she was able to gather positive feedback about the impact of the project she had been running as well as some valuable learning for future projects.”

3. Finding the right evaluator for your budget

Challenge: Finding an evaluation partner, either to carry out the work on your behalf or to support you in carrying out evaluation activities, can be challenging. Not only do you need someone with the necessary skills and experience, but they also need to be someone you can work with closely and successfully. When you’re evaluating an early stage innovation your budget is likely to be lower too, so cost is key.

Success: Nesta always supports its grantees to identify and appoint an evaluator. For the evaluation of Tutorfair On-Demand, the staff team initially approached an evaluation team from a leading university. However, a pilot of that team’s suggested approach identified challenges, and the Tutorfair Foundation decided to look for a new evaluator. With Nesta’s help it appointed The Social Innovation Partnership (TSIP), which developed an approach and a final report that Tutorfair was happy with. While the Tutorfair team felt that the process of identifying and working with an evaluator had been time consuming, they felt it was worthwhile.

“Although it was quite time-intensive, it was undoubtedly less time-intensive than evaluating the project ourselves, and we felt it was beneficial in several significant ways to collaborate closely with an external partner throughout the grant cycle.”

The Tutorfair team

Early stage innovations case study findings

  • The evaluation of the Tutorfair Foundation’s Tutorfair On-Demand project, which offered one-to-one maths tuition through an app, found that, in general, the app increases access to GCSE maths tuition. It also found that teachers and experienced tutors were positive about the app and the quality of tuition, and that students were also positive about tuition quality, although not always to the same extent.
  • The evaluation of Library of Things, a lending library being piloted in South London before rolling out to other areas, found that Library of Things increases access to low-cost, high-quality items and enables people to develop skills, become more community focused and become more environmentally minded.
  • The evaluation of Neighbourhood Watch Network’s Communities that Care project found that the project demonstrated the effectiveness of a locally led, volunteer-driven programme in addressing both older people’s experience of fraud and the anxiety it causes. It made a difference to the communities it operated within. Volunteers were enabled to deliver effective fraud prevention advice, and partners and stakeholders recognised the value of the work.

Useful resources

  • For more examples of how evaluation can help organisations, take a look at Evidence for Good from the Alliance for Useful Evidence.
  • For guidance on developing a theory of change take a look at New Philanthropy Capital (NPC)’s theory of change in ten steps.
  • For advice on building an evaluation framework and approach, NPC’s Understanding Impact toolkit may be useful.
  • For an overview of how to approach proportionate evaluation, this guide from NPC outlines some of the considerations.
  • For more guidance on evaluation techniques see these helpful guides from Inspiring Impact.
  • For a more detailed overview of evaluation, including experimental and quasi-experimental impact evaluations, the Treasury’s Magenta Book is a useful guide to all aspects of the evaluation process.

For general guidance on using research evidence see this guide from The Alliance for Useful Evidence. The Alliance, a network hosted by Nesta which champions the smarter use of evidence in social policy, also produces a range of reports and resources to support learning and evaluation.

Authors

Sarah Mcloughlin

Senior Programme Manager

Sarah was a Senior Programme Manager.

Carrie Deacon

Director of Government and Community Innovation

Carrie was Director of Government and Community Innovation at Nesta, leading our work on social action and people-powered public services.

Annette Holman

Programme Manager, Government Innovation Team

Annette worked in the Government Innovation Team at Nesta focusing on social action priorities, specifically on the Connected Communities and Second Half Fund.

Naomi Jones

Naomi is an independent social research consultant who works with organisations to support their delivery of research and evaluation and to help them use evidence effectively for change.