Assessing the impact of accelerators: What can you learn from academia and think tanks?
We delved deep into academic and think tank publications to provide you with five lessons on measuring the impact of accelerators
The rise in the number and format of business accelerators around the world has been accompanied by a growing interest in understanding the impact of these programmes by academics and think tanks (e.g., The Startup Factories).
While there is no shortage of news articles highlighting whether business accelerators work or not (here, here and here), we still lack robust evidence on where and how accelerators are adding value. Whether you’re a researcher or a programme manager at an accelerator, knowing how to effectively and accurately assess the impact of your accelerator’s activities is critical to drive progress and achieve your targets. With the aim of closing this gap, we delved deep into academic and think tank publications to provide you with five lessons on measuring the impact of accelerators.
1. Tracking non-successful applicants
Imagine you are an accelerator manager, and it has been one year since cohort X graduated. At the request of one of the programme funders, you’re crunching the numbers to see whether participating startups have increased customer traction, and, to your delight, you find that 75 per cent of the startups have seen 100 per cent or better growth since graduation! But you begin to wonder: how much of this growth was a result of the support offered by your programme? Would the growth have been the same had these businesses not gone through the programme?
Accelerators are typically highly selective in the startups they accept into their cohorts, paying particular attention to their growth potential. As such, startups that are accepted into accelerator programmes are likely to be more successful than those that are not accepted, regardless of whether or not the programme itself added any value. With this in mind, we can conclude two things:
Participating startups may have been as successful without your accelerator support.
Participating startups may be significantly different from (better than) non-participating startups, so a simple comparison between the two groups will not tell you much about the value added by the programme itself.
Researchers have addressed this issue by comparing graduates of specific accelerator programmes with a set of startups that did not participate in an accelerator programme but are very similar across several metrics, such as pre-accelerator funding, founder experience, sector and founding year (referred to as propensity score matching). Along similar lines, others have leveraged accelerators’ application rankings to compare startups just above the acceptance cut-off point with those just below it, on the assumption that there is no discernible difference between the two groups (referred to as regression discontinuity analysis, as per this study).
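To make the matching idea concrete, here is a minimal sketch of a matched comparison in Python. All startups, funding figures and growth numbers below are hypothetical, and a real propensity score matching exercise would match on several covariates at once (founder experience, sector, founding year) rather than the single covariate used here:

```python
# Each startup: (pre_accelerator_funding_gbp, revenue_growth_pct).
# Figures are invented for illustration.
accelerated = [(50_000, 120), (80_000, 90), (20_000, 150)]
non_accelerated = [(48_000, 60), (85_000, 70), (22_000, 80), (200_000, 40)]

def match_and_compare(treated, pool):
    """For each accelerated startup, find the non-accelerated startup with
    the closest pre-programme funding, then return the average growth
    difference across the matched pairs."""
    diffs = []
    for funding, growth in treated:
        # Nearest neighbour on the matching covariate (pre-programme funding).
        _, matched_growth = min(pool, key=lambda s: abs(s[0] - funding))
        diffs.append(growth - matched_growth)
    return sum(diffs) / len(diffs)

print(match_and_compare(accelerated, non_accelerated))  # prints 50.0
```

The point of the matching step is that the comparison group now resembles the cohort on its observable pre-programme characteristics, so the remaining difference in outcomes is a more credible estimate of the programme's contribution.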
We realise that accelerators may not have the capacity or technical ability to conduct this type of analysis. One way forward would therefore be to contribute data to third-party studies, such as Emory University’s Entrepreneurship Database Program, which, since 2013, has collected data from more than 15,000 accelerated and non-accelerated startups to track the overall impact of accelerators. If you are based in the UK, we also encourage you to contribute to our research project, which looks at the impact of accelerators and incubators on startups in the country.
2. Choosing the right metrics
Assessing impact requires metrics, and previous Nesta research has illustrated many of the commonly used performance indicators for business incubation. Reporting against such indicators may be a formal requirement for many programmes (e.g., incubators in receipt of ERDF funding). However, as we have discussed previously, the diversity of missions among accelerators means that the importance attached to different success metrics will invariably differ.
Deciding on a few appropriate metrics should be guided by your goals and resources. In the selection process, you should also be wary of picking metrics that can incentivise poor behaviour. For example, in her review of incubators, Nicole Dee warns that a job creation metric can conflict with the view of investors, who often recommend that investee startups spend conservatively, which could translate into postponing recruitment.
Through our research, we’ve identified more than 40 possible quantitative metrics for assessing the impact of an accelerator on a variety of outcomes (see below for a sample of these). To determine which ones are the most appropriate for your programme, we suggest that you ask yourself the following questions: What are my accelerator’s goals? Which metrics will enable me to assess whether the accelerator is reaching those objectives? Do I have enough time and financial resources to collect data for these metrics? And if not, is there something else I can use as a reasonable proxy?
Of course, this list is not all-encompassing. Accelerators focused on a particular sector may want to add sector or programme-specific indicators (e.g., number of patents filed and/or granted). Additionally, if you are interested in measuring long-term trends, you may want to collect data at 3, 6, or 12-month intervals. At the end of the day, your choice of metrics should be guided by what you seek to do with them.
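As a concrete illustration of what collecting against a couple of these metrics looks like in practice, here is a short sketch computing two of the simpler quantitative indicators (cohort survival rate and follow-on funding rate) from basic cohort records. The startups and figures are invented for illustration:

```python
# Hypothetical cohort records, one dict per startup.
cohort = [
    {"name": "Alpha", "still_trading": True,  "follow_on_funding_gbp": 250_000},
    {"name": "Beta",  "still_trading": True,  "follow_on_funding_gbp": 0},
    {"name": "Gamma", "still_trading": False, "follow_on_funding_gbp": 0},
    {"name": "Delta", "still_trading": True,  "follow_on_funding_gbp": 100_000},
]

def survival_rate(startups):
    """Share of the cohort still trading at the measurement point."""
    return sum(s["still_trading"] for s in startups) / len(startups)

def follow_on_rate(startups):
    """Share of the cohort that raised any follow-on funding."""
    return sum(s["follow_on_funding_gbp"] > 0 for s in startups) / len(startups)

print(f"Survival rate: {survival_rate(cohort):.0%}")            # 3 of 4 -> 75%
print(f"Follow-on funding rate: {follow_on_rate(cohort):.0%}")  # 2 of 4 -> 50%
```

Even metrics this simple become far more informative when collected at regular intervals (e.g., 3, 6 and 12 months after graduation), as suggested above.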
See Nesta’s Incubation for Growth (Appendix A) and the Aspen Institute’s Guidelines – Metrics & Milestones for Successful Incubator Development (pages 24-29) for a comprehensive list of metrics that have been used to assess business support systems.
3. Surveying participants
Understanding whether participating startups are satisfied with your services can be a critical aspect of making your programme more effective. In a 2014 study, the Aspen Network of Development Entrepreneurs (ANDE) asked 54 startups to rank 27 common services (e.g. access to peer mentoring, business plan development, links to strategic partners) offered by eight impact-focused incubators/accelerators, pre- and post-participation.
Along similar lines, to assess the value-add of accelerators, Jad Christensen asked startups how much equity they would have given their accelerators had the programmes not provided any funding. Finally, Hallen et al. conducted in-depth qualitative interviews with entrepreneurs to understand which services or activities they found most useful, what they learned, and how they learned it.
Employing a similar tactic (whether it is anonymised surveys or one-on-one/group interviews) and comparing the results can not only help you tailor your activities at the beginning of the programme, but also improve the experience for future cohorts.
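A pre/post comparison in the ANDE style can be tabulated with very little machinery. In this sketch, participants rate how much they value each service before and after the programme on a 1-5 scale; the services echo the examples above, but all ratings are hypothetical:

```python
# Average participant ratings per service (1-5 scale, hypothetical numbers).
pre_ratings  = {"peer mentoring": 3.1, "business plan development": 4.2,
                "links to strategic partners": 3.8}
post_ratings = {"peer mentoring": 4.5, "business plan development": 3.0,
                "links to strategic partners": 4.1}

def rating_shifts(pre, post):
    """Change in average rating per service, biggest gain first."""
    shifts = {service: round(post[service] - pre[service], 2) for service in pre}
    return sorted(shifts.items(), key=lambda kv: kv[1], reverse=True)

for service, shift in rating_shifts(pre_ratings, post_ratings):
    print(f"{service}: {shift:+.2f}")
```

A table like this makes it easy to spot services that participants valued more than expected (worth expanding) and those that underdelivered (worth redesigning for the next cohort).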
ANDE’s benchmarking framework can be found on pages 39-45 of the report. For methods on measuring the performance of your accelerator, we also recommend InBIA’s Metrics for Entrepreneurship Centers (pages 16-23).
4. Breaking down your accelerator into small components
Finding out that your accelerator has an impact is great, but knowing what exactly is driving that success allows managers to improve their programmes (as well as potentially better understand how to scale them effectively). In collaboration with Start-Up Chile, Gonzalez-Uribe and Leatherbee found that startups which receive basic accelerator services (i.e., cash and co-working space), as well as entrepreneurship schooling, outperform those that only receive basic accelerator services.
Meanwhile, in a GALI study of 15 Village Capital accelerators, researchers found that low-performing programmes (as measured by the revenue and investment growth of participating ventures) tended to dedicate more time to programme-related activities than high-performing ones did.
We bring up these examples not to encourage you to carry out sophisticated statistical analysis. Instead, we want to encourage you to test your accelerator approach (in the spirit of A/B testing), which could help you prune what doesn’t work and double down on what does. To assess your activities quickly, short qualitative surveys are a great tool (see pages 33-36 of InBIA’s report for sample survey questions).
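In that A/B-testing spirit, the comparison itself can be as simple as the sketch below: two cohort variants, one with a hypothetical extra mentoring track and one without, compared on an outcome metric. The growth figures are made up, and with cohorts this small a difference could easily be noise, so treat this purely as an illustration of the mechanics:

```python
# Percentage revenue growth per startup, for two hypothetical cohort variants.
variant_a = [110, 95, 130, 105]  # cohort WITH the extra mentoring track
variant_b = [80, 100, 70, 90]    # cohort WITHOUT it

def mean(xs):
    return sum(xs) / len(xs)

# Difference in average growth between the two variants.
difference = mean(variant_a) - mean(variant_b)
print(f"Average growth difference (A - B): {difference:.1f} points")  # 25.0
```

For the comparison to be meaningful, the two variants should be as similar as possible in how startups are selected into them; otherwise the selection effects discussed in lesson 1 resurface here.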
5. Collaborating with academics and think tanks
If you think you cannot do it alone, reach out to the academics and think tanks who are leading the evaluation of accelerators. More often than not, these researchers want their work to influence your actions and behaviours.
As we mentioned earlier, we have just started a project to evaluate the impact of accelerators on startups in the UK. To accomplish this, we will be collecting data on accelerated and non-accelerated startups. The final output of our project will be a report and a series of blogs that distill what works in accelerator support. If you want to join our project or have any questions on measuring the impact of your accelerator, don’t hesitate to contact us at [email protected].
We hope this list inspires you to take a more data-driven approach to evaluating the activities of your accelerator. We acknowledge that assessing your impact can be time-consuming; however, as the number of accelerators rises and new forms of startup support emerge, we believe that allocating time to measuring your services is critical to maximise the value of your offer and distinguish yourself from the pack.