Use case: NHS Testbeds – using testbeds to make health services better and more efficient

  • Geography: England
  • Facilitators: Department of Health and Social Care, Office for Life Sciences, NHS England
  • Timeframe: 2016 – ongoing

The primary purpose of the NHS Test Beds programme is to improve services delivered by the NHS and make them more cost-efficient through the use of technology. The programme is also intended to contribute to changing the culture of innovation within the NHS – a large public sector organisation in which, by its nature, it can be difficult to drive and implement innovation across services.

The programme is organised into two-year ‘waves’ to enable learning and evaluation along the way. The first wave commenced in 2016 and included seven testbeds across England, involving 40 innovators and over 250,000 patient participants. The technologies tested included predictive algorithms to manage patients at risk of developing conditions, aggregation of data to improve clinical decision-making, and technology to monitor the risk of crisis in clinical pathways in individuals’ homes or care homes. The testbeds brought together senior government officials, academia, industry, patient groups and charities as partners.

Key success factors and lessons

  • Evaluation is a fundamental part of the testbed: A thorough evaluation provides robust evidence of what works well and where improvement is needed across core objectives. The NHS evaluated whether the intervention improved patient outcomes, lowered health care costs and supported partnerships. Evaluation was necessary to build the evidence that would ease potential adoption of the innovation across the NHS nationwide, and to enable improvement of the process itself in future waves of testbeds. A handbook was created to share the learning from Wave 1.
  • Make learning easy: Structuring the testbeds into two-year waves allowed the learning from the first wave to be applied in the second. Handbooks for data management and evaluation were among the tools created to ensure that lessons learned were disseminated and incorporated.
  • Finding and coordinating the right partners and collaborators takes time: Setting up the testbeds took longer than anticipated, from both the private and public sector points of view. The following elements proved particularly time-consuming: i) getting the core team in place, ii) determining terms of collaboration and shared objectives, iii) information governance and decision-making, and iv) testing and evaluation.
  • Quality of leadership and ownership matters: Finding local bodies with individuals who were sufficiently engaged in, and committed to, implementing digital technologies was core to the success of the testbeds. Ensuring leadership with the experience and resources to engage fully with the testbed was also essential. The testbeds chosen without fully meeting this criterion struggled the most.
  • Clarify and reconcile the (often competing) aims of stakeholders: With a range of stakeholders from diverse sectors involved, competing objectives were an issue. While firms aimed to access data and verify their products, others wanted to use the testbeds to incentivise innovation from small and medium-sized enterprises, or to help local NHS bodies with little tradition of research and development work to ‘catch up’. It was vital to clarify which aim was most important and to stick to it throughout. Limiting the number of stakeholders and the number of technologies tested helped keep the aims clear.