Artificial intelligence (AI) could transform healthcare, with potentially huge benefits, but it also runs the risk of creating an inhuman system.

Key findings:

  • The first impact of AI is not likely to be replacing doctors, but providing advice and triage, where there is great need but few solutions.
  • This puts AI between doctors and patients - a hugely significant role:
    - Done well it could empower patients and make the healthcare system more sustainable;
    - But done badly it could be unsafe and take control away from patients and doctors.
  • Maximising benefits and minimising risks means engaging patients, disciplined testing in real world conditions, shaping the market, and increasing understanding of the technology.

Artificial intelligence could become part of the front door to healthcare. It could make the health system simpler, more accessible, more responsive, more sustainable, and give patients more control.

But there’s a risk that the public could experience it more as a barrier than an open door, blocking access to care, offering opaque advice and dehumanising healthcare in every sense.

There is currently a window of opportunity to put in place measures that ensure the technology develops into 'People Powered AI': supporting care that is simple, gives patients control, is centred on an equal dialogue, and is accountable and equitable.

Policy recommendations:

  • Public and clinical scrutiny: Involve citizens and clinical professionals in the upstream design, development and implementation of the technology. This should include the requirement of mechanisms - such as public panels made up of citizens - that ensure technology development and implementation takes account of the demands and perspectives of citizens and healthcare professionals, and ensures that People Powered AI principles are applied.
  • Controlled tests in real-world conditions: Enable real-world experimentation of AI in designated test sites, with non-AI comparators, to understand how AI works in complex systems before wider take-up ‘in the wild’.
  • Proactive market design: System leaders should actively engage in market design to maximise public benefit and ensure a plural market with genuine choice. This should include regulation that is upstream and proactive ('anticipatory regulation'), clarity over who owns both algorithms and data, and required adherence to key design principles, such as the People Powered AI principles. Market design should also foster a diversity of new entrants to the market, including procurement processes that work for smaller companies and market structures that support a diverse range of R&D activities.
  • Decision-makers equipped to be informed users: Create a new cadre of public leaders and decision-makers with the technical skills, authority and institutional levers to scrutinise, manage and deploy AI in a responsible way. This should include incorporating artificial intelligence into medical education and health management training to enable the frontline workforce to be informed users of the technology.

Part of Health Lab

Authors

John Loder

Head of Strategy

John works in the Health Lab. He has a particular interest in the potential of data to improve healthcare, and leads Nesta's digital health work, such as Dementia Citizens. He co-wrot...
Lydia Nicholas

Senior Researcher, Explorations

Lydia was a senior researcher in Nesta’s Explorations team, focusing on how minds, systems and technologies come together to perform better at complex challenges with particular focu...