How artificial intelligence can improve decision making in healthcare

Health organisations around the world are actively investing in artificial intelligence (AI) to revolutionise healthcare services. In the UK, the Government has allocated £21 million in funding that NHS trusts can bid for to support the implementation of AI healthcare tools.

Various algorithm-driven platforms already exist to support clinicians in areas such as triage, radiology and imaging. These technologies could potentially address many other challenges faced by the NHS, from enhancing medical training to improving the diagnosis and treatment of disease.

But to fully benefit from these advances, it is crucial to consider the risks involved in using AI so that these tools can be implemented safely within healthcare. Organisations like the World Health Organization (WHO) are urging caution when adopting AI tools, particularly large language models (LLMs) such as ChatGPT, highlighting the risks associated with biased data, misinformation and privacy concerns.

Most public discussion of these risks has focused on regulatory or technical fixes, or on abstract debates about aligning technology with public values that lack the context of real-world scenarios. For example, the ethical implications of an AI system that provides automatic diagnoses to patients are different from those of a system that suggests potential diagnoses to clinicians.

As a result, we need to include healthcare professionals in these discussions, applying their frontline experience to the development of AI tools.

What did we do?

Health practitioners are uniquely positioned to understand the areas where AI tools can have a meaningful impact and how they can augment human expertise.

In the HACID project, we are developing a tool that combines AI and the collective intelligence of doctors to support better diagnoses. We are also exploring whether involving healthcare professionals throughout the development process improves the tool or makes it more useful, for example by increasing trust in AI among clinicians.

What did we learn?

We started by speaking to healthcare professionals to hear their perspectives on existing bottlenecks where AI could make the biggest difference. Clinicians agreed on three key challenges that could be addressed by AI tools:

Patients may have multiple health conditions, take several medications and/or have a long medical history.

A primary care clinician (first point of contact for patients seeking medical care) told us: "There are so many dimensions to health - you give medication for depression, but you don’t understand how that’s impacting something else (eg, their stomach)".

These patients are challenging because their symptoms may be associated with an existing health condition or be a side effect of their current medication. Additionally, they may have multiple health providers, which prevents any single carer from having a holistic view of the patient's health.

AI tools are well-positioned to ingest large amounts of information (eg, medications, symptoms, medical history) and make recommendations or flag risks that can aid clinical decision-making (see the illustrative sketch at the end of this section).

Some patients require specialised expertise or clinicians from several different specialities to work together to decide on the most appropriate care. 

This challenge was particularly common among primary care clinicians. They reported very limited access to expert advice and felt under pressure to make the right diagnosis and refer patients to the right specialist. Similarly, secondary care clinicians face a slow process when seeking advice from other specialists (eg, ‘up to three months for dermatology’), which can delay care for the patient.

AI tools can improve access to rare expert knowledge through smarter knowledge management and recommendation systems.

Some biases or errors might be due to clinicians’ personal experiences, while others might be the result of systemic failures, eg, biases in the training healthcare professionals receive.

We have all heard about the risks of biased AI systems, but humans also make assumptions or biased judgements based on their own experiences. A clinician explained: “Doctors may also have a bias when they are doing clinical judgements - someone walks in and I can see that they are homeless and make a judgement that a rash is a product of their lifestyle. This could mean ignoring an important symptom.”

Clinicians are under time pressure and need to make decisions quickly, which can cause errors. They may also unconsciously introduce bias through judgements based on visual cues. This bias can lead to overlooking crucial symptoms, affecting the accuracy of diagnoses and potentially compromising patient care.

AI tools could help counteract human bias to ensure fair and comprehensive assessments, fostering more equitable and effective healthcare outcomes.
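
To make the first of these challenges more concrete, here is a minimal, hypothetical sketch of how a decision-support tool might ingest a patient's medications and symptoms and flag symptoms that could be side effects of existing medication rather than signs of a new condition. The function, the side-effect lookup and the patient record are all invented for illustration; a real system would draw on curated clinical knowledge bases and far richer patient records.

```python
# Illustrative only: a toy risk-flagging step for a decision-support tool.
# The side-effect table and the patient record are invented examples, not
# clinical data.

KNOWN_SIDE_EFFECTS = {
    "sertraline": {"nausea", "stomach upset", "insomnia"},
    "ibuprofen": {"stomach upset", "heartburn"},
}

def flag_possible_side_effects(medications, symptoms):
    """Return symptoms that may be explained by a current medication."""
    flags = []
    for drug in medications:
        overlap = KNOWN_SIDE_EFFECTS.get(drug, set()) & set(symptoms)
        for symptom in sorted(overlap):
            flags.append(f"'{symptom}' may be a side effect of {drug}")
    return flags

patient = {
    "medications": ["sertraline", "ibuprofen"],
    "symptoms": ["stomach upset", "rash"],
}

# Flags are surfaced to the clinician as prompts, not diagnoses; symptoms the
# tool cannot explain ('rash' here) still rely on clinical judgement.
for flag in flag_possible_side_effects(patient["medications"], patient["symptoms"]):
    print(flag)
```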

Conclusion

As health organisations and practitioners navigate the use of AI in healthcare, it is crucial to actively involve clinicians in its development and deployment. Building on our previous work in participatory AI, we offer four guiding rules that can help practitioners safely implement AI in healthcare:

Start by speaking to patients and healthcare professionals to identify the specific problems that they face. Discuss with users which challenges AI tools might address, as well as their potential limitations. This can improve uptake of the tool in the long run.

Work out early on what users want to get out of the AI system. Elicit user values at the start of the project to understand what ‘good’ looks like for them. For example, we learned that healthcare practitioners most valued “understanding the confidence of a recommendation”.

Healthcare is a high-stakes domain, so it is crucial for clinicians to be able to calibrate their trust in the system and account for errors. This insight could lead to changes in the AI model or the user interface (see the sketch after these guiding rules).

Consider how the system will fit into or replace clinicians’ existing strategies and decision-making aids. Exploring how the tool could enhance the patient-clinician relationship, or collaboration between healthcare teams, multiplies its value and utility.

All technology development needs to go through multiple iterations before it can be deployed. Involving different people in testing and evaluating the tool can help to mitigate the worst risks, but remember that errors will still occur. Therefore, design the system to provide ongoing accountability mechanisms.
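
As an illustration of the kind of model-side change the second rule can prompt, the sketch below shows one common way to make a classifier's confidence scores trustworthy enough to put in front of clinicians: probability calibration. Everything here is hypothetical (synthetic data, scikit-learn's generic calibration wrapper) and is not a description of the HACID tool; it simply shows that "understanding the confidence of a recommendation" is something a development team can design and measure.

```python
# Illustrative only: calibrating a classifier so the confidence shown to a
# clinician roughly matches how often its suggestions are actually correct.
# Synthetic data stands in for real clinical features and outcomes.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An uncalibrated model's raw scores can be over- or under-confident.
raw_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The same model type wrapped with a calibration step learned via cross-validation.
calibrated = CalibratedClassifierCV(
    RandomForestClassifier(random_state=0), method="isotonic", cv=5
).fit(X_train, y_train)

# A lower Brier score means the predicted probabilities better match observed
# outcomes, ie a "90% confident" suggestion is right roughly 9 times out of 10.
print("raw       ", brier_score_loss(y_test, raw_model.predict_proba(X_test)[:, 1]))
print("calibrated", brier_score_loss(y_test, calibrated.predict_proba(X_test)[:, 1]))
```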

These ways of working will bring us closer to a roadmap for safely applying AI within healthcare. By developing technology with patients and clinicians, we can ensure that AI becomes a helpful companion, boosting human skills and improving patient care worldwide.

Authors

Rita Marques

Collective Intelligence Designer, Centre for Collective Intelligence Design

Rita is a collective intelligence (CI) designer, helping to apply the CI design process to environmental and health projects.

Aleks Berditchevskaia

Principal Researcher, Centre for Collective Intelligence Design

Aleks Berditchevskaia is the Principal Researcher at Nesta’s Centre for Collective Intelligence Design.
