Using big data in preventative healthcare

Big data refers to very large data sets that can be analysed computationally to reveal patterns, connections and insights. Its positive role in healthcare is well established: big data is already being used to prevent epidemics, improve hospital flow and diagnose cancer more quickly.

But its use in preventative healthcare is less well explored. Traditionally, public health information has been delivered at a population level. That is set to change following the recent announcement from the health secretary, Matt Hancock, that the NHS will trial “predictive prevention” technology, using a combination of medical, social media and smartphone data to provide nudges designed to prevent illness.

There is a significant untapped opportunity in pulling together diverse data sets to predict the chances of someone becoming ill based on their behaviours. Our smartphones already track our physical activity. Shopping basket data could be used to support people to make healthier eating choices. Research shows that online search results and social media posts can predict mental health issues before someone presents in a clinical setting. Amazon, Google and Facebook probably know more about your habits and behaviours than your GP does. Combining personal data sets could allow for targeted healthy living messages and, hopefully, support and access to interventions.
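
To make this concrete, here is a minimal sketch of how consented behavioural signals might be combined into a single risk score that triggers a supportive nudge. Every name, weight and threshold below is an illustrative assumption on our part, not a description of any NHS system:

```python
from dataclasses import dataclass

@dataclass
class BehaviouralSignals:
    """Illustrative per-person signals drawn from consented data sources."""
    daily_steps: float            # from a smartphone or wearable
    sugary_items_per_week: int    # from shopping basket data
    low_mood_posts_per_week: int  # from opted-in social media analysis

def wellbeing_risk_score(s: BehaviouralSignals) -> float:
    """Toy weighted score in [0, 1]; the weights are made up for illustration."""
    activity_risk = max(0.0, 1.0 - s.daily_steps / 10_000)
    diet_risk = min(1.0, s.sugary_items_per_week / 20)
    mood_risk = min(1.0, s.low_mood_posts_per_week / 7)
    return 0.4 * activity_risk + 0.3 * diet_risk + 0.3 * mood_risk

def nudge(s: BehaviouralSignals) -> str | None:
    """Offer a supportive message only when the combined score is high."""
    if wellbeing_risk_score(s) > 0.6:
        return "You may find our free local walking groups helpful."
    return None

print(nudge(BehaviouralSignals(daily_steps=2_000,
                               sugary_items_per_week=18,
                               low_mood_posts_per_week=5)))
```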

The health secretary’s idea is not without its challenges, though. The first is safeguarding the data. It would be unacceptable for the NHS to withhold care because of a person’s lifestyle choices, and we don’t want to end up in a situation similar to the US, where insurance companies are gearing up to base their premiums on people’s shopping data.

A second challenge is algorithmic bias. Deep learning algorithms like the ones that would power this new technology depend on large amounts of data. However, the quality of an algorithm depends on the quality of the data used to train it, and also on the design of the artificial intelligence (AI). Badly designed AI, and algorithms that penalise certain groups or make decisions with negative consequences, are a real risk.

Lastly, access to support is essential. Targeted healthy living messages are not very helpful if they are not accompanied by access to interventions such as smoking cessation services, weight management courses or mental health support - all public health services that are under strain from budget cuts. We also know from our work on Good Help that the right kind of support - the kind that helps people identify their own motivation for change and develop the confidence they need to act - can be critical to success and have a dramatic impact.

We have three pieces of advice for Mr Hancock if he wishes to roll out his predictive prevention idea successfully:

1. Let citizens control their data

To address concerns about misuse of data, the NHS should support people to understand the benefits of sharing their personal data to generate insights that can help them improve their health and wellbeing. The data needs to be controlled by the individual, with clear consent models and mechanisms for deciding who it is shared with. Safeguards need to be put in place so that people are not penalised for their lifestyle choices but are supported to make better ones.
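
As one sketch of what individual control could look like, the structure below attaches explicit, revocable, per-source and per-purpose consent to a person’s data; a service would have to check it before reading anything. The field names and purposes are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ConsentGrant:
    """One explicit, revocable permission: which source, for what purpose, shared with whom."""
    source: str           # e.g. "shopping_basket", "smartphone_activity"
    purpose: str          # e.g. "healthy_living_nudges"
    shared_with: str      # e.g. "NHS predictive prevention service"
    granted_at: datetime
    revoked_at: datetime | None = None

    def active(self) -> bool:
        return self.revoked_at is None

@dataclass
class CitizenDataWallet:
    """The individual holds the grants; services must check them before reading data."""
    grants: list[ConsentGrant] = field(default_factory=list)

    def permits(self, source: str, purpose: str) -> bool:
        return any(g.active() and g.source == source and g.purpose == purpose
                   for g in self.grants)

wallet = CitizenDataWallet([ConsentGrant("shopping_basket", "healthy_living_nudges",
                                         "NHS predictive prevention service",
                                         datetime.now())])
assert wallet.permits("shopping_basket", "healthy_living_nudges")
assert not wallet.permits("shopping_basket", "insurance_pricing")  # never consented
```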

2. Use People Powered AI

The AI driving the personalisation needs to be people powered and ethical. AI should not be used in ways that exacerbate health inequalities, particularly for those who face the most challenges and disadvantage in relation to their health and wellbeing. Algorithmic bias must be designed out of the system, which means it should be built by data scientists who represent a diverse population. AI is growing in complexity and autonomy, but this system must embrace Explainable AI, where the rationale for decision making can be understood, questioned and held to account. There must be a clear articulation from the NHS of ‘who benefits’ and whether the benefits are deemed ‘fair’. The Government’s proposed Centre for Data Ethics and Innovation should take a lead role in this.
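
To illustrate what Explainable AI can mean in practice, here is a minimal sketch of a transparent risk model that reports how much each factor contributed to a given decision, so the rationale can be inspected and challenged. The features and coefficients are invented for illustration:

```python
import math

# Toy, transparent model: hand-set coefficients stand in for a trained model.
COEFFICIENTS = {"low_activity": 1.2, "high_sugar_diet": 0.8, "smoker": 1.5}
INTERCEPT = -2.0

def explain_decision(features: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return the predicted risk and each feature's additive contribution
    to the log-odds, so the basis for the decision can be questioned."""
    contributions = {name: COEFFICIENTS[name] * value
                     for name, value in features.items()}
    log_odds = INTERCEPT + sum(contributions.values())
    risk = 1 / (1 + math.exp(-log_odds))
    return risk, contributions

risk, why = explain_decision({"low_activity": 1.0, "high_sugar_diet": 1.0, "smoker": 0.0})
print(f"risk={risk:.2f}")
for name, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contribution:+.2f} to log-odds")
```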

3. Start with controlled tests in real-world conditions

When dealing with complex systems, we argue that it’s no good ‘proving’ things are safe or reliable in controlled conditions that are nothing like the context in which these technologies are going to be used. Trials must be able to test more complicated feedback cycles and data inputs, with measurable impacts on people’s choices, behaviours and interactions with services. In our report on People Powered AI, Confronting Dr Robot, we suggest real-world experimentation of AI in designated test sites, with non-AI comparators, to understand how AI works in complex systems before wider take-up ‘in the wild’.
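
As a sketch of how a designated test site might split participants between the AI-driven service and a non-AI comparator, the snippet below assigns people deterministically and roughly evenly to the two arms; the arm names and hashing scheme are our assumptions:

```python
import hashlib

# Participants are split between the AI-nudge arm and a non-AI comparator
# (e.g. standard public health messaging), so outcomes can be compared.
ARMS = ("ai_nudges", "non_ai_comparator")

def assign_arm(participant_id: str) -> str:
    """Stable assignment: the same participant always lands in the same arm."""
    digest = hashlib.sha256(participant_id.encode()).hexdigest()
    return ARMS[int(digest, 16) % len(ARMS)]

cohort = [f"participant-{i}" for i in range(1000)]
counts = {arm: 0 for arm in ARMS}
for pid in cohort:
    counts[assign_arm(pid)] += 1
print(counts)  # roughly 500 in each arm
```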

Perhaps most importantly, effort must be made to give people the assurance they need to have trust in the technology and wider system. From getting the right regulatory framework in place to making sure that the consent processes are meaningful - these details will be vital to ensuring that this proposal is successful.

Author

Sinead Mac Manus

Senior Programme Manager, Digital Health

Sinead was a Senior Programme Manager for Digital Health in the Health Lab.
