
In 2018, a global IT company will buy into a healthcare provider, to use it as an engine for creating algorithms for health.

Artificial intelligence (AI) is looking increasingly like the transformative technology of our times. Cutting edge learning machines are being trained to drive a car, recognise a face in a crowd, and diagnose diseases; capabilities that were out of reach even a few years ago. In an age when vast rewards and adulation can go to those who solve relatively trivial problems such as ordering a taxi, or sharing a photo, AI could be a genuinely radical technological leap.

In healthcare, AI is already showing some impressive capabilities - reaching clinical levels of competence for some diagnostic tasks such as skin cancer and diabetic retinopathy. It can find patterns in data that people alone cannot perceive - for example, diagnosing Parkinson's via an audio recording or cancer via molecules in breath. And, in what could be one of its most dramatic impacts, it is opening up clinical expertise to developing countries that struggle to recruit professionals with the right skills - for example, to improve diagnosis of malaria and tuberculosis in Uganda.

An optimistic but plausible future would see AI delivering early diagnosis, efficient triage, better real-time monitoring of symptoms, and new insights into the personalisation of care. It could take pressure off the health system, and help people be more in charge of their health and care.

As a result, the potential prize for AI companies is particularly big. The last few years have seen enormous investment in healthcare from Google, Apple, and venture capital funds, all aiming to be the leader in this area.

However, the limiting factor on progress at present is not a lack of investment, or the technology itself, but a shortage of data needed to train the machines. Three factors are holding back progress:

1. The data we already have is hard to access, constrained by privacy legislation, wider concerns about the use of personal data and institutional risk aversion.

2. The data quality is poor. AI needs good quality data, and much of the data that the health system holds is inaccurate or poorly coded.

3. It’s the wrong kind of data - AI has made much of its early progress via rich data sources such as images and audio recordings. While some of the data in medical records are of this kind, much of it is in summary form - a far less fertile source of data.

As a result, there is a scramble for data. This scramble is made even more frantic by first-mover advantage. Google maintains its leadership in search due to its user volume and consequently the amount of data it has to improve and perfect its algorithm. Similarly, successful clinical AI will be fed data to analyse, which will also help it improve. The company that first establishes a strong market position will be hard to dislodge.

Tech firms are already signing partnerships with established hospitals and healthcare data owners across the globe (DeepMind with Moorfields, IBM Watson with Sloan-Kettering and Alder Hey, Amazon with Cerner, and many others). But these relationships can be difficult to manage. DeepMind's partnership with the Royal Free Hospital to create an app for diagnosing acute kidney injury was found to be unlawful and prompted furious headlines. DeepMind's response to the ruling blamed struggles with regulation and bureaucracy: "We underestimated the complexity of the NHS and of the rules around patient data." Furthermore, these partnerships do not typically put the technology company in a position to overhaul the quality and scope of the data collected.

A direct and aggressive solution to this situation would be for a big tech company to buy a healthcare provider. This would make it much easier for the company to gain consent from patients where necessary, ensure proper data quality, and make sure the right raw data is recorded. And these companies certainly have the financial firepower - Apple, Google and Microsoft are sitting on almost half a trillion dollars of cash between them.

There are also benefits beyond immediate access to data. It would help tech companies understand the complex set of issues around integrating human expertise with that of machines, eventually accelerating adoption of these technologies. It could also be used to encourage patients to collect data at home via wearables and other devices, increasing the data flows and understanding of new forms of data and data collection.

We are not arguing that this is simple or straightforward. Alongside these benefits come significant risks to both citizens and corporations - risks that demand nuanced and thoughtful resolution.

For corporations, taking on and managing a healthcare provider is a challenge in itself - operational, legal, and regulatory concerns make healthcare a complex business. Further, the public can react strongly against the exploitation of their data - as the Royal Free incident in the UK made clear. Any compromise - or perceived compromise - of patient interests or care to advance corporate interests would be a scandal that could limit further development of the technology - for example, if clinical staff were encouraged to use AI that turned out to give unreliable outputs.

However, these latter issues - public perception of data exploitation and of the safety of AI recommendations - must be resolved for AI to progress. A tech giant could well decide that the fastest route to this resolution is through owning a provider, which offers direct interaction with consumers and direct influence on clinical workflows, rather than working through partnerships, even though the reputational risk is somewhat magnified. Further, the kind of healthcare provider in question makes a difference. A primary care provider offers access to complete medical records and plenty of contact with patients, without some of the institutional complexity and clinical risk of a large hospital. And some consumers in the UK already use apps such as Babylon Health and Push Doctor to access primary care - so tech companies are not inherently off-putting as brands in this area, at least not to some segments of the market.

Apple has reportedly been interested in bricks and mortar healthcare for some time. In October, CNBC broke the news that Apple had been in talks to acquire Crossover Health, a startup that works with big employers to build and run on-site medical clinics, and One Medical, a US-wide primary care group. In both cases the talks did not work out.

Such an acquisition would mark a significant milestone in the increasing private control of key personal data. It raises a host of questions, including:

  • Transparency and privacy. To what extent will people in this scenario understand and be happy with the way their data is being used, and the consequences for their privacy?
  • Duty of care. Could the pressure to gather and use data conflict with the patient’s best interests?
  • Value. Are people getting a fair deal for their data, either as individuals or collectively?
  • Impact on research. Longer term, would the research community and other publicly-funded institutions be shut out from the most scientifically valuable data sources?

We are, broadly, optimists about the potential of AI, but the risks are significant. These would be mitigated within a well-designed regulatory regime, but globally, decisions about principles, trade-offs, and rules in this area have been slow to emerge. Corporations will not wait forever, and we expect at least one company to be bold enough to take this step in 2018.

Illustration: Peter Grundy