AI applications in healthcare: privacy and bias


As illustrated by Andy Chun’s essay on AI in public healthcare services, China is rushing ahead in deploying AI technology to deliver consistent, higher-quality medical care to more than 1.4 billion citizens. Such rapid, wide-scale use of AI requires near-constant collection, storage and analysis of large troves of personal data. These pervasive, data-intensive operations have raised enduring public concerns about patients’ privacy during and after treatment, and about the security of their data. The sensitivity of public health information has long been a focal point for government action, but it has yet to be fully addressed. With mounting public pressure over frequent medical data leaks affecting tens of millions of patients, we can expect this issue to continue to occupy a significant portion of the applied AI ethics discussion in China.

While this ethical issue is not unique to China, the country’s strong push to apply AI to the world’s largest healthcare system (by patient numbers) carries distinct implications by virtue of its scale. The ethical discourse and the approaches taken by western and Chinese entities show no clear differences in this regard, except that China has been more proactive in curbing privacy breaches under its aforementioned regulatory frameworks: developing and enacting national laws, and actively monitoring and punishing violators.

An additional, albeit less central, ethical concern relates to bias. While bias in Chinese uses of AI in healthcare is still in the early stages of discussion and mostly concerns the urban-rural divide, a study has highlighted the bias that exists in foreign systems used to serve Chinese patients. One example is the trialling of IBM’s Watson for Oncology, mentioned in Andy Chun’s essay, which was trained on western datasets. Discrepancies between the western and Chinese datasets led to some biased system recommendations that did not apply to Chinese patients. This joins other western-trained technologies that exhibit such biases, such as iPhone facial recognition software failing to distinguish between different Chinese faces. This reflects a more general problem: people of Asian descent are among the groups underrepresented in western training datasets.

The ethical question surrounding bias in AI is widely discussed globally, but it is relatively new to internal Chinese ethical discourse. With a largely homogeneous population, Chinese applications of AI in public services have made significant efforts to mitigate biases along other dimensions of diversity, such as dialects. These efforts typically stop short of the broader urban-rural divide, which falls under ongoing discussions on equity and equality.

Authors

Danit Gal

Technology advisor to the UN Secretary General’s High-level Panel on Digital Cooperation and associate fellow at the Leverhulme Centre for the Future of Intelligence at the University …