AI applications in smart cities: privacy and accountability

www.nesta.org.uk/report/chinas-approach-to-ai-ethics/ai-applications-smart-cities-privacy-and-accountability/

As Eugeniu Han demonstrates in his essay, AI-enabled smart cities are abundant in China, which hosts about 50 per cent of the world’s smart cities in the making. However, residents’ facial information is collected to board subways, sort trash and enforce recycling, and even obtain toilet paper in public toilets, which raises serious privacy concerns. The use of facial recognition and biometric identification to commute, pay for transactions and use public infrastructure is powered by concentrated, and vulnerable, treasure troves of public data. It also establishes a culture of pervasive monitoring of citizens. Recently, China also came under fire for using facial recognition in Xinjiang for racial profiling-based monitoring.


While this elevates the ethical issue of privacy breaches via smart infrastructure monitoring in China to new heights, it is, again, a concern that extends beyond China. Many other countries use facial recognition in their smart infrastructure for racial profiling-based monitoring. What once again sets China apart, however, is the magnitude and scope of its operations. While privacy laws in China protect citizens’ data from abuse by non-governmental actors, they do not limit the government’s access to and use of private data. In this regard, China lags behind other countries that seek enhanced user privacy while balancing security concerns.

China is also unusual in its attempt to co-create smart courts that automate case handling, as Eugeniu Han describes in Hangzhou. This raises the ethical concern of accountability, given the role the AI application fulfils in holding others accountable. With smart courts and AI judges making opaque decisions that directly affect humans, we are left with the question of who is accountable for the AI systems’ actions. In addition, since these courts handle cases in which Alibaba, the defendant in most of them, is also the co-creator of the smart court, questions of conflict of interest arise, further complicating legal accountability for decisions made using these systems.

The use of AI to support court systems is not unique to China, but the instance of a smart court and AI judge handling claims against a corporate actor while also being developed by that very same corporate actor is. As China moves to advance the rule of law throughout the country, the partial privatisation of smart courts and AI judges highlights the ethical issue of questionable accountability for legal actions carried out by opaque systems.

Authors

Danit Gal

Technology advisor to the UN Secretary General’s High-level Panel on Digital Cooperation and associate fellow at the Leverhulme Centre for the Future of Intelligence at the University …