About Nesta

Nesta is an innovation foundation. For us, innovation means turning bold ideas into reality and changing lives for the better. We use our expertise, skills and funding in areas where there are big challenges facing society.

As China devotes significant effort to realising its goal of becoming a leading global player in artificial intelligence (AI) technology, the question of which values will be reflected in and furthered by these new capabilities has rapidly gained prominence.

In western countries, these questions largely concern how China’s government might use AI to enhance a digital authoritarian state or to expand surveillance of its citizens. Yet within China, too, the ethics and governance of AI have received considerable attention. In the spring and summer of 2019, several companies, business associations and expert groups released principles and policies for the ethical use of AI, which often closely resemble similar documents produced in western countries. The New Generation AI Governance Expert Committee, convened by the Ministry of Science and Technology, declared that AI should ‘conform to human values, ethics and morality […] should be based on the premise of safeguarding social security and respecting human rights, avoid misuse, and prohibit abuse and malicious application’. The Beijing AI Principles, formulated by the Beijing Academy of Artificial Intelligence, state that ‘human privacy, dignity, freedom and rights should be sufficiently respected’. These principles are explored in more detail in Danit Gal’s essay in this collection.

To foreign observers, such language appears counterintuitive. China is not usually seen as a staunch defender of human rights or civil liberties, and its use of AI in repressive surveillance programmes is widely reported. How, then, does one make sense of these initiatives? Are they merely a hypocritical charade, intended to cover up the unsavoury exercise of power by an autocratic regime and to keep Chinese businesses palatable abroad, or do they reflect a genuinely different way of seeing the world, and the role of digital technology in it? Because AI comprises a set of high-priority technologies with considerable political, economic, social and international impact, debates surrounding its governance are a microcosm of the questions about China’s future continually addressed by its leaders. What goals should the technology achieve? Where will it be encouraged, tolerated or rejected? Who will have access to its levers, and to the vast amounts of data involved, and who will be exempt from it?

This essay argues that Communist Party ideology is the best framing device for understanding how China’s leadership intends to steer the development of AI. With specific regard to the deployment of AI in governance, the key ideological assumption is that social order is governed by an objective, external and intelligible set of ‘laws’ (guilü). Big data and AI technologies not only assist in better understanding these laws; they can also help in ‘engineering’ society to solve development problems. This, in turn, will help the Party achieve its utopian goal: the Chinese dream of the great rejuvenation of the Chinese nation.

‘The key ideological assumption is that social order is governed by an objective and intelligible set of laws. Big data and AI technologies not only assist in better understanding these laws; they can also help in “engineering” society to solve development problems’

This essay will elaborate this point in three sections. First, it will review the different components of China’s AI development plans and the motivations driving them. Second, it will briefly sketch how Party ideology developed to the point that AI became a top priority for the leadership. Third, it will discuss the inherent tensions within Party ideology that AI may well lay bare, including the tension between the desire for autonomous decision-making systems and the need to maintain the primacy of Party authority.

Authors

Rogier Creemers

Assistant professor in the Law and Governance of China at Leiden University and associate fellow of the Hague Program for Cyber Norms, Institute of Security and Global Affairs