
As indicated in the previous section, both the AI strategy and the Party’s overall programme rest on three main pillars: economic growth, national security and domestic governance. Over the past decades, the last of these has been mobilised to realise the first two.

Yet China is now wealthier and more powerful than at any point in its recent history, and on the verge of reaching the first of two markers Xi Jinping put forward: the Party’s 100th anniversary in 2021, by which time China should have become a moderately prosperous society, before growing into a fully-fledged developed nation by the second marker, the centenary of the People’s Republic in 2049. As China continues along this path and leaves the crises of its past further behind, the question of what sort of society it wishes to be becomes ever more acute. Digital technologies and AI will lie at the centre of these debates.

At this point in the argument, it would be customary to reflect on the likelihood that AI might contribute to political liberalisation in China. However, such expectations are wide of the mark. At least within the foreseeable future, the position of the Party seems to be secure, and if fundamental political change were to come to China, chaos and disruption would be far more likely than successful democratisation. Instead, the interesting questions are how the Party could use AI to adapt itself to ever-changing circumstances, and how the development of AI might be constrained by its political context.

First and foremost, it is clear that the use of AI in the domestic security environment will be further expanded in the perpetual pursuit of social and political stability. Yet even if one takes the cynical view that everything the Chinese leadership does is aimed at consolidating Party rule, it must equally be recognised that the leadership believes an efficient way of doing so is ensuring the population is, by and large, satisfied. This is often couched in economic terms and GDP growth rates, but perhaps equally important is the provision of social services to an ever more demanding populace. Under Xi Jinping’s ‘new normal’, the Party seeks to combine a gradually reducing growth rate with greater attention to public welfare and social development. AI will thus also be deployed to remedy problems ranging from an ageing population and shrinking workforce to the shortage of doctors and teachers in remote and rural areas. China’s giant digital corporations will be supportive partners, able to score political points and derive considerable income from helping the state to achieve its goals.

At the same time, two major issues will need to be addressed.

The first is the question of data quality and data protection. China does not have, nor is it likely to develop, an all-encompassing privacy right for individuals. It is in the process of developing data frameworks detailing which specific actors can collect, access and process which data. Yet this process has advanced slowly, as behind-the-scenes negotiations between businesses and the relevant government departments continue to unfold. Businesses are, unsurprisingly, not keen to give other businesses or government offices full access to their data: sharing could erode their competitive advantage, increase the risk of data leaks and undermine user confidence. They have thus often managed to stave off requirements to do so. One reason business data is attractive is that it tends to be more reliable than government data, where incentives for corruption and abuse have proliferated for decades. As the success of any AI application depends on the quality of the underlying data, the success of projects in the public sector may remain below par for quite some time.

The second is the nature of decision-making itself. In many ways, China seeks to use AI and other digital technologies to automate governance, creating self-correcting social systems requiring far less government intervention and supervision than the present model. However, if this requires autonomous decision-making technologies, involving self-improving machine learning algorithms, it threatens the monopoly on political decisions the Party has reserved for itself since 1949. While this is of less importance in areas with clearly defined and generally accepted outcomes, such as healthcare, it may well become a source of tension in politically sensitive areas such as the judiciary.

Eugeniu Han touches on this when he considers System 206 smart courts in his essay on smart cities in this collection, raising the difficulty of challenging AI processes and AI-generated assessments. Conversely, keeping a human in the loop may mean the existing pathologies in the Party-state architecture are sustained. This issue also raises a bigger point about the relationship between AI and means and ends in governance. Based as it is on the idea of perfectibility, the engineering approach may work very well in those areas of social life where the ends are clear and broadly shared, and there is a direct, well-understood connection between inputs and outcomes. Here, AI could play a very important role in raising efficiency and improving social outcomes.

However, many areas of social life are complex and unpredictable, while others are beset by conflicts between values that are all seen as desirable, yet may be incommensurable. In these areas, the engineering approach is highly likely to backfire. This tension can be clearly seen in the use of AI in education, as Yi-Ling Liu explores in her essay, where the most ‘efficient’ form of education is not necessarily the most effective pedagogically. Technocratic decision-making enhanced by technology is simply not a substitute for the agility and flexibility required to navigate the ultimate difficulty of governance: operating under uncertain circumstances, with unpredictable outcomes, on the basis of incomplete information. This, incidentally, is the same criticism made against the Silicon Valley approach discussed earlier: if the technocratic logic is all, where is the room for judgment and accountability?

Authors

Rogier Creemers

Assistant professor in the Law and Governance of China at Leiden University and associate fellow of the Hague Program for Cyber Norms, Institute of Security and Global Affairs