AI applications in social credit systems: privacy and transparency

www.nesta.org.uk/report/chinas-approach-to-ai-ethics/ai-applications-social-credit-systems-privacy-and-transparency/

As Dev Lewis shows in his essay on social credit systems, pervasive data collection and linking pose significant privacy concerns. By breaking data silos and linking public data collected by different government departments and corporate actors, these systems enhance access to this data and, at the same time, increase the risk of privacy invasion. The essay also shows that these pervasive data collection concerns are partly addressed by regulations limiting the types of sensitive information that can be collected (such as religious faith). However, private data is still collected and linked to produce social credit scores, putting citizens' privacy at risk if abused.

While the mechanisms of credit score systems are not unique to China (they use methods similar to systems already operating in the USA, for example), the government-led development and deployment of such an initiative is, again, unique in scale. With many competing initiatives, personal data is collected and linked by multiple actors, creating multiple points of vulnerability. Given China's observed approach to privacy violations and data leaks, we may expect the forthcoming Credit Law to address privacy protection (although Dev Lewis notes that local experts believe this is still far from enough).

'While privacy laws in China protect citizens’ data from abuse by non-governmental actors, they do not limit the government’s access to and use of private data'

Another ethical issue highlighted by this case study is transparency, in particular, transparency around how these social credit scores are calculated and what the repercussions of calculation errors are. Without knowledge of how these mechanisms work, citizens will have a hard time appealing their decisions. The models and variables used to calculate a citizen's score remain largely opaque, making auditing these systems nearly impossible. Without such basic transparency, inequality between citizens with high and low scores may be further exacerbated, unchecked.

Social credit scores are still largely voluntary at the moment, and improving transparency is essential before making them mandatory. The lack of transparency in credit scoring schemes is a globally shared ethical issue we must collectively tackle before using AI in public services that can grant or deny access to other critical public services.

Authors

Danit Gal

Technology advisor to the UN Secretary General’s High-level Panel on Digital Cooperation and associate fellow at the Leverhulme Centre for the Future of Intelligence at the University …