Today Nesta is launching a new pilot project, ‘Mapping AI Governance’, an information resource about global governance activities related to artificial intelligence. It features a searchable database, a map and a timeline of AI-related governance activities, such as national strategies, regulations, standardisation initiatives, ethics guidelines, and recommendations from various bodies. We want to use the map to advance discussions on Artificial Intelligence (AI) governance and regulation, and to give researchers, innovators and policymakers a useful tool for understanding what’s happening around the world.
Artificial intelligence, often referred to as “the new electricity”, is poised to drive economic growth and development over the coming decades, contributing to the solution of some of the world’s most pressing problems. However, the risks and downsides of unchecked AI deployment, highlighted by examples of biased algorithms, increased profiling and the Cambridge Analytica scandal, demonstrate an urgent need for better governance frameworks. In response, more and more governments around the world are crafting national AI strategies and committing significant funding to enhance their research capacity, boost their innovation ecosystems, support the upskilling and reskilling of their labour force, and create innovation-friendly regulations that mitigate the harms of AI.
Other key focus areas are open data initiatives and the establishment of test environments and sandboxes for the safe and speedy development of AI applications. Industry actors are also engaged in self-regulation, mostly in the form of AI ethics codes or advisory bodies. However, it is unclear to what extent such codes actually influence company practices, or how far the remit of ethics bodies extends. In the absence of binding norms, self-regulatory measures outside industry are emerging as well, seeking to establish a set of values and practices as ethical minimums, such as the Safe Face Pledge, the Global Data Ethics Pledge, or the Lethal Autonomous Weapons Pledge.
It is safe to say that a wide-ranging consensus has emerged around the most pressing issues, namely fairness, accountability and transparency, as well as ensuring that the use of AI systems upholds fundamental human rights. The next step is to begin the tricky job of translating general principles into regulations and standards. As the technology and its applications continue to develop and evolve, we need to make sure the approach to regulation and standards allows room for flexibility, enabling innovation that delivers public good while mitigating potential harms. Currently, self-driving vehicles and automated decision systems are the domains where the most concrete regulatory action is taking place, outside of the work being done on data protection.
While much of the mainstream discussion around AI focuses on the two biggest technology players, the USA and China, other initiatives from around the world are quietly leading the way. In many ways the UK has positioned itself as a leader in this space through the creation of the Centre for Data Ethics and Innovation and the Regulators’ Pioneer Fund. Canada is taking strong action on responsible AI use in government and will be the first country to implement a directive laying out the rules for applying algorithms in the public sphere. Similarly, while the US still lacks any federal laws on self-driving cars, countries like Austria and Singapore are pursuing comprehensive approaches to modernising the entirety of their transportation systems towards autonomous mobility. Such initiatives include a straightforward regulatory environment and large-scale public-private partnerships, and they address a range of issues around autonomous mobility that go beyond technology-readiness levels, from infrastructural requirements to societal acceptance.
Whether AI turns out to be the overwhelmingly beneficial force that many hope for, or whether it propels us closer to a dystopian future, depends on how we decide to govern the technology. The Mapping AI Governance pilot forms part of Nesta’s work on anticipatory regulation, which seeks to develop a proactive and flexible framework of regulation for new and emerging technologies. We hope this resource can kickstart broader discussions and help facilitate learning from best practices around the world. We look forward to hearing how you are using the prototype and what we could do to make it better.
View the map of the global AI governance landscape.
This map is a pilot version that we are keen to build on with the larger expert community. Our plan is to develop both the information and the functionality of the map in partnership with other organisations who can help us do this. If you are interested, please get in touch. We would also love to hear your comments on the map as it is: how useful is it, and what could we improve?