No countries have yet worked out how to answer the innumerable questions posed by rapid advances in artificial intelligence (AI). That will change in 2018, as governments take the first serious steps towards regulating AI and guiding it towards safer and more ethical uses. They’ll try to speed up the development of new technologies, firms and industries, while also protecting the public from harm. That won’t be easy. But this will become one of the liveliest fields for innovation in government.

2017 brought unprecedented interest in AI. Wall-to-wall media coverage fed a heady mix of apocalyptic fears, hype and hot air. Parliamentary inquiries and grand commissions recommended councils of data ethics or data stewardship bodies. Commentators dived in, warning that jobs would disappear or that we’d live our lives dominated by Big Brother algorithms. Others saw the same trends through the lens of competition – with the UK and China among many countries placing AI at the heart of ambitious industrial strategies.

Struggling to keep up with technical possibility

Policy and regulation have struggled to keep up. The job of regulating new applications of AI falls uneasily between communications regulators, data protection bodies and government departments. None feel confident that they can make informed, wise and legitimate decisions about the tricky questions that are now coming into view.

The potential gains from AI are huge. We already use many forms of AI every day in our phones and homes, and it is touching almost every area of business. AI is being used in public services, whether by local councils using bots for planning or by probation services using machine learning to predict which criminals will reoffend. It’s being used to help us as citizens, with hundreds of thousands of people successfully challenging parking tickets; to promote the public good, for example by identifying health and safety violations or rogue landlords; and in healthcare, through AI-powered diagnostic tools. In many fields, there’s little doubt that AI, used well, could help people make better and more ethical decisions, whether as drivers or as surgeons.

But there’s also no doubt about the scale of the risks. There’s a huge asymmetry between those using the algorithms and those whose lives are affected, and asymmetries create scope for abuse and exploitation.

Governments get serious about regulating AI

I predict that 2018 will be the year when governments get serious about regulating AI to contain these risks. Here are just a few of the challenges they’ll have to grapple with:

- dealing with bias in algorithms (of the kind found in the US criminal justice system);
- deciding who should have the right to see algorithms (such as Facebook’s algorithm to predict whether someone is suicidal), and who should have the right to see the data that lies behind algorithmic decisions;
- whether to prevent firms shaping algorithms in manipulative ways, such as recommending more expensive care options or fake news;
- determining who is liable when an AI causes harm, whether in a driverless car or a medical procedure;
- overseeing how machines are taught to make ethical decisions, like balancing the value of the life of a driver against that of a pedestrian;
- deciding whether to allow AI-driven flash trading in financial markets, or whether AI-based bots can pretend to be human;
- regulating research (like Stanford’s project to predict whether someone is gay, based on an analysis of their face).

No one yet has definitive answers to these and many other questions. But there’s a lively debate about what principles should guide the answers, with suggestions pouring in from organisations like the IEEE, the Future of Life Institute, the ACM, the Oxford Internet Institute and the Nuffield Foundation, which has set up a Convention on Data Ethics. In the US, debate is being fed by bodies like AI Now and danah boyd’s Data & Society, and interested academics like Ryan Calo. A couple of years ago, I proposed some answers in the form of a Machine Intelligence Commission. Others have made proposals for third parties to adjudicate when algorithms are biased, and to develop professional codes of ethics.

So far the suggestions have tended to be rather general and have provided little guidance to practitioners dealing with everyday ethical problems. They’ve done little to shift how AI is being designed and used. But governments are now realising that more serious action is needed. In November 2017, the UK Government announced a new Centre for Data Ethics and Innovation, described as “a world-first advisory body to enable and ensure safe, ethical innovation in artificial intelligence and data-driven technologies”, but the announcement included no detail on its powers, roles or people (other than a budget commitment of £9 million).

The European Commission has been asked to create an agency for robotics and artificial intelligence, in order to help public authorities with technical, ethical and regulatory expertise. The incoming General Data Protection Regulation (GDPR) already promises that, for automated decision-making, data subjects have the right to access “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”.

It’s still unclear what these rights will really mean. But as governments move more confidently into this space, here are a few tests to judge whether they are serious.

Power: We already have many voluntary and private initiatives. By their nature, these lack teeth. Governments will need to empower new institutions to set standards, investigate abuses and, if necessary, impose penalties.

Crosscutting: Regulation will have to be holistic, cutting across sectors and across different tools. It won’t be enough to see this just as an issue of industrial policy or just as an issue of ethics.

Iterative: We’ll need regulators to do more than implement and police laws. Instead, the pace of change means that they’ll have to be iterative and experimental, discovering as they go, and using new methods of anticipatory regulation.

Engaged and open: It won’t be enough to have clever people working behind closed doors. Public anxieties mean we’ll need visible leaders who can get onto the evening news and talk about how they’re navigating the difficult cases, making their reasoning transparent and open to challenge.

Outcome-oriented: Regulators will have to focus on outcomes, which means that they’ll have to be pragmatic about the tools they use, whether that means commissioning AI to police AI, holding public inquiries, or issuing directives. This is a very different approach from many current regulators, which are entirely process-based and respond to cases that are brought to them rather than anticipating what’s around the corner.

There is no playbook on how to do this well. AI will require responses at many levels, from regulation to self-regulation, law to standards, health to warfare, tax to terrorism. The debate is likely to swing between excessive complacency and excessive fear (it’s worth remembering that robots and AI still can’t tie shoelaces, make a cup of tea or make up a good joke). There’s bound to be an uneasy division of labour with existing bodies, including regulators and specialist institutions like the Information Commissioner's Office (ICO), which may try to defend their turf.

But there’s no alternative to getting serious. And although some countries may be tempted by a race to the bottom, an ‘anything goes’ environment for AI, history suggests that the most successful places will be those that can shape clear, firm and fair regulations that are good at coping with rapid change.

Illustration: Peter Grundy