A roadmap for AI: 10 ways governments will change (and what they risk getting wrong)

There is a fever of excitement about AI and government at the moment, part of the wider fever around AI’s implications for just about everything. Unfortunately, much that’s been written about the prospects of AI in government is frustratingly vague or just plain wrong. I’m pretty sceptical of claims that AI will make government evidence-based or decimate the number of public employees, for example. Understanding of the issues is further confused by Hollywood films that show far more advanced AI than is possible now, and by warnings of out-of-control superintelligences, which may be valid but apply a few decades from now rather than to the near future.

So here I attempt a summary of what AI could mean in the medium term for governments, partly based on the many practical ways Nesta is involved in AI, from investment and research to experiments (summarised in this note). I hope for far more use of AI in government and am convinced that in the long run it could improve almost everything governments do, from democratic consultation to tax collection, education to war: amplifying productivity by automating the more repetitive things governments do, and freeing up resources for more complex problems. But along the way I expect we’ll see disappointment and disillusion because of overhype, deep problems of bias and ethical failure, and possibly some really big disasters too. Expect a bumpy ride.

The many shades of AI

A useful starting point is to recognise that there are many different types of AI technology relevant to governments. These include machine learning, which essentially means making sense of patterns in data to make better predictions, and deep learning, the more recent form of machine learning that allows more complex tasks to be performed. This part of AI has received the lion’s share of attention in recent years. But others are relevant too, including computer vision, speech recognition and natural language processing, various data analytic tools (such as clustering and principal components analysis) and robotics (and this list is quite a helpful guide to AI functions that are already available in reality, as opposed to in blockbuster movies).
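
To make the less glamorous end of this spectrum concrete, here is a minimal sketch of the kind of clustering and principal components analysis mentioned above, written in Python with scikit-learn. The data is entirely synthetic; imagine each row as a local authority described by a handful of service metrics.

```python
# A minimal sketch of PCA plus clustering on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
authorities = rng.normal(size=(200, 8))  # 200 authorities, 8 metrics each

# PCA compresses the 8 metrics into 2 components for easier inspection.
components = PCA(n_components=2).fit_transform(authorities)

# KMeans then groups authorities with similar profiles.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(components)
print(labels[:10])
```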

The promise

The promise of these various tools is to help governments make better-informed decisions, whether at the level of policy, implementation or use of public services. They may make it easier to personalise services, particularly in medicine and healthcare; to predict and prevent problems such as water shortages; to enhance interactions through chatbots and cognitive assistants; and to augment human capabilities, for example those of a fire officer entering a burning building. What follows is a list of ten major areas of likely change.

1. Automating everyday processes

In theory the easiest aspect of AI is the automation of routine, repetitive processes: sending out tax reminders, processing payments, enforcing regulations or fines, and streamlining a multitude of back-office functions like payroll. This is probably the least exciting area for AI, but the one with the biggest early productivity gains to offer, so long as whole processes are re-engineered.
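
To show how unexciting, and therefore how tractable, this really is, here is a minimal sketch of a routine automation in Python. The records, dates and the send_reminder function are hypothetical placeholders, not any real tax system.

```python
# A minimal sketch of automating a routine process: queuing tax
# reminders for anyone whose payment is overdue.
from datetime import date

records = [
    {"taxpayer": "A123", "due": date(2018, 1, 31), "paid": False},
    {"taxpayer": "B456", "due": date(2018, 1, 31), "paid": True},
]

def send_reminder(taxpayer_id: str) -> None:
    # In a real system this would call a notification service.
    print(f"Reminder queued for {taxpayer_id}")

for record in records:
    if not record["paid"] and date.today() > record["due"]:
        send_reminder(record["taxpayer"])
```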

2. Pattern recognition and better prediction

Predictive algorithms have been used for many years in public services, whether for predicting risks of hospital admission or recidivism in criminal justice. Newer ones could predict exam results or job outcomes, or help regulators predict patterns of infraction. It’s useful to be able to make violence risk assessments when a call comes in to the police, or to make risk assessments of buildings (sadly not used in London before Grenfell). Health is already being transformed by much better detection of illness, for example in blood or eye tests. In other fields too, AI can be very good at spotting anomalous patterns, including fraud. One of Nesta’s investments, Featurespace, does this very effectively in some commercial services, and there are many other possible applications in inspection and policing, from identity theft to insider trading.
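
The basic shape of these systems is simpler than the surrounding hype: fit a model to historical cases, then score new ones. A minimal sketch, using scikit-learn on synthetic data rather than any real police or health dataset:

```python
# A minimal sketch of a predictive risk model. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                          # 5 features per past case
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)   # 1 = event occurred

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Output probabilities rather than hard labels, so a human decides the threshold.
risk_scores = model.predict_proba(X_test)[:, 1]
print(f"Mean predicted risk: {risk_scores.mean():.2f}")
```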

These uses of AI face plenty of challenges, not least avoiding the bias embedded in past data sets (like the LSI-R test used in the US to predict recidivism, which was developed on data about men and is a poor predictor for women). Most public data sets are also a lot dirtier than commentators realise. Then there are the challenges of using these methods in fields where environments change (and, contrary to the claims of proponents of ‘social physics’, most social patterns are not lawlike in nature). But wherever there are reliable, large data sets and stable causal relationships, there should be scope for much more use of machine learning in the future.
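
One basic discipline that helps with the bias problem is auditing a model’s error rates by group before deployment. A minimal sketch of the idea, with synthetic predictions and labels and with the groups chosen purely for illustration:

```python
# A minimal sketch of a bias audit: compare false positive rates
# across two groups. All values here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, size=1000)        # what actually happened
y_pred = rng.integers(0, 2, size=1000)        # what the model predicted
group = rng.choice(["men", "women"], size=1000)

for g in ("men", "women"):
    mask = (group == g) & (y_true == 0)       # actual negatives in this group
    fpr = (y_pred[mask] == 1).mean()          # how often they were flagged anyway
    print(f"False positive rate for {g}: {fpr:.2f}")
```

A large gap between the two rates is exactly the kind of problem the LSI-R example above illustrates.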

3. Enhancing interactions with citizens

Governments are beginning to use Alexa, Siri and bots of various kinds to handle everyday interactions. Better handling of language has a huge range of uses in government: helping citizens make sense of law and regulations, analysing large-scale consultations, or helping with translation, whether at the UN and European Union or in dealing with refugees and travellers. Bots have the potential to replace a fair proportion of communication with a structured process for dealing with issues, from planning applications to school places. I expect these to become much more widespread, partly because of cost and partly because many of us prefer to deal with a faceless computer rather than a faceless official. There will be interesting dynamics as government bots interact with citizen bots like the DoNotPay bot, which helped hundreds of thousands of people to successfully challenge their parking tickets. There’s also scope for using AI to shape SMS or online nudges, like the text messaging to parents that Nesta has supported, or public health messages. In short, we’re already well down the road to a very different model of interaction between states and citizens.
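
The mechanics of many such bots are humbler than the hype suggests: classify the citizen’s intent, then route them into a structured process. A minimal keyword-based sketch in Python; the intents and replies are invented for illustration, and real systems would use proper natural language processing rather than keyword matching:

```python
# A minimal sketch of a rule-based citizen-service bot: match a
# message to an intent by keywords, then hand off to a process.
INTENTS = {
    "parking": ["parking", "ticket", "fine"],
    "school": ["school", "place", "admission"],
    "planning": ["planning", "extension", "permission"],
}

def route(message: str) -> str:
    words = message.lower().split()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return f"Starting the {intent} process for you."
    return "Sorry, I didn't understand. Connecting you to a person."

print(route("I want to challenge my parking ticket"))
```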

4. New ways of seeing

Computer vision is already obviously useful for security and surveillance. Combined with sensors, it could become much more integral to the management of infrastructure (is this bridge at risk of falling down?) and regulation (is this factory emitting more than the permitted pollutants?). We’re already used to CCTV looking at cars, and China now has an extraordinary, and scary, system for nationwide facial recognition. Some of the big issues will concern how this new power is used and made accountable. An all-seeing state is a mixed blessing, to say the least.

5. Robotics

Robotics has been pioneered in the military, partly to increase kill rates and partly to reduce casualties. Robots can be very useful for entering dangerous environments, such as after man-made or natural disasters. There are many other uses in and around public services, such as cleaning and maintenance. Our Flying High programme, for example, is working with cities and national government on the many potential uses of drones to provide public benefit, most with a significant AI element.

6. Targeting social programmes

With SIX, we’re looking at bigger projects that combine governments, foundations and business to interpret data on whole-population patterns, using predictive algorithms to better target action. Saskatchewan in Canada and Allegheny County in the US are leading examples trying to use a mix of big data, AI and smart social policy to better predict and prevent risks. We’re currently bringing together other examples to think through some of the practical challenges in these projects, like who should own the data and algorithms.

7. Accelerating education

There’s a flood of good ideas for AI in education, some of which Nesta has invested in (like Cogbooks). In a paper last year I described some of the tools being used in higher education. For maths, language teaching and a few other areas, the potential is big, though as with edtech more generally there’s been a shortage of good evidence and testbeds.

8. Enhancing democracy

Democracy may seem a surprising place for AI to play a role. But as we’ve shown, some of the leading experiments in online democracy use AI tools like pol.is to help participants understand the balance and landscape of opinion, showing clusters of views and how different people’s views relate to each other.
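
Under the hood, tools of this kind typically work on a participant-by-statement vote matrix. A minimal sketch of the idea, with synthetic votes and a PCA-plus-k-means pipeline that is loosely in the spirit of pol.is rather than its actual code:

```python
# A minimal sketch of mapping the landscape of opinion: each row is a
# participant, each column a statement, values are agree/pass/disagree.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
votes = rng.choice([-1, 0, 1], size=(120, 30))   # disagree / pass / agree

# Project participants into 2D so opinion clusters can be visualised.
positions = PCA(n_components=2).fit_transform(votes.astype(float))
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(positions)

# Each cluster is a group of participants with broadly similar views.
print(np.bincount(clusters))
```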

9. New jobs

Our detailed study of future jobs showed that the cruder forecasts, in which AI simply replaces doctors or teachers, are almost certainly wrong. Look in more detail at the cluster of skills within a job and it becomes apparent that although some aspects are very amenable to automation, others are not. So expect doctors to work with diagnostic AIs, but not to be replaced by them; expect teachers to work alongside personalised learning AIs in maths, but not to be replaced by them. Indeed, we forecast job numbers in some public services, like teaching, to grow rather than shrink in the UK and US, and my guess is that today’s futurologists will turn out almost as wrong as their predecessors, who consistently misunderstood how real labour markets work. But there is a big task to be done in helping the restructuring of jobs, and in helping many existing workers adapt: that’s where the weaknesses of UK labour market supports and adult learning will be shown in particularly stark relief. One other, much less noticed, pattern is the likely creation of new jobs to supervise the AIs. As my colleague Juan Mateos Garcia has pointed out, these may be significant, particularly where AI is handling high-stakes decisions, as in justice or surgery.

10. New forms of regulation and new guiding principles

I’ve written elsewhere about how the regulation of AI could evolve and its place in the broader field of anticipatory regulation. There are big, and still unanswered, questions over transparency, ownership and responsibility. My colleague Eddie Copeland’s recent draft Principles for public services’ use of AI is an important step forward in setting some new ground rules. One other pressure for new regulation is the likelihood of crisis. AI promises more predictable and controlled public services. But a striking feature of complex systems is that they become opaque even to their creators: more complex and interconnected systems go wrong in ways that no one quite understands. Some crises will be the result of attack; some will be the result of unforeseen interactions, like flash trading algorithms feeding off each other. WannaCry was a signal of where we could be heading with intentional attacks; the BA shutdown last year was a signal of the unintentional crises that could become more normal.

What’s missing and what’s odd?

Governments are keener than ever on AI. But there’s something very odd about the way they come at this issue. Governments have been by far the greatest investors in AI, whether through military and intelligence agencies (particularly in the US and China) or, less directly, through universities. Despite the huge scale of this investment, I can find no examples of systematic government investment in R&D around AI for the main non-military activities of government, from tax and welfare to education and criminal justice. Instead, governments largely depend on the spill-overs from the military or from commercial developments. I still don’t understand why governments don’t recognise how foolish this is and put it right.

So what should be done?

The risk is that governments will oscillate between over-enthusiasm, as they buy into misleading hype, and disillusion, when the promised results don’t materialise. The answer is that they need more in-house capability to be smart customers and commissioners; more serious R&D and experiments; and more serious efforts to deal with public trust and legitimacy, like the UK’s promised new Centre for Data Ethics and Innovation.

Above all they need to think not just about how to use AI, but also about how to design intelligence better to achieve outcomes, which will take them to combinations of artificial and collective intelligence. For this we need a more mature field of intelligence design, the theme of my new book.

I’m fairly confident this will be common sense in a decade or two. But for now, too much of the debate repeats the consistent mistake of past attempts at digital government: focusing on how to use shiny new tools rather than on what’s really needed to make systems more intelligent, or, to put it another way, seeing technology as an end, not a means.

Geoff Mulgan is speaking at the Westminster eForum Keynote Seminar: Artificial intelligence and robotics on 27 February 2018 in London.

Author

Geoff Mulgan
Chief Executive Officer

Geoff Mulgan was Chief Executive of Nesta from 2011-2019.