Intelligence as an outcome not an input

The world of artificial intelligence (AI) is full of very smart people. Its pioneers are justly treated like rockstars, as breakthroughs continue to be made in fields as diverse as games and diagnostics. Presidents, Prime Ministers and CEOs do all they can to share in some of their stardust, and research is pulling in vast sums of money from companies like Google, Facebook and Alibaba, and from governments.

I’m an enthusiast for AI (and its related fields of data science, computing or machine intelligence). Nesta has been involved as an investor, user, researcher, convenor and proposer around AI for many years. I don’t doubt that in its many forms it will become ever more embedded in the great majority of machines and processes that surround us. But for me it’s still an open question whether this field will be not just smart but also wise. Many fields have turned out to be clever but foolish. I hope AI won’t be one of them.

So far AI’s leaders have shown a healthy determination to reduce the risks that their technologies do harm. The many moves around ethics and bias signal widespread understanding that powerful technologies bring with them big responsibilities (even if the endless theoretical discussions about trolley problems have often distracted from the more pressing and subtle ethical challenges of AI in the real world). We’re also just beginning to see a more serious engagement with the huge carbon costs of machine learning.

But the leaders of AI have yet to make a shift in thinking that could be just as vital if their technologies are really going to do good. This is the shift to thinking of intelligence in terms of outcomes rather than inputs.

Technologists inevitably think about how their tool, gadget or algorithm can be used in the world. They start with a solution and then look for problems. This is a necessary part of any innovation process.

But in most fields it’s even more productive to think the other way around: to start with a need or outcome and then look for answers or tools that can help.

There is a long history of digital technologists getting this wrong. From smart cities and smart homes to digital government, too many have focused on inputs rather than outcomes, hyping fancy applications or hardware that didn’t really meet any needs that matter. Invariably this has led to disappointment, wasted money and backlashes (I’ve lived through quite a few of these cycles). Too many involved in AI may now be repeating exactly the same mistakes.

The world badly needs smarter ways of achieving outcomes, whether for running governments and businesses, education and health systems or media. But it’s a paradox, perhaps the paradox, of our times that proliferating smart technologies have so often coincided with stupider systems.

To understand this, and how it can be avoided, requires better theory as well as better practice. My contention is that the dominant theories around AI are inadequate for either diagnosing the problem or offering good answers.

Many involved in data and AI are of course working on better outcomes: better delivery schedules, click-through rates or diagnostic accuracy. Outcome metrics drive much of the hard work underway in start-ups and vast multinationals (and hyperparameter optimisation methods formalise this, tuning a model against a single outcome metric). Meanwhile some are also grappling with outcomes in more complex systems - like sanitation or multimodal transport systems.
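
To make that parenthetical concrete, here is a minimal sketch of what optimising against a single outcome metric looks like in code. Everything in it is a stand-in: the search space, the evaluate_outcome function and the placeholder score are invented for illustration, not drawn from any real system.

```python
import random

# Hypothetical search space: stand-ins for whatever knobs a real team would tune.
SEARCH_SPACE = {
    "learning_rate": [0.001, 0.01, 0.1],
    "depth": [2, 4, 8],
}

def evaluate_outcome(config):
    """Pretend to train a model and return a single outcome score
    (click-through rate, delivery time, diagnostic accuracy...)."""
    return random.random() - 0.01 * config["depth"]  # placeholder metric

def random_search(n_trials=20):
    """Keep whichever configuration scores best on the one chosen metric."""
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate_outcome(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    print(random_search())
```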

But the risk of messing up is high. Engineering theory has always recognised that you can optimise one element of a system in ways that leave the whole system less optimised. Some are beginning to grapple with this - not least because of examples like Facebook, which optimised click-throughs in ways that left a neighbouring system, democracy, badly damaged. Fields like educational technology are full of examples of tools that appeared to deliver results, but when looked at in context had little or no effect (as Bill Gates put it last month, at least edtech has probably not done much harm). But I've searched long and hard, without success, to find theories, frameworks and examples of genuinely outcome-based approaches involving AI.
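
A toy illustration of that sub-optimisation problem, with all numbers invented for the example: a component metric (click-through rate) keeps improving while a crude stand-in for system-level health declines.

```python
# Toy illustration of sub-optimisation; the functions and numbers are invented.

def tune_for_clicks(sensationalism):
    """Component metric: click-through rate rises with sensationalism."""
    return 0.02 + 0.08 * sensationalism           # CTR between 2% and 10%

def system_health(sensationalism):
    """System-level metric: a stand-in for trust in the wider information system."""
    return 1.0 - 0.9 * sensationalism ** 2        # degrades faster than CTR gains

for s in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"sensationalism={s:.2f}  ctr={tune_for_clicks(s):.3f}  "
          f"system_health={system_health(s):.2f}")
```

Nothing in the first function registers the damage measured by the second, which is the point: an optimiser given only the component metric will march happily towards the worst system-level outcome.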

Why is this? If you really want more intelligent outcomes, and systems that consistently achieve more intelligent outcomes, three conclusions quickly follow, all of which challenge current AI orthodoxy and should be pushing its pioneers to develop new methods.

First, intelligence in real systems, just as in human brains, depends on combining many elements: observation, data, memory, creativity, motor coordination, judgement and wisdom. AI can contribute a great deal to some of these elements, like prediction where there are large datasets, the organisation of memory, or the management of warehouses and recommendation engines. But it offers very little to others, especially those involving nuanced judgements in conditions of uncertainty. So an obvious conclusion is that any serious exercise in intelligence design has to be concerned with hybrids: combinations of machine and human intelligence. This happens all the time in practice. But there is surprisingly little codified method.
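
One hybrid pattern that does get used in practice, even if it is rarely written down as method, is confidence-based routing: the machine decides the cases it is confident about and escalates the rest to a person. The sketch below only illustrates that pattern; the threshold, model_predict and ask_human are hypothetical placeholders, not a recommended design.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    decided_by: str   # "model" or "human"

CONFIDENCE_THRESHOLD = 0.9   # assumption: in practice tuned per task and risk level

def model_predict(case):
    """Stand-in for any classifier that returns (label, confidence)."""
    return "approve", 0.62    # placeholder values

def ask_human(case):
    """Stand-in for a human review queue."""
    return "reject"

def hybrid_decide(case):
    label, confidence = model_predict(case)
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, "model")
    # Low-confidence cases carry the nuance and uncertainty described above;
    # they go to a person, whose answers can later feed back into the model.
    return Decision(ask_human(case), "human")

print(hybrid_decide({"id": 1}))
```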

A second conclusion is that this work of combination requires quite complex design, including, for example, mechanisms to encourage people to share the right ideas and information; shared taxonomies; incentives; culture; and defences against the risks of bias and systematic error. We need, and will need even more in the future, both human supervision of machines and machine supervision of humans. Computer and data science offer some vital insights into how these processes need to be designed - but without psychology, organisational science, economics, decision science and sociology there are bound to be huge errors. I often come across people who claim to do this more holistic design. But when I ask them to share their methods they go silent, and there are no universities teaching this (if I'm wrong, please share the curriculum).

A third conclusion is that intelligence in the real world involves continuous learning. AI has a lot to contribute to first-loop learning - adapting an algorithm (activity or process) to new data. But it can do little to help second-loop learning - which is when we recognise that our existing categories and frameworks are no longer adequate and that we need new categories or concepts to make sense of the world. AI can play a small role in this kind of learning but is poorly suited to it in practice, and even less suited to third-loop learning, where the whole system of cognition is redesigned. Often AI forms part of redesigned cognitive systems, such as how traffic in a city should ‘think’. But there is no AI on the planet that can design the new operating system itself.
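
The difference between the first and second loop can be made concrete in a few lines of deliberately toy code. The first loop is the update call: routine adaptation within a fixed set of categories. The second loop is the moment a person decides the categories themselves no longer fit and rebuilds the frame; nothing inside the model performs that step. The class and category names below are invented for illustration.

```python
# First-loop learning: adjust to new data within a fixed frame of categories.
class FirstLoopModel:
    def __init__(self, categories):
        self.categories = list(categories)            # fixed frame of reference
        self.counts = {c: 0 for c in self.categories}

    def update(self, observation, category):
        if category not in self.categories:
            raise ValueError("unknown category: the model cannot invent new ones")
        self.counts[category] += 1                    # routine adaptation

model = FirstLoopModel(["car", "bicycle"])
model.update("sensor reading", "car")                 # first loop: automated

# Second-loop learning: a human notices the categories no longer fit the world
# and redefines the frame. Here that is a decision expressed in code, not
# something the update loop can do for itself.
new_categories = ["car", "bicycle", "e-scooter"]
model = FirstLoopModel(new_categories)                # rebuilt around new concepts
```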

These points reinforce a profound lesson from history that’s been little debated in the AI field. AI is often talked of as a ‘general purpose technology’ and in my view it will be even more general purpose than past ones like the car or electricity. But a key lesson from these technologies is that they evolved in combination with complementary changes. So the technology of the car (internal combustion, safer bodies, reliable engines, hybrid, satnav) evolved in tandem with rules (road markings, speed limits, emission limits, speed bumps, standards, seat belts, drink-drive), skills for the public (driving lessons and tests), norms (on drink-driving, smoking, polluting, idling) and related innovations (suburbs, supermarkets, platform taxis). On their own technologies can be harmful or ineffective. In combination with rules, norms and complementary innovations they become powerful and useful.

For the AI world, different ways of thinking are badly needed that encourage this kind of combinatorial work. My hope is that taking this kind of intelligence design seriously will open up AI to neighbouring fields in a productive way. One that is close to my heart is the field of collective intelligence, or CI. CI used to be just about things like Wikipedia and the ‘wisdom of crowds’. But it has become more developed in its analysis of how groups can think, act and learn at large scale, encompassing topics like the science of meetings, the practicalities of how millions of people can contribute to citizen science, how consumers can contribute to the work of a company or how a democracy can involve hundreds of thousands of people in shaping decisions, and not just voting in elections and referendums.

The key to its advance has been to break away from being a solution in search of a problem – which limited earlier work on CI in exactly the way that AI has been limited. In the most dynamic fields, the question is flipped on its head: rather than offering crowd wisdom as the answer to any problem, we ask how different tools can contribute to intelligent outcomes. For example, combining crowd deliberation and AI to enhance democracy, or combining large-scale data aggregation, AI and expert input to improve the workings of labour markets.

This is how we work in Nesta’s Centre for Collective Intelligence Design – whether working with the United Nations on achieving the Sustainable Development Goals, improving how labour markets work, or helping healthcare systems improve cancer care. Focusing on intelligence as an outcome not an input forces you to address the different contributions of machines and people to observation, creativity, memory and judgement. It takes you quickly to combinations and hybrids. And it encourages humility on the part of people who come from particular disciplines and backgrounds, and who tend to start off like the person with a hammer who sees every problem as a nail.

We’re taking a similar approach with new funds, including the EdTech Innovation Fund, which Nesta is running for the government. This aims to finance promising uses of AI to improve educational outcomes. But unlike so many other initiatives in this space, we’ll encourage the tech innovators to work closely with teachers, and will run testbeds to try out and improve the technologies, nearly always in combination with human intelligence.

I welcome the big investment in AI around the world and the extraordinary excitement in the field. But I fear that energies are being misdirected. Investment in AI is many thousands of times greater than investment in CI - rebalancing even 1% of the funding flows would deliver big gains.

My plea is simple. In most of the fields I’ve worked in – from business and government to charity – it’s useful to work backwards from the outcomes you want to achieve rather than always forwards from the tools you happen to have at your disposal. Many of the best programmers understand this at a micro level. But it’s missing from the great majority of AI thinking right now. Addressing this simple truth now would do a lot to help the AI world avoid an all-too-possible future of disappointment.

Nesta has been involved in AI as an investor, through research both of AI and using AI, and through convening, running funds and promoting new ideas. You can find summaries here.

Author

Geoff Mulgan

Chief Executive Officer

Geoff Mulgan was Chief Executive of Nesta from 2011-2019.
