Collective intelligence as humanity's biggest challenge

In the last few months the world’s media have noticed artificial intelligence programmes that can surpass humans at the most complex games like Go, joined in the excitement around driverless cars, and helped to fuel fears that robots are set to take millions more jobs.

We now live surrounded by new ways of thinking, understanding, and measuring. Some involve data— mapping, matching, and searching for patterns far beyond the capacity of the human eye or ear. Some involve analysis— supercomputers able to model the weather, play chess, or diagnose diseases (for example, using the technologies of firms like Google’s DeepMind or IBM’s Watson). Some pull us ever further into what the novelist William Gibson described as the “consensual hallucination” of cyberspace.

These all show promise. But there is a striking imbalance between the smartness of the tools we have around us and the more limited smartness of the results.

The internet, the World Wide Web, and the Internet of Things are major steps forward in the orchestration of information and knowledge. Yet it doesn’t often feel as if the world is all that clever.

Technologies can dumb down as well as smarten up. Many institutions and systems act much more stupidly than the people within them, including many that have access to the most sophisticated technologies. Martin Luther King Jr. spoke of “guided missiles but misguided men,” and institutions packed with individual intelligence can often display collective stupidity or the distorted worldview of “idiots savants” in machine form. New technologies bring with them new catastrophes partly because they so frequently outstrip our wisdom (no one has found a way to create code without also creating bugs, and as the French philosopher Paul Virilio put it, the aircraft inevitably produces the air disaster).

So what should we do? My central claim is that every individual, organisation, or group could thrive more successfully if it tapped into a bigger mind— drawing on the brainpower of other people and machines.

There are already some three billion people connected online and over five billion connected machines. But making the most of them requires careful attention to methods, avoidance of traps, and investment of scarce resources. As is the case with the links between neurons in our brain, successful thought depends on structure and organisation, not just the number of connections or signals. This may be more obvious in the near future. Children growing up in the twenty-first century take it for granted that they are surrounded by sensors and social media, and their participation in overlapping group minds— hives, crowds, and clubs— makes the idea that intelligence resides primarily in the space inside the human skull into an odd anachronism. Some feel comfortable living far more open and transparent lives than their parents, much more part of the crowd than apart.

The great risk in their lifetimes, though, is that collective intelligence won’t keep up with artificial intelligence. As a result, they may live in a future where extraordinarily smart artificial intelligence sits amidst inept systems for making the decisions that matter most.

To avoid that fate we need clear thinking. For example, it was once assumed that crowds were by their nature dangerous, deluded, and cruel. More recently the pendulum swung to an opposite assumption: that crowds tend to be wise. The truth is subtler. There are now innumerable examples that show the gains from mobilizing more people to take part in observation, analysis, and problem solving. But crowds, whether online or offline, can also be foolish and biased, or overconfident echo chambers. Within any group, diverging and conflicting interests make any kind of collective intelligence both a tool for cooperation and a site for competition, deception, and manipulation.
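The statistical mechanism behind both outcomes is easy to see in miniature. Below is a minimal simulation sketch (not drawn from the book; the numbers and variable names are illustrative assumptions): when people's errors are independent, averaging their estimates cancels much of the error, but when a crowd shares a common bias, as in an echo chamber, averaging simply reproduces it.

```python
# Illustrative sketch only: hypothetical numbers, not data from the book.
# Each person estimates an unknown quantity with some error. Independent
# errors largely cancel when averaged; a shared (echo-chamber) bias does not.
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0
N_PEOPLE = 1000

# Independent crowd: each guess has its own unbiased error.
independent = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N_PEOPLE)]

# Echo-chamber crowd: everyone shares one common bias plus a small personal error.
shared_bias = random.gauss(0, 20)
echo_chamber = [TRUE_VALUE + shared_bias + random.gauss(0, 5) for _ in range(N_PEOPLE)]

print(f"True value:              {TRUE_VALUE:.1f}")
print(f"Independent crowd mean:  {statistics.mean(independent):.1f}")
print(f"Echo-chamber crowd mean: {statistics.mean(echo_chamber):.1f}")
```

Running the sketch typically shows the independent crowd's average landing close to the true value, while the echo-chamber crowd's average stays offset by the shared bias however large the crowd grows.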

Taking advantage of the possibilities of a bigger mind can also bring stark vulnerabilities for us as individuals. We may, and often will, find our skills and knowledge quickly superseded by intelligent machines. If our data and lives become visible, we can more easily be exploited by powerful predators. For institutions, the rising importance of conscious collective intelligence is no less challenging and demands a different view of boundaries and roles. Every organisation needs to become more aware of how it observes, analyses, remembers, and creates, and then how it learns from action: correcting errors, sometimes creating new categories when the old ones don’t work, and sometimes developing entirely new ways of thinking. Every organisation has to find the right position between the silence and the noise: the silence of the old hierarchies in which no one dared to challenge or warn, and the noisy cacophony of a world of networks flooded by an infinity of voices.

That space in between becomes meaningful only when organisations learn how to select and cluster with the right levels of granularity— simple enough but not simplistic; clear but not crude; focused but not to the extent of myopia.

Few of our dominant institutions are adept at thinking in these ways. Businesses have the biggest incentives to act more intelligently and invest heavily in hardware and software of all kinds. But whole sectors repeatedly make big mistakes, misread their environments, and harvest only a fraction of the know-how that’s available in their employees and customers. Many can be extremely smart within narrow parameters but far less so when it comes to the bigger picture. Again and again, we find that big data without a big mind (and sometimes a big heart) can amplify errors of diagnosis and prescription.

Democratic institutions, where we, together, make some of our most important decisions, have proven even less capable of learning how to learn. Instead, most are frozen in forms and structures that made sense a century or two ago, but are now anachronisms. A few parliaments and cities are trying to harness the collective intelligence of their citizens. But many democratic institutions— parliaments, congresses, and parties— look dumber than the societies they serve. All too often the enemies of collective intelligence are able to capture public discourse, spread misinformation, and fill debates with distractions rather than facts.

So how can people think together in groups? How might they think and act more successfully? How might the flood of new technologies available to help with thinking— technologies for watching, counting, matching, and predicting— help us together solve our most compelling problems?

In my book, I describe the emerging theory and practice that points to different ways of seeing the world and acting in it. Drawing on insights from many disciplines, I share concepts with which we can make sense of how groups think, ideas that may help to predict why some thrive and others falter, and pointers as to how a firm, social movement, or government might think more successfully, combining the best of technologies with the best of the gray matter at its disposal. I sketch out what in time could become a full-fledged discipline of collective intelligence, providing insights into how economies work, how democracies can be reformed, or the difference between exhilarating and depressing meetings. Hannah Arendt once commented that a stray dog has a better chance of surviving if it’s given a name, and in a similar way this field may better thrive if we use the name collective intelligence to bring together many diverse ideas and practices.

The field then needs to be both open and empirical. Just as cognitive science has drawn on many sources— from linguistics to neuroscience, psychology to anthropology— to understand how people think, so will a new discipline concerned with thought on larger scales need to draw on many disciplines, from social psychology to computer science, economics to sociology, and use these to guide practical experiments. Then, as the new discipline emerges— and is hopefully helped by neighbouring disciplines rather than attacked for challenging their boundaries— it will need to be closely tied into practice: supporting, guiding, and learning from a community of practitioners working to design as well as operate tools that help systems think and act more successfully.

Collective intelligence isn’t inherently new, and throughout the book I draw on the insights and successes of the past, from the nineteenth-century designers of the Oxford English Dictionary (OED) to the Cybersyn project in Chile, from Isaac Newton’s Principia Mathematica to the National Aeronautics and Space Administration (NASA), from Taiwanese democracy to Finnish universities, and from Kenyan web platforms to the dynamics of football teams. In our own brains, the ability to link observation, analysis, creativity, memory, judgment, and wisdom makes the whole much more than the sum of its parts. In a similar way, I argue that assemblies that bring together many elements will be vital if the world is to navigate some of its biggest challenges, from health and climate change to migration. Their role will be to orchestrate knowledge and also apply much more systematic methods to knowledge about that knowledge— including metadata, verification tools, and tags, and careful attention to how knowledge is used in practice. Such assemblies are multiplicative rather than additive: their value comes from how the elements are connected together. Unfortunately, they remain rare and often fragile.

To get at the right answers, we’ll have to reject appealing conventional wisdoms. One is the idea that a more networked world automatically becomes more intelligent through processes of organic self-organisation. Although this view contains important grains of truth, it has been deeply misleading. Just as the apparently free internet rests on energy-hungry server farms, so does collective intelligence depend on the commitment of scarce resources. Collective intelligence can be light, emergent, and serendipitous. But it more often has to be consciously orchestrated, supported by specialist institutions and roles, and helped by common standards. In many fields no one sees it as their role to make this happen, as a result of which the world acts far less intelligently than it could. The biggest potential rewards lie at a global level. We have truly global internet and social media. But we are a long way short of a truly global collective intelligence suitable for solving global problems— from pandemics to climate threats, violence to poverty. There’s no shortage of interesting pilots and projects. Yet we sorely lack more concerted support and action to assemble new combinations of tools that can help the world think and act at a pace as well as scale commensurate with the problems we face. Instead, in far too many fields the most important data and knowledge are flawed and fragmented, lacking the organization that’s needed to make them easy to access and use, and no one has the means or capacity to bring them together.

Perhaps the biggest problem is that highly competitive fields— the military, finance, and to a lesser extent marketing or electoral politics— account for the majority of investment in tools for large-scale intelligence. Their influence has shaped the technologies themselves. Spotting small variances is critical if your main concern is defense or finding a comparative advantage in financial markets. So technologies have advanced much further to see, sense, map, and match than to understand. The linear processing logic of the Turing machine is much better at manipulating inputs than it is at creating strong models that can use the inputs and create meanings. In other words, digital technologies have developed to be good at answers and bad at questions, good at serial logic and poor at parallel logic, and good at large-scale processing and bad at spotting non-obvious patterns. Fields that are less competitive but potentially offer much greater gains to society— such as physical and mental health, environment, and community— have tended to miss out, and have had much less influence on the direction of technological change.

The net result is a massive misallocation of brainpower, summed up in the lament of Jeff Hammerbacher, the former head of data at Facebook, that “the best minds of my generation are thinking about how to make people click ads.”

The stakes could not be higher. Progressing collective intelligence is in many ways humanity’s grandest challenge, since there’s little prospect of solving the other grand challenges of climate, health, prosperity, or war without progress in how we think and act together. We cannot easily imagine the mind of the future. The past offers clues, though. Evolutionary biology shows that the major transitions in life— from chromosomes to multicellular organisms, prokaryotic to eukaryotic cells, plants to animals, and asexual to sexual reproduction— all had a common pattern. Each transition led to a new form of cooperation and interdependence, so that organisms that before the transition could replicate independently, afterward could only replicate as “part of a larger whole.”

Each shift also brought with it new ways of both storing and transmitting information. It now seems inevitable that our lives will be more interwoven with intelligent machinery that will shape, challenge, supplant, and amplify us, frequently at the same time. The question we should be asking is not whether this will happen but rather how we can shape these tools so that they shape us well— enhancing us in every sense of the word and making us more of what we most admire in ourselves. We may not be able to avoid a world of virtual reality pornography, ultra-smart missiles, and spies. But we can create a better version of collective intelligence alongside these— a world where, in tandem with machines, we become wiser, more aware, and better able to thrive and survive.

‘Big Mind: how collective intelligence can change our world’ by Geoff Mulgan is published by Princeton University Press. Geoff will be launching the book at Nesta on Thursday 7 December.

Author

Geoff Mulgan


Geoff Mulgan was Chief Executive of Nesta from 2011 to 2019.
