This month, we take a look at the ways that collective intelligence can be applied at work - with positive and negative results. From more diverse teams to human-computer partnerships, we explore how the idea of ‘distributed cognition’ can help us to reimagine the potential of artificial intelligence and collaboration.

As always, if you’ve come across any books, research papers, articles or other resources that you think we should have featured in this blog, please let us know in the comments below.

The 'creative friction' of diverse ideas

The lone genius is dead. According to academic Scott E. Page, “the complexity of modern problems often precludes any one person from fully understanding them” (Scott E. Page, ‘Why hiring the ‘best’ people produces the least creative results’, Aeon). In the contemporary world, almost no workplace functions without some kind of collective decision-making. This makes your office, workshop, factory or lab an ideal context to explore the ways that people and machines can work together.

To solve ‘modern problems’, Page argues that organisations must orchestrate the diverse abilities of different people. In his view, this means they shouldn’t hire from the same pool. A team of people with similar backgrounds (and similar qualifications) is likely to converge on an approach too quickly when trying to solve a challenge. By settling on a particular way of thinking, homogeneous teams prevent themselves from exploring more innovative and more effective solutions. There may be an opportunity for AI to steer teams away from groupthink: one of Nesta’s Collective Intelligence Grant winners is exploring how ‘intelligent agents’ can nudge people away from their biases (‘Multi-agent systems for enhancing collective intelligence’, Nesta).

Homogeneous teams are less likely to have innovative ideas

Managing diverse teams isn’t easy. How do team members exploit their different viewpoints to produce results rather than frustration? Understanding how human interaction can be mediated to foster collective intelligence is a key challenge in the field, and digital tools like Pol.is are being developed to help people with opposing ideas to collaborate (Theo Bass, ‘Crowdsourcing for democracy using Wikisurveys’, Nesta).
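
To make the wikisurvey idea concrete, here is a minimal Python sketch - illustrative only, not Pol.is’s actual pipeline - of the core mechanism: participants vote agree (+1), disagree (-1) or pass (0) on short statements, and clustering the resulting vote matrix surfaces distinct opinion groups rather than a single majority view. The data and the tiny k-means routine are invented for the example.

```python
# Illustrative sketch of a wikisurvey vote matrix (not Pol.is's real code).
import numpy as np

# Rows = participants, columns = statements; +1 agree, -1 disagree, 0 pass.
votes = np.array([
    [ 1,  1, -1, -1],
    [ 1,  1, -1,  0],
    [-1, -1,  1,  1],
    [-1,  0,  1,  1],
])

def opinion_groups(votes: np.ndarray, k: int = 2, iters: int = 10) -> np.ndarray:
    """Tiny k-means over voting patterns; returns a group label per person."""
    rng = np.random.default_rng(0)
    centres = votes[rng.choice(len(votes), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each participant to the nearest centre.
        labels = np.argmin(
            ((votes[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2), axis=1
        )
        # Move each centre to the mean voting pattern of its group.
        for j in range(k):
            if (labels == j).any():
                centres[j] = votes[labels == j].mean(axis=0)
    return labels

print(opinion_groups(votes))  # e.g. [0 0 1 1]: two opposing opinion groups
```

Pol.is layers visualisation and statement routing on top of this kind of vote matrix; the sketch only shows the grouping step that lets opposing camps see each other clearly.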

In the view of the sociologist David Stark, innovations occur within ‘organizational forms’ that enable different - and sometimes competing - principles of value to be kept ‘in play’. Stark found that the cultivation of ‘organized dissonance’ to foster ‘creative friction’ was crucial to the success of the 21st century organisations he studied (David Stark, The Sense of Dissonance: Accounts of Worth in Economic Life). It might be for this reason that small teams (which are simpler to organise) have recently been shown to be better at generating disruptive ideas than large teams (Dashun Wang & James A. Evans, ‘Research: When Small Teams Are Better Than Big Ones’, HBR).

We never work alone: distributed cognition and cognitive ecologies

By demonstrating the importance of organisational forms - which he calls ‘cognitive ecologies’ - Stark reminds us that we rarely work alone when producing goods, services or knowledge. What if we reimagined ourselves not as individuals, but as parts of a larger cognitive unit?

This is the argument of Edwin Hutchins and Tove Klausen, who observed and analysed the work of commercial airline pilots (Edwin Hutchins & Tove Klausen, ‘Distributed Cognition in an Airline Cockpit’). You can observe this for yourself through the large and unexpectedly gripping genre of YouTube cockpit videos. The cockpit of a Boeing 727-200, as they describe it, requires a crew of three surrounded by dials, buttons and instruments.

“It is the performance of that system, not the skills of any individual pilot, that determines whether you live or die. In order to understand the performance of the cockpit as a system we need, of course, to refer to the cognitive properties of the individual pilots, but we also need a new, larger, unit of cognitive analysis. This unit of analysis must permit us to describe and explain the cognitive properties of the cockpit system that is composed of the pilots and their informational environment. We call this unit of analysis a system of distributed cognition”.

Hutchins and Klausen’s concept of ‘distributed cognition’ gives us a foundation for rich insights. It helps us to understand how extraordinary feats (flying isn’t a basic human capacity) can be achieved through the interaction of humans and machines. Once we recognise the complementarity of different types of cognition within a larger system, it becomes easier to see the value that artificial intelligence can create when it augments human abilities rather than replacing them.

'Distributed cognition' in action

Hutchins and Klausen are not the only people to have been inspired by pilots’ demonstrations of distributed cognition. The surgeon Atul Gawande recognised that doctors often make mistakes in complex procedures when their cognitive load is not distributed enough. His answer was to create a checklist, enabling surgeons to outsource cognition to another part of the operating system (pun intended). A low-tech solution, perhaps, but an example of effective collective intelligence (Atul Gawande, ‘The Century of the System’, BBC Reith Lectures).
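
As a toy illustration of why a checklist counts as distributed cognition, here is a minimal Python sketch. It is not anything Gawande describes, and the items are loosely paraphrased from the WHO Surgical Safety Checklist: the point is that the shared artefact, not any individual’s memory, holds the critical state, and the procedure is blocked until every item is confirmed.

```python
# Toy illustration: the list, not any one person's head, carries the state.
SURGICAL_CHECKLIST = [
    "Patient identity and consent confirmed",
    "Procedure and incision site confirmed",
    "Antibiotic prophylaxis given within the last 60 minutes",
    "Team members introduced by name and role",
]

def ready_to_proceed(confirmed: set) -> bool:
    """Block the procedure until every checklist item has been ticked off."""
    missing = [item for item in SURGICAL_CHECKLIST if item not in confirmed]
    for item in missing:
        print(f"BLOCKED - outstanding item: {item}")
    return not missing

# The surgeon doesn't have to remember the steps; the artefact does.
print(ready_to_proceed({"Patient identity and consent confirmed"}))
```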

Who takes the blame when collective intelligence fails?

Systems which rely on the distributed cognition of humans and computers can go wrong, sometimes with tragic results.

In a recent paper, Data & Society researcher M.C. Elish explores situations in which automated systems have failed (M.C. Elish, ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction’). In the cases she presents, accidents occur as a result of interactions between people and systems that were not designed with a clear understanding of human behaviour. It’s not intuitive to attribute responsibility to a ‘system’, so the blame for these incidents often falls on the operators. Elish calls this phenomenon ‘the moral crumple zone’, which ‘protects the integrity of the technological system, at the expense of the nearest human operator’.

The ambiguity of moral responsibility in complex human-machine systems is shown in the harrowing experience of Kevin Sullivan, the pilot of Qantas flight QF72 (Kevin Sullivan, ‘‘I’ve become very isolated’: the aftermath of near-doomed QF72’, The Sydney Morning Herald). The aircraft’s malfunctioning computers led to a violent and near-fatal flight. When Sullivan writes about the event, however, he describes how he finds himself in the ‘moral crumple zone’, unable to deny responsibility for a system that failed him.

“How can I explain to my passengers that the computers controlling the plane went berserk and in their confusion slammed the aircraft's nose down?...It's a bad state of affairs when the captain of his aircraft doesn't know the reasons for its violent behaviour.”

What do these events and experiences tell us about the design of collective intelligence systems for the workplace? Elish makes two key points that demonstrate the importance of designing human-computer teams with the humans who will be part of them:

  • First, we need to develop more nuanced ideas of accountability that recognise the relationship between humans and machines in the operation of a system. These ideas must be backed by new forms of regulation, and human operators should be consulted and involved when the lines of responsibility are drawn. A recent Nesta report cites ‘inclusive and collaborative’ as one of six central principles for ‘anticipatory regulation’ (Harry Armstrong, Chris Gorst & Jen Rae, ‘Renewing Regulation: ‘anticipatory regulation’ in an age of disruption’, Nesta).
  • Second, human-computer systems should be designed to expand ‘the value and potential of humans’. When people are recognised for their unique capabilities, automation can be designed to allow them to apply and develop new skills rather than overriding them. This principle is echoed by trade unions and the developers of new ‘cobots’ (Robert Wright, ‘How new wave of robotic automation is reshaping industry’, Financial Times).

Precision, productivity and power

In the wrong hands, collective intelligence can become a tool which disempowers people. This was the criticism made of Uber’s use of behavioural ‘nudges’ to increase driver productivity, which were revealed in a 2017 New York Times article (Noam Scheiber, ‘How Uber Uses Psychological Tricks to Push Its Drivers’ Buttons’, New York Times). Uber used knowledge about drivers’ goals and behaviours to design systems that made it more difficult for them to clock out of the app.

‘“Make it to $330.” The text [in the message from Uber] then explained: “You’re $10 away from making $330 in net earnings. Are you sure you want to go offline?” Below were two prompts: “Go offline” and “Keep driving.” The latter was already highlighted.’
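
The mechanics of the nudge are simple enough to sketch. The hypothetical Python below - Uber’s real system is proprietary, and the function name, goal value and payload shape are invented - shows the two design choices the article describes: a prompt triggered just short of an earnings goal, and a default pre-set in the platform’s favour.

```python
# Hypothetical reconstruction of the nudge pattern described in the article.
def earnings_nudge(net_earnings: float, goal: float) -> dict:
    """Build the prompt shown when a driver tries to go offline short of a goal."""
    gap = goal - net_earnings
    if gap <= 0:
        return {"show_prompt": False}
    return {
        "show_prompt": True,
        "headline": f"Make it to ${goal:.0f}.",
        "body": (
            f"You're ${gap:.0f} away from making ${goal:.0f} in net earnings. "
            "Are you sure you want to go offline?"
        ),
        "options": ["Go offline", "Keep driving"],
        # The telling design choice: the pre-highlighted default serves the
        # platform, not the driver, exploiting our bias towards defaults.
        "highlighted_default": "Keep driving",
    }

print(earnings_nudge(net_earnings=320.0, goal=330.0))
```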

More recently, Amazon has been accused of acting in similarly bad faith in its application of ‘gamification’ to its warehouses. Workers’ productivity is fed into computer games that prompt competition between individuals and teams. According to the Washington Post, some workers have responded positively, saying that the games relieve the boredom of their tedious tasks (Greg Bensinger, ‘‘MissionRacer’: How Amazon turned the tedium of warehouse work into a game’, Washington Post).

However, journalist Greg Bensinger highlights the potential for a more sinister impact: ‘If the games are helping to push workers to be more productive, it could make those who eschew them appear to be straggling’. In an environment where productivity is precisely monitored, stragglers may pay the price for not playing Amazon’s games (see ‘The Precision Economy’ in Benedict Dellot, Fabian Wallace-Stephens & Rich Mason, ‘The Four Futures of Work’, RSA).

Open Jobs: Collective intelligence for better labour markets

I want to close on a positive note. At Nesta, we believe that better use of data and the application of artificial intelligence can help society to shape the world of work for the better.

By using innovative sources of data about skills and work, from millions of job adverts (Jyldyz Djumalieva & Cath Sleeman, ‘Which digital skills do you really need?’, Nesta) to Wikipedia entries (Sanna Ojanperä, Mark Graham & Matthew Zook, ‘The Digital Knowledge Economy Index: Mapping Content Production’, The Journal of Development Studies), we hope to help people understand which skills are worth investing in, and to develop policies that help communities benefit from better options for work.

Our goal is to promote and support the development of new tools, dashboards and services that give people the information they need to navigate the labour market. Follow our Open Jobs project (managed by the Centre for Collective Intelligence Design) to learn more.

What else are we reading?

Sarah O’Connor, ‘Past mistakes carry warnings for the future of work’, FT

‘Online platforms that connect customers with workers seem to offer economists a rich source of reliable data on earnings. But as with 18th-century masonry, appearances in 21st-century online work can be deceptive.’

Marwa El Zein, Bahador Bahrami & Ralph Hertwig, ‘Shared responsibility in collective decisions’, Nature

‘Research investigating collective decision-making has focused primarily on the improvement of accuracy in collective decisions and less on the motives that drive individuals to make these decisions. We argue that a strong but neglected motive for making collective decisions is minimizing the material and psychological burden of an individual’s responsibility.’

Mary L. Gray, ‘The hidden global workforce that is still fighting for an eight-hour workday’, Washington Post

‘Ghost work, the name our team at Microsoft Research has given to work done by this largely invisible labor force, flourishes at the dynamic boundary where human intelligence and technology meet. Computer software can schedule a ride, but a human must drive the car (for the foreseeable future, at least). An app can help you order your food, but only a person can make it up your four-story walk-up and identify your apartment number in a dimly lit hallway. And algorithms can suspect a Facebook photo is pornographic, but often it takes a person to know if a line has been crossed.’

Miriam Posner, ‘The Software That Shapes Workers’ Lives’, New Yorker

‘Modern supply-chain management, or S.C.M., is done through software. The people who design and coördinate supply chains don’t see warehouses or workers. They stare at screens filled with icons and tables. Their view of the supply chain is abstract. It may be the one that matters most.’

Author

Jack Orlik
Programme Manager - Open Jobs, Data Analytics Practice