With several of Nesta’s 2018 predictions tackling the rise of intelligent machines, Celia Hannon looks at what humans can still bring to the equation.
It will come as little surprise that several of this year’s predictions for 2018 anticipate the impact of artificial intelligence on everything from healthcare to artistic practice. After all, this was also the year that speculation about automation tipped over into the mainstream, fuelling fears about the obsolescence of human workers.
Even forecasting itself is far from immune - a team at MIT recently used deep learning to build a program that can generate a ‘plausible’ video of the immediate future on the basis of a still image. Now that machines are carrying out predictive analysis in fields as diverse as criminal justice, city planning and medicine, no doubt we will be asked why we, as fallible humans, play the perilous game of prediction on a yearly basis.
But, in the case of foresight (as in many other fields) the old ‘humans vs robots’ story is a misleading one - the novel developments will likely come from the intersection between the two.
In his seminal book on the field of forecasting, Philip Tetlock acknowledged that well-validated statistical algorithms have repeatedly been shown to beat subjective human judgement. While that holds true in many cases, it bears repeating that we often want to explore questions well beyond the reach of statistical models. It is into this far messier territory that Nesta’s predictors venture (this year looking at the trends redefining the limits of the nation state, and how cities might adapt to drones).
Even where there is such a thing as a ‘right answer’ which could conceivably be found by a perfectly constructed model, it falls to humans to define the questions we want to ask about the future. This is where machines, acting alone, still fall short. A recent blog on the emerging discipline of collective intelligence from Geoff Mulgan highlights a few of the reasons why:
“Digital technologies have developed to be good at answers and bad at questions, good at serial logic and poor at parallel logic, and good at large-scale processing and bad at spotting non-obvious patterns.”
Relatedly, another limitation of machines is their reliance on past data to draw conclusions about the future. While good data is always an essential ingredient of any prediction, we can’t overlook the role of human creativity and imagination in envisioning alternative futures - the past may be a poor guide to the future in many cases (consider, for example, Bob Dylan winning the Nobel Prize in Literature for his songwriting).
As we’ve discussed at length in previous years, neither are Nesta’s predictors simply in pursuit of accuracy (if that were the only goal, the predictions series would make for very dull reading). A prediction may be of value because it ignites a debate about a below-the-radar trend or because it galvanises people to take action. Many Nesta predictors work in the field as researchers or practitioners (see Peter Baeck’s predictions on crowdfunding), and in some cases are working to accelerate the very change they ‘predicted’.
Sketching out an idea of the future builds momentum - an idea which evolves as others build on it and integrate it into their own thinking. Writing about this, Tim O’Reilly recently used the metaphor of map-making in his book What’s the Future?
“Finding our way into the future is a collaborative act, with each explorer filling critical pieces that allow others to go forward.”
All of which means that machines are not likely to usurp human forecasters any time soon. News which ought to come as a relief to this year’s crop of predictors.
Back in 2013, the Economist’s Adrian Wooldridge foresaw the coming of a ‘techlash’ against the untrammelled power of Silicon Valley’s plutocrats. Several of our predictions this year would suggest that this so-called ‘techlash’ is coming of age, as collective movements, regulators and others move decisively to tackle power imbalances.
Recent surveys would also seem to support the idea that the tide of public opinion on this question is on the turn - with one survey this year finding that two-thirds of people are concerned that MPs aren’t taking sufficient action to safeguard against the impact of technological change. If the past decade has been defined by big players pioneering a tech revolution, we may now be seeing the seeds of a counter-reformation.
As part of this year’s series, Alice Casey and Peter Baeck reflect on a year which saw policymakers and workers push back against the gig economy and the business models which power the likes of Uber and Airbnb. They expect that 2018 will see increasingly mature experiments in new models of owning and organising - from the rise of ‘WorkerTech’ for underrepresented workers to platform cooperatives designed to distribute the value created by platforms more equitably. Meanwhile, Katja Bego sounds the alarm about the unsustainable environmental cost of our love affair with digital technologies, predicting an uptick in movements to ‘green the internet’ in the year ahead.
Looking more broadly, Geoff Mulgan sees governments moving to regulate AI, in recognition of the need to tackle the "huge asymmetry between those using the algorithms and those whose lives are affected". Chris Gorst believes a trend which first started with ‘open banking’ will spread to other sectors as regulators increasingly wake up to the power of putting data back into the hands of consumers.
Making a prediction is also an opportunity to look at the undesirable - or unintended - consequences of a trend before it becomes part of the ‘new normal’. In her prediction, Lydia Nicholas anticipates a privacy backlash against the proliferation of machines which are capable of reading our emotions via facial signals. How comfortable will we be with these tools in the hands of more or less scrupulous advertisers and employers?
While we’ve gradually come to accept (if not necessarily like) the fact that Google knows what we search for, who we communicate with and where we go - are we really ready for it to take ownership of our health data? John Loder highlights a scramble by tech giants to establish first-mover advantage in developing algorithms for health. He envisages a moment in the not-too-distant future when an Apple or a Google buys a healthcare provider to capture higher-quality data to train its algorithms. In common with so many other fields, the benefits of predictive analytics are appealing, but the risks (on issues such as patient privacy) are also real.
While the rose-tinted glasses may be off, the outlook on 2018 from our predictors is far from gloomy. Also featured in this year’s series is a duo of pieces exploring the idea that machines and simulated models have the potential to augment capabilities we think of as intrinsically ‘human’ - such as creativity and judgement. Florence Engasser and Sonia Tanna look at this from the perspective of policymaking and simulation, while Georgia Ward-Dyer considers the case of creative co-production between artist and machine.
The potential of hybrid models to augment human intelligence holds equally true in the case of forecasting. Reflecting on this, Tetlock has also argued that we will need to: “Reframe the man-versus-machine dichotomy: combinations of Garry Kasparov and Deep Blue may prove more robust than pure-human or pure-machine approaches.”
While Nesta’s own predictions are not the product of algorithms, neither are they the pronouncements of lone visionaries. The authors have looked to signals emerging from work with colleagues and collaborations with partners, grantees and researchers. Above all, we’re more interested in opening up a conversation than in being right (or at least, when we get asked how accurate we were this year, that’s the story we’ll be sticking to…).
That’s why we want to know your predictions for 2018. What do you think might happen? What do you want to happen?
Please tell us on Twitter or give your views in the comments section beneath this blog.
Illustration by Peter Grundy.