Accuracy and ambition - why do we try to predict the future?
When we launched Nesta’s predictions for 2016, I wrote a short post reflecting on why we have produced these over the last five years. This is the first of three posts that try to answer the questions I asked then. We’re hosting a debate on the 2016 predictions on 21 January.
This post covers two questions: What are we really doing when we make predictions about the things we work on? And is the accuracy of a prediction more or less important than setting out an ambitious vision for what you’d like the future to look like? Later posts will cover the remaining questions: the advantages and disadvantages of optimistic predictions, and whether writing predictions is a skill that more of us could benefit from developing.
The authors of Nesta’s annual predictions are not just trying to accurately describe events in the coming year. For some predictions, accuracy may not be a motivation at all. They are written to bring attention to an ambitious vision of a future different from today. The tension between accuracy and ambition is woven into the history of the terms anticipation and prediction. But are inaccurate predictions failures or are they just a special category in the history of futures thinking?
The Medieval Latin meaning of prediction or “predictio” was a foretelling - a description of what will happen. University of Nottingham Professor of Science, Language and Society, Brigitte Nerlich’s quick look into the history of the words prediction and anticipation tells us that modern uses of “prediction” (at least in relation to science and technology trends) are often closer to the early meaning of anticipation.
The Latin root of “anticipation” includes an expectation to take care of something ahead of time. Anticipation includes the idea of action based on what could happen, rather than a description of what will happen. Nerlich connects it to “simulation”. A simulation is a model used for the purposes of experiment or training. One definition of a model is “a way of thinking about the world we live in”. Prediction is now less strictly associated with foretelling the future, and more closely with developing scenarios for what the world could look like in order to better take care of that possibility ahead of time.
From plausible to preferable futures
One of the common heuristics of contemporary futures thinking is a cone of light spreading out from today into the future. In our Nesta report Don't Stop Thinking About Tomorrow: A modest defence of futurology we imagined it as a torch beam - drawing heavily on Joseph Voros’ version of the cone, originally developed by Clem Bezold. The light beam is divided into probable, plausible and preferable futures (a distinction Voros attributes to an article from 1978 by Norman Henchley).
A different version is the “cone of plausibility” used in US Military long-term planning documents from the late 1980s onwards. There is a useful explanation and illustration in the brilliantly titled Alternative World Scenarios for a New Order of Nations from 1993. Inside the cone are versions of the future that continue recognisable economic, social, tech and political trends from today. Outside are the no less important but harder to predict wild card scenarios, like a political revolution or natural disaster. The cone is a device for thinking through the different ways that current trends could affect an organisation's role in the future. In this case, the author Charles W. Taylor develops four plausible future scenarios of interest to the military, from America as a world peacekeeper to a muted multipolar world. Each emphasises different trends while remaining in “an envelope of potential (not predictive) evolving societal configurations”.
Voros’ cone adds two things to Taylor’s. He divides plausible futures into a very narrow, bright beam of highly probable “business-as-usual” futures and others that diverge from today's trends. He also adds a cross section of preferable futures that reflect an individual’s judgements of what they’d like the world to look like. Preferred futures can be predictions. They are a special category of predictions: continuations of trends that are visible today, which people think should rather than just could happen.
The addition of preferable futures reminds us to distinguish the reasons we decide to look into the future. A military analyst has different motivations to, say, someone weaving stories about the future to sell their latest big idea. And Nesta’s predictions have made for an interesting case study of how we throw together different motivations into a single format.
Different kinds of preference
In the original post on Nesta’s predictions series, I mentioned we’d started to analyse the first four years of predictions according to the relationship between the writer and the thing they predict. Initially, I was interested in distinguishing two kinds of preferred future expressed in our predictions: futures we were championing because we work on them; and futures we simply think are cool. While trying to tag the predictions with one of these two categories, it became clear that there is at least a middle category between them - when we are interested in studying an area, but not necessarily funding and championing it. Each prediction was given a score out of five spread across these three kinds of preference (the green columns in this Google sheet).
Yet this division into three categories still didn’t feel like it captured the different kinds of attitudes towards the future expressed in the predictions.
Inaccurate but important predictions
Some of Nesta’s predictions are hugely ambitious but not very accurate. By the Latin definition of prediction, these would be failures. They fail to foretell the future well. However, when I looked into the predictions that fell in this category I wasn’t sure we’d call them failures. They were often anticipatory in the sense defined at the top of this post: developing scenarios for what the world could look like in order to better take care of that possibility ahead of time. These predictions are ones the writer has a particular interest in, but they are not always preferred futures.
To pull out this subset of predictions, a score between +5 and -5 was given both to the actual change over the couple of years following each prediction and to the ambition of the prediction. These are the yellow and red columns in the same Google sheet, or the collection of results in the bottom right quadrant of this chart of all of the predictions:
NB: several of the plotted values are scores for more than one prediction. The line shows the line of best fit across all predictions.
Predictions with high ambition relative to their accuracy score below 1 in the blue (accuracy/ambition) column on the Google sheet. Here is a selection of these predictions (more detail on sheet 2):
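The scoring exercise above can be sketched in a few lines of code. This is a minimal illustration, not the actual analysis: the prediction titles and scores below are invented for the example, and the real data lives in the Google sheet linked in this post. It assumes, as described above, that accuracy and ambition each run from -5 to +5 and that an accuracy/ambition ratio below 1 flags an ambitious-but-inaccurate prediction.

```python
# Hypothetical predictions with accuracy and ambition scores (-5 to +5),
# mirroring the yellow and red columns described in the post.
predictions = [
    {"title": "Frugal innovation goes mainstream", "accuracy": 1, "ambition": 4},
    {"title": "A commons for health knowledge",    "accuracy": 2, "ambition": 5},
    {"title": "Business-as-usual forecast",        "accuracy": 4, "ambition": 2},
]

def is_ambitious_but_inaccurate(p):
    """Flag predictions whose accuracy/ambition ratio falls below 1.

    The ratio is only meaningful when ambition is positive, so
    zero or negative ambition scores are not flagged.
    """
    if p["ambition"] <= 0:
        return False
    return p["accuracy"] / p["ambition"] < 1

flagged = [p["title"] for p in predictions if is_ambitious_but_inaccurate(p)]
print(flagged)
```

One design choice worth noting: a ratio behaves badly around zero and negative scores, so the sketch simply skips non-positive ambition scores rather than dividing by them.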
They include ideas that Nesta wants to draw more attention to: frugal innovation in 2011, the idea of a commons for health knowledge in 2013 and the rise of a new kind of robot assistant in 2013 and 2014. They are not only things we are championing, but also things we think deserve more attention. The intelligent assistant software discussed in 2013 (Robot overlords) is partly a challenge to the often glib or hyperbolic characterisation of artificial intelligence. The prediction of a real-life Star Trek Tricorder in 2014 was more of a provocation about digital healthcare than an honest expectation.
Predictions as provocations
So in some cases at least, Nesta’s predictions are models of the world created for the purposes of engaging others. When I asked Anab Jain, founder of design practice Superflux, about what value she sees in predictions, she was keen to distinguish prediction from the future visions she creates:
“We are not in the business of predicting the future, but are more interested in speculation as a means of making space for people to consider their choices, decisions and form opinions about potential futures, and hopefully be better equipped to engage in the creation of a more democratic future world.”
My argument in this blog post is that some of our predictions are much more closely aligned with this idea than with the idea of predictions written because the author thinks they will come true.
I also asked Brigitte Nerlich directly about how she understands the value (or otherwise) of these kinds of exercises in short term future-gazing. Her reply is worth repeating in full:
“At the beginning of this millennium sociologists started to study the way that the creation of expectations are used in biotechnology to, as Nik Brown put it "mobilize the future into the present" (Brown 2003), in particular in order to attract financial support and investment. Such expectations, which can be based on predictions or promises, have a performative force. "Expectations are part of the world of action: they incite, block, justify. This can be further understood in narrative terms: expectations help shape the plot (and its further development) that guides actions and interactions." (Brown et al., 2003) However, expectations, especially hyperbolic ones, which are necessary to gain traction for the future to be mobilised into the present, can and do fail and can lead to disappointment and disillusionment. There is a dynamics to expectations, their creation, use and impact that needs to be better understood in a variety of contexts. So it is important to subject predictions and promises, hopes and hypes, to some sort of quality control. This is of course difficult but being aware of the dynamics of expectations created through predictions is a step in the right direction.”
Nerlich warns that the hyperbole required for ambitious but inaccurate prediction also has its dangers - the potential to get people's hopes up unnecessarily. It is also possible that we make worse decisions, driven by hyperbole, than if no prediction had been made in the first place. Indeed, fresh from research that shows the success of some everyday folk in making correct geopolitical predictions, Philip Tetlock has been pushing for a new initiative that holds pundits to account for their dangerous speculations. Dan Gardner’s book based on previous Tetlock research asked why we still believe experts even if we know they are probably wrong. The danger lies not just in making pronouncements, but in the fact that their clarity is often more attractive than the messiness of what will likely lie ahead.
This potential failing of very public, ambitious statements about the future will be explored in more detail in next week’s post on the third question: are we missing something by (mainly) describing optimistic futures rather than listing future challenges?
Image credit: Ironing drone by Max Cougar Oswald & Nihir on the Noun Project via Creative Commons