Five years ago, you would have reacted with justifiable scepticism to suggestions that your gut microbiome could hold clues to treating health conditions, or that machines could handle your divorce and competently assess your child’s academic performance. This year, Nesta’s predictions series is a perfect illustration of an old adage of futurology, that the absurd has an unnerving habit of becoming the ‘new normal’ seemingly overnight.
While it can be (relatively) straightforward to anticipate the path of progress for a specific technology, we find it much harder to imagine deeper shifts in cultural values and human behaviours, partly because doing so involves challenging our own internalised assumptions. So experts might have been able to predict the evolution of mobile technologies and 3G, but it was far harder to foresee the rise of services like Tinder built on those platforms, and the changes they ushered in for social mores around dating.
So if a ‘prediction’ or statement about the future doesn’t have a hint of outlandishness - if it doesn’t feel at least a little foreign to us now - then it isn’t serving one of its primary purposes: to generate alternative visions that can change our perception of the present. James Dator, an Emeritus Professor of Futures Studies, even codifies this principle in one of his three ‘laws of futures’, arguing that: ‘Any useful idea about the future should appear to be ridiculous.’
Nesta’s predictions series this year features a raft of technologies and practices which would once have been dismissed as pure science fiction, but which are now tipping over into mainstream acceptance. Take exams, that anxiety-inducing cornerstone of the education system. Toby Baker and Laurie Smith point to recent advances that make continuous, real-time assessment of students by artificial intelligence not only possible, but practical at scale. While these developments come with a host of ethical considerations, they argue that AI tools could transform what we assess as well as how we assess it - potentially broadening our measures to include analysis of teamwork or problem-solving skills.
In their piece exploring China’s efforts to build AI-enabled ‘city brains’ capable of managing entire urban systems, Geoff Mulgan, Eva Grobbink and Vincent Straub draw parallels with the 1960s space race between the USSR and US. The USSR’s launch of the Sputnik 1 satellite in 1957 notoriously sparked a crisis of confidence for America and prompted JFK to make the extraordinary claim that the US would put a man on the Moon by the end of the 1960s. This speech famously spawned the term ‘moonshot’ to refer to ambitious missions which, if achieved, redefine what is possible or indeed ‘normal’.
While the concept of an omniscient city brain sounds like it could have been dreamt up by George Orwell or HG Wells, it exemplifies a recurring theme from this year’s predictions: the integration of artificial intelligence into the fabric of our everyday lives. In his prediction, Matt Stokes considers how this brave new world is also generating new forms of uncertainty and anxiety for individuals. We would once have taken it for granted that asking our bank for a loan or applying for a job meant interacting with a human. But, as more and more decisions about our lives are controlled or mediated by opaque algorithms, Matt expects that it will become routine for us to demand the right to know when we’re talking to machines such as chatbots, and when a human is in the driving seat.
Also engaging with the darker side of the new normal, Caroline Purslow and Daniel Berman look at the growing numbers of people now confronting the reality of living with chronic infections caused by antibiotic-resistant superbugs. They predict that 2019 will be the year that this surge starts to hit home and manifest itself in our personal lives, our communities and our health system.
In their prediction, Teo Firpo and Laurie Smith observe that breakthrough innovations (like the idea of infecting people in order to vaccinate them against the same disease) often run contrary to received wisdom or logic. Their prediction for 2019 will also strike some as counter-intuitive: if innovation funding were more random, could it actually be more effective? In response to the biases and time-consuming nature of lengthy peer-review evaluation systems, they predict that more funding bodies will start experimenting with partially randomised approaches, in which promising research applications above a certain quality threshold receive funds by lottery.
Katja Bego also questions received wisdom in her prediction on the next stage of the post-truth era. While we routinely treat visual evidence as synonymous with truth (after all, ‘seeing is believing’), the rise of deepfakes could undermine our faith in material we view with our own eyes. A dramatic lowering of the barriers to entry (a deepfake of a well-known individual or politician can now theoretically be created by anyone with a consumer-grade computer and basic tech skills), combined with a febrile, polarised political environment, leads Katja to warn that we should be braced for a deepfake to spark a very real geopolitical incident.
Two of our predictions anticipate developments which are, perhaps, long overdue. While we’ve grown accustomed to the frenetic pace of technological change, Charlotte Macken points out that high R&D costs and a difficult path to market have meant that assistive technologies which support mobility (like wheelchairs, which date back as far as 525 AD) have often been much slower to develop. With the advent of the Fourth Industrial Revolution, she predicts that this could be about to change, as technologies such as eye-gaze control, AI and sensors pave the way for more intelligent mobility solutions. But as ever, this raises the question of how equitably the fruits of such a revolution will be distributed.
We now, rightly, tend to ask harder questions about who stands to benefit from technological innovation - a theme of last year’s predictions for 2018 was the ‘tech-lash’ against new power imbalances. A case in point is the gig workers of the platform economy, who have been at the sharper end of changes to our labour market in recent years.
In ‘The end of the week as we know it’, Georgia Ward Dyer considers this as just one of numerous trends converging to undermine the five-day, nine-to-five working week - an institution we take for granted, but which is actually a relatively recent 20th-century invention. The TUC is now calling for a four-day week, and many workers with the autonomy to do so are re-organising their working week to make more time for creative pursuits, volunteering or caring. Of course, these encouraging developments have to be set alongside the darker, more precarious experience of those juggling several jobs over the course of a never-ending working week. Georgia argues that unless we move to modernise our system of workers’ rights and protections, far from liberating the 21st-century worker, this trend could serve to entrench pre-existing inequalities.
As with several of this year’s predictions, we are approaching an important tipping point on this issue of work-life balance - a moment for reflection before people simply adjust to the new reality without asking difficult questions about who loses and who benefits. Far from setting out a single ‘accurate’ vision of how the future will be, most futures tools and methods are simply ways of engaging with this kind of uncertainty. Whether it be robolawyers or personalised management of our gut health, with our predictions series we’re looking at that particularly uncertain moment just before an idea, technology or trend makes the leap from the realm of the ‘ridiculous’ to the ‘normal’, when there is still scope to redirect its path.
As ever, let us know what you think about this year’s crop of predictions and tell us yours for 2019.