How can the fourth industrial revolution be made good?
The fourth industrial revolution (4IR) – the convergence and interpenetration of digital, bio, nano and info technologies and connected things – promises great benefits, from advances in mobile money and energy to homes and healthcare, for example combatting cancer. Large-scale deployment of digital technology, AI and automation should give us big gains in productivity, which should mean more prosperity – a bigger cake for everyone to share – as well as many social gains.
These are just some of the reasons why we are now seeing feverish excitement among investors, entrepreneurs and governments, with AI, and the 4IR, centre stage in industrial strategies from China to the UK.
Unfortunately, this isn’t a straightforward, or straightforwardly good, revolution. As many observers have pointed out, the 4IR risks widening the divide between vanguards and the rest; accelerating job destruction ahead of job creation; and introducing potentially serious threats to personal privacy and cybersecurity.
The first industrial revolution probably did more to benefit humanity than any other event in history (certainly as measured by its effects on life expectancy, income, and freedom).
But in its first decades it also did huge harm, driving millions into poverty, ill-health and vulnerability to crime in cities like Manchester and Chicago. Only when complementary innovations (like sewers), social innovations (like welfare states) and institutional innovations (like democracy) came along were the benefits widely spread.
The 4IR risks repeating some of the mistakes of the first. Some of these risks reflect fundamental distortions in how the revolution has been shaped. Much of the investment in the underlying technologies of the 4IR has been driven by the military – seeking better methods for surveillance or warfare – rather than better ways of meeting human needs.
All of the key technology fields that are now proving so exciting have been primarily driven by military or military related investment: machine learning; computer vision; robotics; and what we now call the Internet of Things (the one partial exception is natural language processing).
Meanwhile, although a good proportion of the new applications in business provide useful advances in mobility and efficiency, too many are more intriguing than useful. Examples like refrigerators that warn you when you need to buy more milk are emblematic of a technological revolution that risks being diverted into purposes that are either trivial or harmful.
Work with the World Economic Forum (through the Global Future Council on innovation and entrepreneurship) has suggested four fundamental shifts that are now overdue if the 4IR is to become a more beneficial revolution:
The first is a shift in ends and purposes. There is an urgent need to redirect investment in 4IR technologies towards the most important human needs – including healthcare, mobility and education – rather than warfare and advertising. Making labour markets work well, helping refugees integrate into new societies and reducing crime are all promising areas for investment that have so far received only crumbs of funding compared with fields like optimising recommendation engines or guiding missiles. Nesta’s investments in companies using AI for education or jobs are good examples. Making this shift will be good for society; it will also mean fewer failed investments for business.
The second is a shift in means and participation. The 4IR is largely being shaped by small groups of people in big companies, a few governments and universities. The rest of the population are observers. We need, instead, to open up the 4IR to millions of entrepreneurs, innovators, makers, and citizens, and use new tools to make it easier for them to shape this revolution. There are many examples of how this is being done well, from Nesta’s Longitude Explorer Prize (which backs 11-16 year olds with Internet of Things innovations) to the hundreds of maker spaces around the world.
The third is a shift of ethos, to humanise the 4IR. 4IR technologies don’t only risk making many people literally redundant; they also risk amplifying the worst sides of human nature – as has already happened with social media, which at times reinforces tendencies towards aggression, addiction and compulsive behaviour. We need to multiply applications that do the opposite, reinforcing our dispositions to cure, care and relate. We need a different ethics and aesthetics for technology, to make it more engaging and more emotionally intelligent in ways that are reciprocal rather than manipulative.
The fourth is a shift to take seriously the need for complementary innovations. Some of these will emerge in the field of regulation (like the many forms of anticipatory regulation we are developing); some will be social innovations (like new approaches to data of the kind being experimented with by Decode); and some are institutional innovations (like the Machine Intelligence Commission we have proposed).
There are many good examples of initiatives that embody the different spirit described here – in fields as diverse as farming and mental health, finance and care. But these remain, generally, small scale.
If these shifts don’t happen, people will understandably come to see the 4IR as a threat. That happened to many technologies in the past, from nuclear power to genetically modified crops. Their advocates assumed that the world would be grateful for technological breakthroughs, but too often they failed to ask basic questions about who stood to benefit and who faced risks. If that happens to the 4IR, regulatory and policy barriers will block deployment of next-generation technologies.
To avert that risk, these four shifts should also become tests – certainly for any public funding going towards the 4IR, but also for big firms’ own investments. Are they contributing to outcomes that really matter? Are the methods being used inclusive? Will the results enhance the best of humanity, not the worst?
The 4IR offers an explosion of tools for intelligence. In cultivating them, we shouldn’t suspend our own intelligence and forget to ask the questions that matter most.