On 30 March 2016, Tata Steel announced plans to sell its steel plant in Port Talbot, Wales, putting its 4,000 workers at risk of redundancy. The reasons it gave were high Chinese steel imports, energy costs and weak demand. The decision threatened the town's economy because it was dominated by a single industry: steel. Port Talbot's economy wasn't resilient.
Why did it come to this? Why didn't we manage this process better, identifying and addressing the risks faced by Port Talbot's existing industries, and developing new ones?
After all, we seem to manage other complex systems, such as energy, telecommunications or (to a lesser degree; I live in Brighton, and commuting to London in recent months has not been easy) transport, reasonably well. Information systems and data help us manage this complexity. Why can't we do something similar with the economy?
In my FutureFest talk on 17 September, I talked about the idea of managing growth with better technology and data: past visions for this, why they didn't work, Nesta's work in this area today, and future prospects.
1971 is an interesting year for the idea of managing the economy with data.
In 1971, the Twenty-Fourth Communist Party Congress authorised OGAS, a nationwide automated control system. This project would create a national network to collect data about the Soviet economy and help to manage it. The initiative was led by Viktor Glushkov, a mathematician at the Cybernetics Institute in Kiev. The idea was that a network of computers in factories across the country would collect data about production. This data would then be analysed to determine who needed to produce what, and where. This would reduce the burden of managing the economy and address the 'economic calculation problem' identified by the Austrian economists Ludwig von Mises and Friedrich von Hayek: the fact that, without markets and prices, a command economy struggles to allocate resources efficiently.
Also that year, Chile's president Salvador Allende met Stafford Beer, a British cybernetician who convinced him to support Project Cybersyn: a collaboration between British and Chilean mathematicians, engineers and policymakers to design a system to manage the publicly owned part of the Chilean economy, which was growing with the country's transition to socialism.
Project Cybersyn consisted of several sub-projects: Cybernet, a telex network connecting factories with each other and with directors at the agency of industrial development (CORFO); Cyberstride, a software package to model the operation of factories and identify industrial problems; CHECO, a system to model the Chilean economy and experiment with new economic policies; and Opsroom, an 'operations room' where policymakers would gather to review all this data and decide on actions. There was even a 'Cyberfolk' project to install meters in the homes of a sample of the population, measuring people's happiness with the state of the economy.
None of these projects came to fruition.
OGAS was watered down by Soviet agencies that feared they would lose control if it became a reality, and the network became a patchwork. Project Cybersyn was cut short by Pinochet's coup in 1973.
It is fair to say that even if these political events hadn't interrupted the projects, they would have struggled to deliver on their vision because, frankly, the technology and the data weren't there. As Cosma Shalizi points out in a blog post about Red Plenty, Francis Spufford's book about cybernetic economics in the Soviet Union, the calculations required to optimise the Soviet economy would have taken hundreds of years to run on the computer hardware available in the 1970s.
Why not now?
The situation has changed a lot since the 1970s. Computing power has expanded at an incredible rate: according to Benjamin Peters, today's computers are a trillion times more powerful than those the Soviet Union had access to. We also have better networks, more data and better algorithms. And we have social media data that could perhaps be used to 'measure the happiness' of society, a bit like Project Cyberfolk.
Here at Nesta, we are exploring what this explosion of data means for innovation policy - the set of policies that support the creation of new ideas (for example by funding scientists and entrepreneurs), their combination (by connecting people in different disciplines) and their adoption (e.g. by encouraging businesses to invest in new technologies).
We are particularly interested in three things:
Using web data to identify new technologies and industries to support. For example, the chart above shows levels of tech event activity in three 'hot' tech topics: bitcoin (blue), deep learning (green) and VR (red). Policymakers can use this information to find trending tech areas, and communities of innovators to talk with and support. Our vision for this is something like a Google Trends or search engine for new tech.
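The underlying idea is simple to sketch: tally how often each topic appears in event listings over time, and read off the trend. Here is a minimal illustration in Python, using invented sample records (the topics and months are hypothetical, and a real pipeline would scrape event listings from the web):

```python
from collections import Counter

# Hypothetical tech-event records as (month, topic) pairs.
# In practice these would come from scraped event listings.
events = [
    ("2016-07", "deep learning"), ("2016-07", "vr"), ("2016-07", "deep learning"),
    ("2016-08", "bitcoin"), ("2016-08", "deep learning"), ("2016-08", "vr"),
    ("2016-09", "deep learning"), ("2016-09", "deep learning"), ("2016-09", "bitcoin"),
]

# Count events per (month, topic) to build a simple trend series.
counts = Counter(events)

def trend(topic):
    """Return the per-month event counts for a topic, in month order."""
    months = sorted({m for m, _ in events})
    return [counts[(m, topic)] for m in months]

trend("deep learning")  # -> [2, 1, 2]
```

A real version would add text cleaning and topic matching, but the Google Trends-style output is just this kind of time series, one per topic.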
Using social media and open data (e.g. Twitter) to map networks of collaboration. The chart above, based on data about projects funded by UK research councils in the area of environmental science, shows us which Welsh organisations are collaborating with each other, and where there are gaps in the network. Our vision here is to create 'innovation social graphs' that can be used to navigate these networks and generate recommendations to collaborate, along the lines of what Twitter or LinkedIn already do.
Using interactive data visualisations where people can ask their own questions of the data, and making the data open so other people can analyse it. We recently published The Geography of Creativity, a report about the location and growth of the creative industries in the UK. Alongside it, we published open data and interactive visualisations that anyone can use to explore the situation. For example, a town like Port Talbot could look at the state of its creative industries to see if this is a sector it would like to support in order to diversify its economy.
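This is the kind of question someone could ask of such open data. As a sketch, with entirely invented figures (the published data has its own schema; these records and numbers are hypothetical), computing a local sector's growth takes only a few lines of Python:

```python
# Hypothetical open-data records in the spirit of those published
# alongside the report (all figures invented for illustration).
records = [
    {"area": "Cardiff", "year": 2010, "creative_jobs": 9000},
    {"area": "Cardiff", "year": 2015, "creative_jobs": 12000},
    {"area": "Port Talbot", "year": 2010, "creative_jobs": 400},
    {"area": "Port Talbot", "year": 2015, "creative_jobs": 520},
]

def growth(area):
    """Percentage change in creative jobs between the earliest
    and latest year available for an area."""
    series = sorted((r for r in records if r["area"] == area),
                    key=lambda r: r["year"])
    first, last = series[0], series[-1]
    return 100 * (last["creative_jobs"] - first["creative_jobs"]) / first["creative_jobs"]

growth("Port Talbot")  # -> 30.0
```

The point of open data is precisely that this question, and any other, can be asked by anyone, not just by us.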
We are just getting started with this. What will the future look like? Will it be a Minority Report-style scenario where innovation policymakers can predict when someone is going to have a good idea, and intervene to make it happen faster and better, or nudge people into connecting with others who would make them more innovative? Will we be able to automate innovation policy and solve growth?
There are three reasons why this automated, real-time vision of innovation policy is unlikely to be fulfilled any time soon... and why, if it happened, it would probably be bad.
The Donald Rumsfeld problem: The first is that innovation policy is about completely new stuff, or 'unknown unknowns', in Donald Rumsfeld's words: scientists, technologists and policymakers come up with ideas that didn't exist before, and try to bring them to reality. This is a process full of serendipity and 'radical uncertainty', which means it is difficult to use data and models from the past to predict what will happen in the future - what new technologies will create value, and where? The Internet began as a network for academic collaboration; now people use it for Tinder and Snapchat. GPS was a military navigation system that has ended up being used to play Pokémon Go! If we based innovation policy only on data and algorithms, without creativity and intuition, we would by definition be unable to support transformative, 'disruptive' new ideas.
The Nathan Barley problem: The second reason is that premature policy interventions can skew the decisions of scientists, technologists and entrepreneurs, who often tend to 'follow the money', a bit like Nathan Barley with his geek pie haircut. It is perhaps good to have a lag between innovation activity 'on the ground' and the policy that supports it and removes barriers, in order to make sure that the new ideas that are eventually supported have legs.
The Hari Seldon problem: Third and last, innovation and economic policy aren't purely technical matters, to be solved like psychohistorical equations by 'high tech economist' Hari Seldon in Isaac Asimov's Foundation. They are also political: they concern the kind of society and economy we want to live in. Are we happy to embrace cryptocurrencies that enable anonymised transactions, or are we concerned that they will enable too many illegal transactions and too much tax avoidance? Do we want to automate all jobs and spend our lives in leisure, or do we think that working is intrinsically important for us as humans? Do we think that social interactions in virtual reality are the same as physical interactions? Are they better? Are they worse? Where do we draw the lines as a society? These decisions can't be automated, or left to data scientists. They need to be answered as part of a political process, through democratic debate, which is precisely what FutureFest is about.
So as you see, there is a lot of uncertainty about how far we want to go in using data to inform innovation policy. That is why we are running experiments like Arloesiadur, an analytics collaboration with the Welsh Government, to discover where the opportunities are, what the limitations of these methods are, and how they can be adopted in a way that perhaps won't solve growth, but will hopefully help us make innovation policy more effective. We'll keep you posted about what we find.
The image for the blog was created by Chameleon Design, at the Noun Project.