Making the world safe for disruptive technologies – lessons from the Hundred Years War

Today I was invited to speak to senior civil servants at HM Treasury’s Policy Excellence Group about ways to tackle the productivity puzzle. This is a version of what I said.

I’m going to talk today about how to increase productivity, and in particular how the UK can encourage innovation, which has historically been responsible for about two-thirds of economic growth.

But rather than recommend a bunch of policies that you’re no doubt already well aware of, like public support for R&D, tax credits or Catapult Centres, I’d like to start somewhere entirely different: fourteenth century England.

One of the delights for any English schoolchild reading about the late Middle Ages is the thumping victories the English (together, significantly, with the Welsh) scored over the French. At Crecy, at Poitiers and at Agincourt, the swaggering French nobility in their fine armour would bear down on the English army, only to be humbled by swarms of longbow arrows, fired at a devastating rate by the stout yeomanry of England (or, more often, of Wales – but that tends to get left out in the telling).

Now, as well as being a good yarn for bloodthirsty schoolkids, this contains an interesting lesson about technology and how it works. The longbow was certainly a very effective military technology (for example, you can shoot arrows much faster with a longbow than with a contemporary crossbow). But in order to use the longbow, the English had to do much more than just invent the things* and hand them out.

A new paper by Douglas Allen and Peter Leeson asks why the English fielded such demonstrably effective armies of longbowmen, whilst their opponents, the French and the Scots, didn’t, despite losing to those armies time after time. Allen and Leeson’s paper focuses on the institutions the English needed to have in place to make an army of longbowmen feasible.

While you can take issue with some of their conclusions, it’s pretty clear that to adopt the technology of the longbow, England needed to make a range of complementary investments (such as the life-long training that longbowmen required). It also required a range of social institutions to make these investments possible (the social structures to keep a lot of martially-minded non-noble men who knew how to use dangerous weapons out of trouble in peacetime).

So in short: England didn’t win the Battle of Agincourt because it invented the longbow, or because the French didn’t. It won because it adopted the innovation of the longbow successfully; to do that, it needed to make lots of complementary investments, and to have a set of institutions that allowed those investments to be made.

The story of the longbow has some important implications for economic growth in the twenty-first century.

It’s widely agreed that innovation drives productivity growth. Nesta’s Innovation Index suggested innovation in its broadest sense was responsible for two-thirds of UK economic growth; other studies have found something similar.

Specifically, adopting innovations and making use of them is what matters. Consider the US in the 1990s, where the adoption of computers by the retail industry and the changes they wrought to supply chains had a far greater effect on productivity than the growth of the computer industry itself.

We know that to adopt innovations effectively, the economy needs to invest not just in scientific knowledge (R&D), but also in other intangibles: things like product and service design, software development, organisational change, and even marketing. (Indeed, for every pound UK businesses spend on R&D, they spend £8 on these other intangibles.)

There are a number of things that might hold firms back from investing in intangibles that relate to innovation. Let’s consider two important ones: finance, and the right institutions.

Perhaps it is obvious that for firms to invest in intangibles, they need finance. Since some of these firms will be new or high-growth firms (which you’d expect, given that Clayton Christensen’s innovator’s dilemma makes it psychologically hard for incumbents to adopt radical innovations), many of them will not be able to finance innovation investments from retained earnings.

So you need the right financial architecture, with a good mix of risk capital and growth capital. We know from the UK and from other countries, from Israel to the US to Finland, that government can play a role in creating this architecture. Lead customers who are willing to place orders for novel products can help too; in some cases the lead customer may be the government itself, so make sure government is a smart procurer of innovations. I won’t dwell on this, since it’s mostly pretty well known.

What’s perhaps less obvious is the importance of good institutions. They clearly played a role in the ability of the late medieval English to adopt the technology of the longbow. So what institutions do we need to encourage the adoption of innovation by firms in Britain today?

Part of the answer will be familiar from Economics 101: overzealous regulation is clearly to be avoided. There’s evidence that high levels of product market and labour market regulation discourage investment in intangibles at both a country and a sector level. If you protect incumbents’ business models, firms will be slower to invest in innovation. So the first and most obvious bit of institutional advice is: don’t deliberately regulate innovation out of existence, and don’t tolerate monopolists and cartels.

But that’s just part of the challenge of good institutions. Trust and consent are also important. If we want innovations to be broadly adopted, we need people to trust them, and that requires good rules and good social norms. GM crops provide a good example of what happens when this trust is mishandled.

Consider machine learning and data analytics. It’s widely believed that these technologies can change many industries for the better, from health care to call centres. But using large-scale data analytics in healthcare raises hard questions. What data should people be allowed to share? Who should be able to use it, and how? What privacy rights should people have? (For discussion of more of these questions, check out Nesta’s blog series In the Shadow of the Smart Machine and our event Computer Says No!)

These questions about how technologies can be used are not just idle speculation – if we don’t come up with answers to them, people aren’t likely to trust or accept the use of big data in health. And if people don’t trust it, sooner or later they are likely to prevail on their representatives to pass laws severely restricting it (as many countries have done with technologies from ride-sharing to genetically modified crops). In the meantime, businesses will be wary of investing. A lack of social buy-in creates risk for would-be investors and adopters of a new technology.

This means that, all other things being equal, a country that can have a mature debate over how important technologies are used, and can come to terms with the trade-offs involved, is likely to be better at adopting radical and disruptive technologies than countries that can’t have these conversations. (This may be true even if the regulatory regime in the latter countries is superficially more liberal – because who’s to say that a liberal but unpopular regulatory regime might not be tightened in the future?)

The UK Government has a good record in public engagement exercises on issues around emerging technology, and a new Royal Society policy study is likely to include similar activities on machine learning. But given the level of business interest in machine learning, there could be an opportunity to take these debates inside the development of new products and services – to test out ideas for products that can evolve quickly in response to users. So far we have accessible visualisations of machine learning systems and art installations that show the structure of algorithms, but these attempts at public transparency are a long way from a social licence to operate for the most exciting technologies around today.

An example of how this can go right is the British experience with new reproductive technologies and stem cells (so-called “red biotech”). In the words of Jack Stilgoe, “the UK held early discussions among ethicists and public groups and constructed a well-respected regulatory regime which has meant that the science and technology have been able to proceed with confidence and public trust.” The counterexample is GM crops, where the technologies have faced significant public resistance.

Who gains from innovations may come to matter too. It’s widely speculated that some emerging technologies may destroy large numbers of jobs – the so-called robot revolution (although it has much more to do with ICT in a general sense than with what any normal person would think of as robots).

If this is true, people may rapidly come to oppose new technologies if it’s perceived that the benefits are too unequally shared. Latter-day Luddites may not need to break machines: if there are enough of them, they can use their democratic power to get the machines banned.

So another part of the adoption puzzle is creating the right social institutions to share the gains of technologies. At the moment, we largely do this through taxes and benefits.

But it’s not impossible that in the future we might want to do more. Finland’s basic income experiment is being promoted as a possible answer to technologically-induced unemployment (though it’s unlikely we’re anywhere near that state yet). In the nearer term, government may want to invest heavily in skills training and careers services for those whose livelihoods are disrupted by technology – perhaps we should be thinking in terms of a 21st-century version of Germany’s Hartz reforms, one that increases the flexibility of the workforce in a way that builds resilience and future productivity.

So the overall message is: if you want growth, you need widespread adoption of innovation. This requires finance of various sorts, but more importantly it requires the right institutions: competitive markets and open-minded regulation, to be sure, but perhaps more importantly an open dialogue on the risks and opportunities of innovation, and a willingness to engage with both the winners and the losers of technological disruption.

The ethics of machine learning may seem a long way from the fifteenth-century English yeomanry and their bows. But there is one constant: for a society to profit from technology, it needs good institutions.


* In fact, it’s pretty clear the English pinched the idea from the Welsh, and archaeological evidence suggests that the idea of making bows that are very long dates back to prehistory.

Author

Stian Westlake

Executive Director of Policy and Research
