We are building a formidable system for measuring science - but what about innovation?

'Our job is to create new connections, but how do we prove their value?'

'Measurements are short-term but our impacts are long-term'

'The way we are measured discourages us from taking risks'

These are some of the things you are likely to hear if you spend time talking to people running programmes and interventions to support innovation. What these innovation practitioners are saying is that the metrics used to evaluate them are not the right ones, and that this makes their jobs harder.

But what would be the right metrics? And how do we get them? Are there any downsides to measuring things that are currently below the radar?

Here, I explore these questions by first looking at the situation in the adjacent domain of scientific research, where I argue that a new wave of data about scientific activity (scientometrics) is making it easier and easier to measure and track scientific inputs, processes and outputs with an unprecedented level of detail and timeliness.

Unfortunately, things have not moved as fast in the field of innovation (where new ideas are applied by businesses, government and the third sector), and, as a result, our understanding of how innovation happens, and of the best ways to support it, remains underdeveloped. I outline the reasons why things are different in science, and sketch strategies to generate better data about innovation, helping our measurement of innovation (innometrics) catch up with scientometrics [1].

Creating a science of science policy

I was inspired to write this blog after attending several conferences and seminars over the last few months where I was impressed by the volume and quality of data and the sophistication of methods that scientometrics scholars are deploying to answer big science policy questions.

At the OECD Blue Sky forum in September last year, I saw very interesting presentations about scientific collaboration and researcher mobility. I was amazed by the Atlases of Knowledge produced and curated by Katy Börner and collaborators, who use advanced analytics and information design to create detailed maps of scientific research and the connections between different research domains.

See, for example, the map below (based on this paper). It displays the relationships between sub-fields of medical sciences based on an analysis of over 2 million papers. Researchers and funders can use these maps to navigate vast oceans of science, and identify potentially interesting papers and connections between fields.

Source: Skupin, Biberstine, and Börner (2013)

At an expert workshop organised by the European Commission in Brussels, I learnt about RISIS, the Research Infrastructure for Science and Innovation Studies, being developed by a consortium of European researchers with support from the Commission. RISIS is a set of linked datasets and a secure lab for the storage and analysis of microdata about scientific activity.

Only a couple of weeks ago, I attended FuturePub, a meetup organised by sci-tech company Digital Science, where their CEO, Daniel Hook, presented 'The Connected Culture of Collaboration', a report with a new analysis of scientific collaboration based on data from Overleaf, a platform for collaborative scientific writing and publishing.

The beautiful graph below, taken from that report, shows connections between institutions in different countries ('Red' denotes North American countries, 'Blue' European ones, 'Purple' South American, and 'Yellow' West Asian).

The authors point out that the high levels of collaboration between European countries are in part driven by EU research funding, which encourages researchers across Europe to work together. You could imagine data and visualisations like these helping to answer big policy questions, like the impact of Brexit on the UK’s position in international research networks.

Source: Calvert and Hook (2017)

I have benefitted from the new wave of scientometrics in my own work. Gateway to Research (GtR), an open dataset of projects funded by UK research councils and innovation agency Innovate UK, is an important input into Arloesiadur, the innovation data dashboard we are developing for Welsh Government.

GtR contains detailed, linked information about tens of thousands of projects and organisations. We are analysing these data with machine learning, natural language processing and network science to track the emergence of research topics and technologies, identify areas where Wales has a comparative advantage, and detect new opportunities for collaboration between Welsh researchers.
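
To give a flavour of what this involves, here is a minimal sketch of one way to extract topics from project abstracts with off-the-shelf tools (TF-IDF plus matrix factorisation). The abstracts are toy examples and the pipeline is illustrative, not the actual Arloesiadur method.

```python
# Minimal sketch: extract 'topics' from project abstracts.
# The abstracts are toy examples, not real GtR records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

abstracts = [
    "Machine learning methods for medical image analysis",
    "Network analysis of collaboration in the creative industries",
    "Deep learning for protein structure prediction",
    "Creative industries policy and regional economic growth",
]

# Turn each abstract into a weighted bag of words.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(abstracts)

# Factorise the document-term matrix into a small number of 'topics'.
nmf = NMF(n_components=2, random_state=0)
nmf.fit(X)

terms = tfidf.get_feature_names_out()
for i, component in enumerate(nmf.components_):
    top_terms = [terms[j] for j in component.argsort()[-3:][::-1]]
    print(f"topic {i}:", top_terms)
```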

The graph below shows a preliminary network of research topics based on these data. In this chart, research topics which appear often in the same projects are pulled closer together. The colours represent the broader scientific disciplines where each topic sits.

Although scientific disciplines are easy to make out, they are far from siloed. In particular, applied and practical research topics like 'media_design_creative', 'management', 'process_mechanical_engineering' and policy applications of environmental and life sciences research seem to provide bridges between disciplines. This suggests that such projects create value by connecting disciplines to address practical problems, as well as by providing solutions to those problems.
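
For readers curious how a topic network like this can be assembled, here is a minimal sketch, assuming each project comes tagged with a set of topic labels. The data and labels below are illustrative, not the real GtR records.

```python
# Minimal sketch: build a topic co-occurrence network from tagged projects.
# The projects and topic labels are illustrative stand-ins.
from itertools import combinations
from collections import Counter

import networkx as nx

# Each project is represented by the set of topics it has been tagged with.
projects = [
    {"synthetic_biology", "machine_learning", "ethics"},
    {"machine_learning", "management"},
    {"synthetic_biology", "ethics"},
]

# Count how often each pair of topics appears in the same project.
pair_counts = Counter()
for topics in projects:
    for a, b in combinations(sorted(topics), 2):
        pair_counts[(a, b)] += 1

# Topics that co-occur often get heavier edges; a force-directed layout
# will then pull them closer together, as in the chart above.
G = nx.Graph()
for (a, b), weight in pair_counts.items():
    G.add_edge(a, b, weight=weight)

print(nx.to_dict_of_dicts(G))
```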

Innometrics are lagging behind scientometrics

Unfortunately, our ability to measure the innovation processes through which new ideas (including those generated by scientific research) are applied has not expanded and improved at the same rate. Hasan Bakhshi and I set out the problems with the data status quo for innovation policy in a working paper presented at the OECD Blue Sky conference last year, so I will only skim through them now:

Much current innovation research relies on innovation surveys, such as the Community Innovation Survey. These (high quality, to be clear) surveys suffer from small sample sizes and hard-to-compare responses; they lack detailed information about business collaboration and trade, and they cannot be used to identify individual businesses. This makes them less useful for researchers and policymakers looking for detailed information about specific locations, industries and networks, or for individual businesses to engage with (rather than sectoral aggregates or averages).

Patents can plug some of these gaps, but only a minuscule share of businesses in science-based and tech-intensive sectors patent. Less than 1 per cent of respondents to the UK Innovation Survey say that patents are highly important for protecting their innovations.

Micro-level administrative data about company financial performance are increasingly available through secure data labs generally maintained by National Statistical Agencies. Unfortunately, these data tell us a lot about company performance but not so much about innovation.[2] As before, they lack information about business networks and are (understandably) anonymised.

What explains these differences?

We have better data about science than innovation for three simple reasons: scientific activity is easier to measure, what is measured is shared more openly, and what is shared is easier to integrate into a unified view of the science system.

Let’s go through these three things in turn.

1. Regarding ease of measurement: Although few people would argue that science is a simple system (if in doubt, go back to the graphs above), its outputs are less varied than is the case with innovation.

To put it crudely, academic researchers generate papers.[3] By contrast, innovation involves new products, services, processes, business models, ways of organising, and ‘soft’ (e.g. artistic) innovations that differ greatly across industries. There is no single database, like Scopus, where one can go to look for information about all these innovations.

The high visibility of citations, the currency of science, also makes it easier to map collaboration and influence networks in academia than in industry. There, many, if not most, flows of information and people leave no paper trail, or leave bits of a trail scattered across separate or proprietary datasets, such as the customer relationship management systems that businesses use to keep track of sales and purchases, or professional networks like LinkedIn.

2. This brings us to another feature of the scientific system driving the new wave of scientometrics: openness. Science is funded by a small number of (primarily) public and third sector organisations which collect vast amounts of operational data about ‘inputs’ (funding, characteristics of the scientific workforce, descriptions of projects etc.), and which are increasingly opening up these data to lower barriers to knowledge access and re-use. Papers, too, are becoming easier to obtain outside journal paywalls.

The situation is very different in the case of innovation, where government has been slower in releasing data about participants in innovation programmes, and companies with valuable innovation data have few incentives to release it.

3. On integration: the science system has made big strides in developing and adopting unique identifiers that make it possible to connect information across databases. These include the Digital Object Identifier (DOI) used to identify content (e.g. papers), ORCID, a persistent digital identifier for researchers, and GRID, a catalogue of the world’s research organisations.

Thanks to this metadata infrastructure, scientometrics researchers can smoothly pull together information from many different sources to get a comprehensive view of the science system. Such global standards are less developed in the innovation domain. For example, the Global Legal Entity Identifier (GLEI), which uniquely identifies businesses, covers just over 30,000 registered companies in the UK, a minuscule proportion of the UK business population.

Helping innometrics catch up with scientometrics

This stuff matters. As things stand today, there is a big risk that innovation policy decisions are based on the wrong data, and that bad metrics expand to fill the void created by the absence of good ones, potentially creating a topsy-turvy world where successful programmes appear to have failed and vice versa.

Unsurprisingly, poor innovation data also slows the use of big data, data science and Artificial Intelligence (AI) methods in innovation policy and practice. By contrast, bigger and better scientometric datasets are powering a burgeoning sci-tech scene that includes Digital Science (mentioned above), as well as other startups like Benevolent AI, Meta or Yewno, all of which use AI to enhance scientific discovery and collaboration.

And to make things worse, the mismeasurement of innovation creates negative spillovers in science policy, because low-quality innovation metrics make it harder to measure the impact of public investments in science, which are increasingly justified by the prospect of economic growth and jobs. There is even the risk that a lack of information about the subtle ways in which scientific research relates to innovation might lead policymakers to focus on cruder measures of scientific impact, such as spin-outs or Intellectual Property licensing from university to industry. Like the proverbial drunk looking for his keys where the light is, we could end up focusing on those scientific impacts we can measure more easily, instead of the ones that matter.

This will not do. We know that innovation is vital to improve productivity, rebalance the economy, reduce economic inequality and tackle big societal and environmental challenges – but this requires effective policies to support and regulate it, based on the right data and metrics. As European Commissioner Carlos Moedas pointed out in his speech at the OECD Blue Sky conference: “Data is the fuel that [innovation] policy runs on. Without it we cannot know if we are making the right decisions.”

How do we move forward?

There are three broad fronts of action, related to the ideas of measuring, opening up and integrating that I've set out above.

First, we need to extend the use of new data sources and data science methods to measure and map innovation. Like everyone else, innovators leave a digital footprint in the websites and services they use to raise finance, network, collaborate, recruit, market and sell.

We need to actively analyse those sources to measure innovation inputs, activities and outputs. This is what we have started doing in projects such as Tech Nation, The Geography of Creativity or Arloesiadur, with promising results. Now we need to start pushing these methods into the policy mainstream, and building trust around their use. The work we are doing with NIESR and other partners in the Economic Statistics Centre of Excellence, set up by ONS, has precisely this goal.

Another potential strategy to measure innovation better would be to ‘nowcast’ business innovation with web data. One could, for example, run a large survey to measure business innovation and then look for good predictors of those metrics in other, timelier matched data sources, such as the websites of the respondents (in the language of machine learning, we would be training a model on a dataset labelled via the survey). We could then use what we learn about the link between proxies and metrics to estimate the probability that other businesses we have not surveyed (but which display innovation ‘signals’) are innovative, and to track changes in the situation faster and more frequently than is possible with big and expensive surveys.
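
As a rough illustration of the idea (and only that), the sketch below trains a classifier on businesses whose innovation status is known from a hypothetical survey, using made-up features extracted from their websites, and then scores businesses that were never surveyed. The features, data and model choice are all assumptions.

```python
# Minimal sketch of the 'nowcasting' idea: train on survey-labelled businesses,
# then score unsurveyed ones. Features and data are illustrative, not real.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features derived from company websites
# (e.g. counts of R&D mentions, new-product pages, job ads for developers).
X_surveyed = np.array([[1, 0, 3], [0, 1, 0], [2, 1, 5], [0, 0, 1]])
y_surveyed = np.array([1, 0, 1, 0])  # 1 = reported an innovation in the survey

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_surveyed, y_surveyed)

# Estimate how likely businesses outside the survey sample are to be innovative,
# based on the same web 'signals'.
X_unsurveyed = np.array([[1, 1, 4], [0, 0, 0]])
print(model.predict_proba(X_unsurveyed)[:, 1])
```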

Second, we need to find a way to open important innovation datasets that are currently closed. This applies to the public sector, which should, almost by default, open data about the companies that participate in innovation programmes (something that Innovate UK is already doing via Gateway to Research), as well as business registries and even, within reason, administrative datasets. Government interventions specifically designed to create high quality data, such as those advocated and run by the Innovation Growth Lab, are an important element of this mix.

On the private sector side, some platforms like GitHub, Meetup or Twitter are quite open with their data, which can be accessed through open Application Programming Interfaces (APIs). Others, like LinkedIn, whose data offer amazing opportunities to study labour flows, business networks and innovative capabilities, remain closed.
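
As an illustration of how open those APIs can be, the sketch below pulls some basic ‘signals’ from GitHub’s public REST API. The organisation name is just an example, and unauthenticated requests are subject to low rate limits.

```python
# Minimal sketch: pull open innovation 'signals' from a platform API.
# The organisation ('nestauk') is only an example; swap in any organisation
# of interest. Unauthenticated calls to the GitHub API are rate-limited.
import requests

resp = requests.get(
    "https://api.github.com/orgs/nestauk/repos",
    headers={"Accept": "application/vnd.github+json"},
    timeout=10,
)
resp.raise_for_status()

for repo in resp.json():
    # Repository names, languages and star counts are crude proxies for what
    # an organisation is building and how much attention its work attracts.
    print(repo["name"], repo.get("language"), repo["stargazers_count"])
```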

Are there any regulations, incentives and technical systems (including data sharing through secure data services) that could be put in place to encourage a more systematic sharing of this information to inform innovation and economic policy?

Third, we need to integrate datasets to get a unified view of the innovation system. How can we measure innovation comprehensively if data about content innovations exist in Apple’s App Store, data about digital innovations in collaborative coding sites like GitHub, and data about organisational innovations in job reviews sites such as Glassdoor?

To merge all these datasets efficiently, we need unique identifiers telling us that innovative startup X in GitHub is the same one that participated in innovation programme Y and accelerator Z, along the lines of what already exists in science.
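
Here is a minimal sketch of what that linkage looks like once a shared identifier exists. The 'org_id' column is a hypothetical stand-in for something like a GLEI code, and the data frames are toy examples.

```python
# Minimal sketch: link records across sources using a shared identifier.
# 'org_id', the column names and the values are all hypothetical.
import pandas as pd

github_activity = pd.DataFrame(
    {"org_id": ["X1", "X2"], "public_repos": [14, 3]}
)
programme_participants = pd.DataFrame(
    {"org_id": ["X1", "X3"], "programme": ["innovation programme Y", "accelerator Z"]}
)

# With a shared identifier the join is a one-liner; without one, we are left
# with error-prone fuzzy matching on company names and addresses.
linked = github_activity.merge(programme_participants, on="org_id", how="outer")
print(linked)
```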

Is there more that governments and data platforms could do to encourage the uptake of business identifiers like GLEI? And what about unique identifiers for individual innovators, which might, for example, help us follow their trajectories to map the diffusion of ideas and measure the long-term impact of innovation programmes? Here, one could start with public sector solutions, like the Personal Identification Number which has enabled so much great innovation research in Nordic countries, and private ones, like user IDs in social media platforms such as LinkedIn or Twitter.

A coda and a caveat: Metrics are only the beginning

We need more and better data for innovation policy, and I believe that the agenda above, inspired by advances in scientometrics, would help. However, in order to have an impact, the new metrics that we develop need to be used, and used smartly. 

It is here that I should perhaps temper my optimism about the situation in the science domain, where scholars have long expressed concerns about the obsession with metrics, and the excessive and distortive influence of reductionist Impact Factors or university rankings. The complexity of the world will always overwhelm our ability to map it, and the uncertainty which is inherent to scientific, technological and entrepreneurial creativity will always thwart our desire to predict and control it.

Ignorance is not bliss though, and few would argue that the response to all these challenges is to stop measuring. The answer is to continue measuring, measuring better and measuring humbly, with an understanding of the limitations of metrics, learning by using and using what we learn.

These processes are at the heart of the scientific method, which can help us better understand and support science, and also innovation.

This blog received helpful comments from James Phipps.

The image that illustrates this blog is a cell structure seen through an early microscope in Robert Hooke’s Micrographia, an example of how better measurement can be used to reveal complexity rather than remove it.

Endnotes

[1] Scientometrics is a portmanteau of ‘science’ and ‘metrics’. It refers to the measurement of science, often using scientific and technological outputs such as publications or patents.

[2] Having said this, some people have matched micro-administrative data with innovation survey data and innovation programme data, to great effect.

[3] To be sure, other outputs such as books or artefacts are also important in Arts and Humanities disciplines.

 

Author

Juan Mateos-Garcia


Director of Data Analytics Practice

Juan Mateos-Garcia was the Director of Data Analytics at Nesta.
