One core challenge when supporting innovation is allocating resources to the most promising ideas under conditions of great uncertainty. Unfortunately, the current systems used by academic and governmental funding bodies can sometimes inadvertently sideline the most novel thinking.

Within academia, for instance, peer-review processes are typically used to judge funding applications. Despite efforts to make the process as objective as possible, studies of academic peer-review show that experts tend to mark down researchers in fields distant from their own, as well as those who diverge too much from the dominant mindset. As Jenn Gustetic, Program Executive for Small Business Research at NASA, says: 'there are fundamental limits to peer-review that are well known, and that are weeding out novel ideas.’

Part of the problem is that natural human tendencies towards motivated reasoning, confirmation bias, and tribalism are not entirely eliminated by the current processes. As Professor Dorothy Bishop from the University of Oxford, a seasoned peer-review panelist, says: 'You just had to have one negative opinion and it would typically shut down the chances for that proposal'. The net effect is a tendency toward uncontroversial opinions – despite the ostensible purpose of academia to promote divergent thinking. Carol Dahl, Executive Director of the Lemelson Foundation, agrees: 'Peer-review tends to take you to the most orthodox ideas', she says.

Other long-established concerns include the 'Matthew effect', a term coined by sociologist Robert Merton in the 1960s and derived from the Gospel of St Matthew, describing the phenomenon whereby eminent scientists are often given more credit than others for similar work, thus reinforcing their position.

Questions also remain about whether female academic researchers receive less funding than men. And although the ‘Haldane principle’ – the idea that decisions about UK research funding should rest with researchers rather than politicians – is supposed to reduce overt politicisation, there is also a growing body of research demonstrating the effects of political bias in the peer-review process.

As a result, many of those responsible for managing and supporting innovation are looking for new ways to make decision-making less biased.


How are efforts to reduce bias being used as an innovation method?

Randomised funding

One novel way of making funding decisions for the academic sector is ‘randomisation’, where resources are distributed (in part) by lottery. While this may, at first sight, seem to run counter to the quest for academic excellence, there are ways of introducing randomisation whilst retaining elements of quality control. For example, under one such system, proposals might be divided into three groups: a top category, all of which are funded; a bottom category, none of which are funded; and a middle category, in which funding is allocated at random.

The method removes some of the bias around funding: where the differences in quality between proposals are marginal, the decision is made by lottery rather than by people whose choices can be influenced by all sorts of prejudice. As researcher Shahar Avin describes, randomisation may be particularly helpful ‘where the search space is large or where feedback about success is slow, such as in more blue skies activity'.
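To make the mechanism concrete, the sketch below shows one way a partially randomised allocation could be implemented. It is a minimal illustration only: the thresholds, scoring scale, and function name are assumptions, not a description of any funder's actual process.

```python
import random


def allocate_funding(proposals, budget_slots, fund_threshold=0.8,
                     reject_threshold=0.4, seed=None):
    """Partially randomised allocation (illustrative sketch).

    `proposals` is a list of (proposal_id, review_score) pairs, with
    scores assumed to lie in [0, 1]; `budget_slots` is the number of
    awards available. Thresholds are hypothetical.
    """
    rng = random.Random(seed)  # a fixed, published seed keeps the draw auditable

    # Top band: funded outright. Middle band: entered into the lottery.
    top = [pid for pid, score in proposals if score >= fund_threshold]
    middle = [pid for pid, score in proposals
              if reject_threshold <= score < fund_threshold]

    funded = top[:budget_slots]
    remaining = budget_slots - len(funded)
    if remaining > 0 and middle:
        # Draw lots among the marginal proposals for the remaining slots.
        funded += rng.sample(middle, min(remaining, len(middle)))

    return funded


# Example with hypothetical review scores
proposals = [("A", 0.91), ("B", 0.85), ("C", 0.62), ("D", 0.55), ("E", 0.30)]
print(allocate_funding(proposals, budget_slots=3, seed=2024))
```

One design choice worth noting: publishing the random seed, or drawing the lottery publicly, is one way such a scheme could remain transparent to applicants.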

Already some organisations are experimenting with this approach. The New Zealand Health Research Council has used a random-number generator to help distribute some of its Explorer Grants, which are intended to support more radical research. The organisation’s Chief Executive, Kath McPherson, said, 'We believe that random funding is a fair and transparent way to choose between equally qualified applicants.'

In a similar way, Innovate UK has used a lottery to distribute vouchers that help pay for expert advice (subject to checks on eligibility and scope). The Nigerian government previously ran a successful programme of randomised grants for entrepreneurs, called YouWIN! Nigeria. And the Volkswagen Foundation partially randomised the funding of its 'Experiment!' grants, which are intended to search out bold new scientific ideas.

Of course, randomisation raises concerns. There are fears that it may encourage a ‘scattergun’ approach from applicants hoping to secure funding purely by luck. There are questions about how to set the minimum-quality threshold for including proposals in the lottery. There are also concerns from academics about moving decision-making further from the research community.[1]

Other questions concern how useful the approach is in circumstances where the value of research can be easily assessed, and whether it would be suitable for large research infrastructure projects, where the pool of potential participants is small and where decisions by lottery might not be politically acceptable.

Some of these concerns can be tackled with good design. Others may be more fundamental. Nevertheless, randomisation offers one potential way of reducing funding bias, especially in situations where the cost of evaluating proposals is large in relation to the overall grant being awarded – such as small grant schemes for early-stage researchers.


Egalitarian funding

Another method of decision-making that might help reduce bias within academia is ‘egalitarian funding’. Under this method, resources would be divided evenly among researchers rather than distributed through peer-review. There would be no bias in the direct decision-making about funding, because such decisions would no longer be ‘made’ in the same sense as before.

Compared to the current system, it might be cheaper to administer, and it would also address concerns about diminishing scientific impact per pound spent by holders of larger grants. Egalitarian funding could also reduce the dropout rate among researchers who do not receive funding, the workplace stress caused by intense competition, and the incentives to commit scientific fraud.

One objection to egalitarian funding is that it would spread resources too thinly. But modelling has gone some way towards alleviating such worries. According to research by Krist Vaesen and Joel Katzav, 'Researchers could, on average, maintain current PhD student and Postdoc employment levels, and still have at their disposal a moderate (the UK) to considerable (the Netherlands or the US) budget for travel and equipment'.

A more substantial challenge is deciding who acts as gatekeeper in an egalitarian funding system, since decisions would still need to be made about who participates and between whom resources should be divided. If this role were centralised, substantial power would be handed to a few people (which did not work out well in the Soviet Union, where, for example, scientist Trofim Lysenko’s refusal to accept Mendelian genetics held back biological research). A decentralised system of gatekeeping, perhaps at the level of universities, might help mitigate this problem.

Another potential barrier is reduced accountability to taxpayers, who ultimately fund much research yet would have no control over its direction, since no choices could be made about which research to support. Such a system might also weaken incentives for researchers to excel, since their performance would make no difference to their funding.

The future

The momentum to redesign funding decisions about research and innovation could be driven by many things. While funding for academic research has increased, the number of researchers has grown even more rapidly. This means more people are chasing each pound, so even high-quality proposals can be rejected. Globalisation also allows researchers to seek funding from many sources internationally, intensifying competition.

There is also growing interest in transformational and interdisciplinary research, which can be undervalued by traditional decision-making. The benefits of some of these new methods extend beyond reducing bias: peer-review takes a great deal of time and resources. In 2005/6, reviewers for the UK research councils are estimated to have spent a combined 192 years appraising proposals.

At the heart of these ideas is the research community’s recognition that the way innovation is currently managed and supported is not always backed by evidence. A new field of 'research on research' is emerging, pioneered by organisations such as the Innovation Growth Lab, to better understand these methods of managing and supporting innovation.