When is a crowd wise?
The challenge and promise of platforms linking problems and solutions.
Every few weeks, and sometimes every few days, I get approached by a team wanting support for a platform to connect innovators and potential users of their innovations. Quite a few platforms of this kind already exist.
Some emphasise problems and challenges and promise to find solutions: Innocentive, for example, lets organisations crowdsource solutions from invited audiences or open their challenges to its problem-solving network, while Nesta’s Challenge Prize Centre opens challenges, from antimicrobial resistance to assistive technology, up to anyone.
Others emphasise gathering and then spreading proven solutions, like the Global Innovation Exchange, a global online marketplace for innovations, funding, insights, resources and conversations, which enables collaboration on humanity’s greatest challenges.
Some promise to pool lots of people’s brains to solve difficult problems (for example, MindHive, which crowdsources ideas for policy and strategy from a network of expert contributors in universities, industry, NGOs and government, or Open Ideo, an open innovation platform that enables groups to work together on issues over a three- to five-month collaborative process). Quirky is a much more general platform for invention and creativity which has worked well in accumulating numbers and promoting inventors, but wouldn’t claim to address more complex problems.
Yet another group of existing and new platforms try to combine several of these. Sphaera aims to help people looking for solutions to collaborate on and adapt existing ones, and offers a matchmaking service. A startup called AHHHA tries to help innovators develop new commercial products and services. Solverboard aims to provide a space where businesses can post challenges, then filter and assess the solutions submitted and reward the best idea. NineSigma connects organisations, including GSK, Philips and Unilever, with innovators.
There’s no doubt about the need. Making the right connections can create huge value. The world is full of problems in need of solutions, and solutions in need of problems. The designers of new platforms want to make the world a smarter and better place, and also quite like the idea of being the ‘go to’ platform for innovation.
But how close is anyone to delivering on their promise? At Nesta we have used quite a few of these platforms. We’ve supported some (like Leading Edge Only which links big corporates with inventors), and our challenges.org site provides a comprehensive tool that’s grown out of the challenge prize team's work.
All the examples listed above are impressive in their own ways, and I don’t doubt that before long some very powerful tools for matching solutions and problems, and innovators with organisations wanting answers, will emerge. But I've ended up fairly sceptical about many of the ones I see. The platform aspect often turns out to be useful, but not nearly as decisive as many predicted a few years ago, mainly because the matching process is far more subtle than today’s platform technologies can handle. Instead it’s the combination of online and offline processes that often turns out to matter most, yet this combination is missing from most of the proposals I see.
There’s also a big difference between problems that are sharply defined, and for which there are ready-made solutions; problems that are well-defined but no-one has yet solved; problems that are by their nature insoluble (the best we can hope for is amelioration); problems that are intimately interconnected; problems that are owned by an organisation; problems that are shared by a community… and so on. Much of the language of problems and solutions too easily extrapolates from fields like maths and engineering where there are neat solutions to clear problems. But, as I’ve written before, most of the really difficult challenges, like climate change, ageing, unemployment or happiness, are very different in nature and require lots of complementary responses that even in the best scenarios are unlikely to add up to a ‘solution’ (I will explore these questions in much more detail in a forthcoming book).
What works for a platform, and what counts as success, will depend a lot on where it sits in this landscape. Yet too many of the proposals haven’t fully thought this through, or learned enough from others grappling with similar issues. So I usually ask anyone proposing a new platform a series of questions:
How will they get to a sufficient volume of both supply and demand to be useful and useable?
This, of course, is the challenge, or catch-22, of all platforms. They rely on network effects, but can only attract lots of users if they already have lots of users. The answer is usually to create very low barriers to joining the platform, and then to be demand driven. But only a tiny fraction of platforms achieve the volumes needed to be successful.
How will they get to sufficiently well-formulated problems and capabilities and avoid trying to link fuzzily defined problems and capabilities?
Our experience is that getting this right tends to be quite hard work, and requires a lot of specialist knowledge. I haven’t seen any platforms that can automate this. So most platforms risk being too full of ill-defined problems that will never find workable solutions. Given that this process can’t be automated, it has to be paid for somehow.
If they’re aiming to be a repository of good ideas, how will they avoid the risk of providing supply without demand?
We’ve built some repositories of our own, like living maps of job innovators, though on a shoestring compared to others. Any field can benefit from awareness of promising innovations that might be worth adopting or adapting. But most repositories of ideas struggle to find users. There is far less demand for good ideas than one might hope, and most people are more persuaded by a compelling presentation at a conference, or a site visit, than by an online repository. I wish this wasn’t true. But it is. The recent push to create repositories of innovations in fields like development will probably repeat the pattern.
If they’re aiming to be a platform that will only promote proven innovations how will they decide what ‘proven’ means?
Deciding what really is proven, and what really does work, isn’t straightforward. We've got quite a few answers to this, based on our standards of evidence framework, and appreciate there are plenty of alternatives. Our framework operates on five levels: level 1, where there is a clear logic model; level 2, where data has been collected; level 3, where the intervention has been tested with a control group; level 4, where this result has been replicated; and level 5, where the intervention has been systematised. But surprisingly few of the proposals I see have given much thought to what counts as proven, let alone to the deeper question not just of what works, but of what works where, when, for whom, and with whom.
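For readers building such a platform, the five levels above amount to an ordered classification, and a platform that only promotes ‘proven’ innovations has to pick a threshold on it. A minimal sketch of what that might look like in code, with hypothetical names and a hypothetical bar at level 3 (none of this comes from Nesta’s own tooling):

```python
from enum import IntEnum

class EvidenceLevel(IntEnum):
    """The five standards of evidence described above (names are my own)."""
    LOGIC_MODEL = 1     # a clear logic model for why it should work
    DATA_COLLECTED = 2  # data has been collected on the intervention
    CONTROLLED = 3      # tested against a control group
    REPLICATED = 4      # the controlled result has been replicated
    SYSTEMATISED = 5    # the intervention has been systematised

def meets_proven_bar(level: EvidenceLevel,
                     bar: EvidenceLevel = EvidenceLevel.CONTROLLED) -> bool:
    """Hypothetical rule: a platform might count 'proven' as level 3 or above."""
    return level >= bar
```

The point of the sketch is only that the threshold is an explicit design choice: a platform could just as easily set the bar at replication (level 4), and the question of what works where, when, for whom, and with whom is not captured by any single scale.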
How will they get the right community of users and contributors?
It’s likely to be much easier to link problems and solutions within a defined field, like assistive technology or software, where there is an existing community of practice, than where there isn’t. Hundreds of attempts have been made to create platforms that allow people to exchange or share anything and everything, and these nearly always fail. The successful examples tend to be more selective. Innocentive’s user group consists mainly of technicians, students and engineers, the majority holding a PhD. Our Open Data Challenges focused on programmers and data entrepreneurs. MindHive’s users are a network of expert contributors from universities, industry, NGOs and government.
Of course the risk of specialisation is that you miss out on the serendipitous connections that can be made across boundaries. But making the most of these turns out to require a lot of very active, and often expensive, facilitation of the kind that Open Ideo does well, or the kind of stage-gating built into models like the open data challenges.
What model of motivation lies behind the site and how well evidenced is it?
Many proposals describe sites that will pull in purely voluntary labour, without being clear why people will give their time, or at what scale. There are many examples of people sharing time and ideas for free, so long as there is some recognition and a sense that the cause is worthwhile. But that obviously depends on them having confidence in the organisation using the ideas, which may be easier if it’s a charity. Many private companies have interpreted open innovation as meaning that they can get free value from customers or others. But people aren’t stupid, and look with suspicion at any process that involves them giving something for free to someone else who will make a profit from it. Models which are explicit about either cash rewards or formal recognition, like Innocentive or Nesta’s own Challenge Prize Centre, seem to work better.
What have they learned from parallel attempts, past and present?
Nearly all the proposals I see present themselves as a uniquely brilliant device to use digital technology to solve the world’s problems. I’m still waiting for the first proposal that offers a serious account of lessons learned, and how they will avoid the mistakes others have made.
This is a fairly new field. It’s hardly surprising that the models aren’t yet mature, and don’t have much hard evidence. Rival platforms push their own solutions, and often make claims that don’t stand up to scrutiny (not least on the value of solutions they’ve contributed to). That’s true whether they’re very expensive or very cheap (and the costs and prices vary hugely).
In the prizes field we’ve tried both to be a player and to help the field to grow, highlighting and promoting models other than our own, and trying to think more rigorously about which tools work for which purposes. In a modest way we’d like to do the same for innovation platforms more generally, gathering together the ‘craft knowledge’ about which tools work for which purposes, what’s been learned from successes as well as failures, and pointing to how the field can become better at what it does.
This remains a field of huge promise. But the claim that crowds are always wise has turned out to be only half right. We need to get better at knowing how to help crowds make the most of their wisdom. That will require a bit more humility, honesty and willingness to learn.