About Nesta

Nesta is an innovation foundation. For us, innovation means turning bold ideas into reality and changing lives for the better. We use our expertise, skills and funding in areas where there are big challenges facing society.

Experiments in collective intelligence design 2.0: Collective Intelligence Grants Programme

In September 2019, we launched the second round of the Collective Intelligence Grants in partnership with the Wellcome Trust, the Patrick J. McGovern Foundation and Omidyar Network. Together we created a £500,000 fund and supported 15 diverse experiments to advance our understanding of how machine-crowd cooperation can be used to address complex social issues.

This report describes the experiments we funded, highlights the main findings and outlines their relevance for the field of collective intelligence. From using a serious game played by adults and children to diagnose neglected global diseases (and train an AI model to do the same), to testing whether robot swarms can help groups make better decisions, the experiments demonstrate how crowd-machine collaboration can help us tackle some of society's most important challenges.

This report is primarily aimed at practitioners and innovators who want to apply collective intelligence to address social challenges. We hope, however, that the insights will also inspire more funders, decision-makers and academics to take this research further.

Based on Nesta’s experience of supporting this cohort of grantees, we outline six key priorities for future research and experimentation.

1. Establishing the right partnerships to do collective intelligence well.

Partnerships are important for ensuring that collective intelligence projects have the diverse technical and people skills needed to succeed. They are also essential for testing and scaling cutting-edge CI methods developed in experimental lab-based environments in real-world contexts. There is an important role for Nesta and other funders to play in facilitating and enabling successful collaborations.

2. Researching how to effectively recruit participants and sustain engagement.

Recruitment and engagement were a challenge for many of the grantees. Some experimented with different marketing approaches or incentives, but there is a significant gap in understanding how to effectively tap into public motivations for participating. Given the centrality of public participation to collective intelligence initiatives, this is an area in need of further research.

3. Funding innovation in tools for collective decision making.

We have seen growing numbers of grant proposals that seek to use a crowd for gathering or classifying data to train machine learning models. Although these experiments are important, we believe there is a critical innovation gap in digital tools for collective decision making.

The grantees in this cohort who tested tools to improve collective decisions explored how altering network connectivity and communication influences group behaviour, and how manipulating connectivity can improve crowd forecasting. More research is needed to optimise these techniques and tools, and test them in different contexts and with different audiences.

4. Understanding how best to integrate collective intelligence tools into established workflows.

A number of our experiments highlighted that introducing new tools and AI can disrupt established practices and workflows, rendering the tools less helpful than anticipated. To maximise the potential of new tools, user research is needed to support integration, and training will be required to equip users with the skills and knowledge necessary to make use of them.

5. Developing cooperative human-machine systems.

A number of recent studies have shown that AI-only teams often outperform human-AI teams, and this appeared to be the case for some of the experiments conducted in this programme. However, the widespread and well-known risks of AI highlight the importance of developing cooperative human-machine systems. This will require innovation in cooperative AI that matches or overtakes the interest in adversarial AI, and in particular, giving more attention to AI-human cooperation in the context of group problem-solving.

6. Designing and testing systems that enable positive collective behaviours.

Future research in this field should continue to explore systems and tools that can support positive collective behaviours. Overcoming the societal challenges of our time, from climate change to future pandemics, will undoubtedly require tools that can support changes in collective behaviour and collective social action.


Kathy Peach


Director of the Centre for Collective Intelligence Design

The Centre for Collective Intelligence Design explores how human and machine intelligence can be combined to develop innovative solutions to social challenges



Issy Gill


Senior Researcher, Centre for Collective Intelligence Design

Issy supported the Centre’s growing portfolio of collective intelligence research projects and managed the Collective Intelligence Grants Programme.
