About Nesta

Nesta is an innovation foundation. For us, innovation means turning bold ideas into reality and changing lives for the better. We use our expertise, skills and funding in areas where there are big challenges facing society.

When it comes to artificial intelligence (AI), the dominant media narratives often end up taking one of two opposing stances: AI is the saviour or the villain. Whether it is presented as the technology responsible for killer robots and mass job displacement or the one curing all disease and halting the climate crisis, it seems clear that AI will be a defining feature of our future society. However, these visions leave little room for nuance and informed public debate. They also help propel the typical trajectory followed by emerging technologies: with inevitable regularity, new technologies ascend to a peak of inflated expectations they cannot fulfil, before being doomed to a period languishing in the trough of disillusionment.[1]

There is an alternative vision for the future of AI development. By starting with people first, we can introduce new technologies into our lives in a more deliberate and less disruptive way. Clearly defining the problems we want to address and focusing on solutions that result in the most collective benefit can lead us towards a better relationship between machine and human intelligence. By considering AI in the context of large-scale participatory projects across areas such as citizen science, crowdsourcing and participatory digital democracy, we can both amplify what it is possible to achieve through collective effort and shape the future trajectory of machine intelligence. We call this 21st-century collective intelligence (CI).

In The Future of Minds and Machines we introduce an emerging framework for thinking about how groups of people interface with AI, and map out the different ways that AI can add value to collective human intelligence and vice versa. The framework has, in large part, been developed through analysis of inspiring projects and organisations that are testing out opportunities for combining AI & CI in areas ranging from farming to monitoring human rights violations. Bringing together these two fields is not easy. The design tensions identified through our research highlight the challenges of navigating this opportunity and point to the criteria that public sector decision-makers should consider in order to make the most of solving problems with both minds and machines.[2]

What is in this report?

Sections 1 to 3 provide an overview of AI, CI and how they can be brought together to solve problems. Sections 4 and 5 describe the challenges faced by CI and how AI methods can help. Sections 6 and 7 demonstrate how AI is already enhancing CI and helping it to scale, as well as how CI could help build better AI. Building on this, sections 8 and 9 highlight the design questions that need to be considered by anyone wanting to make use of these new innovation methods. The report concludes with a number of recommendations for policymakers, and for those involved in funding, researching and developing AI & CI solutions, on how to support this new field.[3]

Intended audience

This report is aimed at innovators working in public sector and civil society organisations who have some experience with participatory methods and want to understand the opportunities for combining machine and collective human intelligence to address social challenges. We hope that it can serve as inspiration for funders who care about determining a trajectory for AI that can bring the broadest possible societal benefit.

This report will also be relevant for technology and research communities with an interest in new opportunities for solving real-world problems, in dialogue with decision-makers and members of the public. Ultimately, we aim to stimulate more communication and collaboration between all of these groups.


Acknowledgements
We are very grateful for the valuable insight and feedback on this research from our colleagues at Nesta, Kathy Peach, Thea Snow, Zosia Poulter, Katja Bejo, Jen Rae, Jack Orlik, Harry Armstrong, Bea Karol Burks, Geoff Mulgan, Kostas Stathoulopoulos, Markus Droemann and Eva Grobbink, as well as the wider Explorations and Research, Analysis and Policy teams.

We would like to thank the LAMA Development and Cooperation Agency for the extensive background research and horizon scan they undertook, which underpin many of the lessons in this report. At LAMA, we would especially like to thank Elena Como, Eleonora Corsini, Stefania Galli, Dario Marmo and Walter Nunziati.

This report is based on the insights and experiences of a whole range of projects and researchers working at the forefront of exploring the AI & CI field through research and practice. We would like to thank the following people and organisations for taking part in interviews and sharing feedback on early stages of the research.

Mollie Zapata, Daniel Perez Rada, Franco Pesce, Karel Verhaeghe, Erik Johnston, Andrew Bagdanov, Mauro Lombardi, Nathan Matias, Jeff Deutsch, Juliene Corbiese, Walter Lasecki, Hannah Wallach, Anna Noel-Storr, Stefana Broadbent, Dirk Helbing, Carlo Torniai, Mark Klein, Stefan Herzog, Nalia Murray, David Cabo, Vito Trianni, Harry Wilson, Sara Caldas, Louis Rosenberg, Colin Megill, Francis Heylingen, Elian Carsenat, Ashvin Sologar, Erin Rees, Stefano Merler and Zooniverse.

The visual assets for this report were created by Margherita Cardoso of Soapbox. Figures 2 and 3 were created by Lily Scowen for the Collective Intelligence Design Playbook.

[1] The terms ‘peak of inflated expectations’ and ‘trough of disillusionment’ are taken from the Gartner Hype Cycle.

[2] The research has been informed by an analysis of 150 existing CI projects that make use of AI, desk research, literature review and interviews with experts working across both fields.

[3] The report touches on current debates in fairness and disparate impact but sets aside the detailed examination of AI Ethics, which is beyond scope. Instead, we recommend the publications being produced by the Berkman Klein Center, the Oxford Internet Institute, Data & Society, the AI Now Institute, the FAT ML (Fairness, Accountability, and Transparency in Machine Learning) community and countless others who give these issues the attention and critical discussion that they deserve.


Aleks Berditchevskaia

Principal Researcher, Centre for Collective Intelligence Design

Aleks Berditchevskaia is the Principal Researcher at Nesta’s Centre for Collective Intelligence Design.

Peter Baeck

Director of the Centre for Collective Intelligence Design

Peter leads work that explores how combining human and machine intelligence can develop innovative solutions to social challenges.
