About Nesta

Nesta is an innovation foundation. For us, innovation means turning bold ideas into reality and changing lives for the better. We use our expertise, skills and funding in areas where there are big challenges facing society.

Experiments in combining human and machine intelligence for social good

In September 2018 we launched the first wave of Collective Intelligence Grants, offering up to £20,000 to organisations with ideas for experiments that could advance knowledge about how best to design and apply collective intelligence to solve social problems.

We were excited to receive over two hundred initial applications, which demonstrated both the rapid growth of the field and the quality and seriousness of the ideas. Selecting just twelve of these was a difficult task, and we are grateful to the external advisers and colleagues who contributed their advice.

We are delighted to announce these first twelve experiments. By combining human intelligence and technology they aim to find new ways of solving divisive problems, enable us to see the world through others’ eyes, and help us to become smarter together.

Our hope is that these experiments may prompt much bigger funders to do more. There is currently a vast imbalance between the seemingly limitless funding for Artificial Intelligence (AI) and the resources dedicated to collective intelligence. Yet the stakes could not be higher. Progressing collective intelligence is in many ways humanity’s grandest challenge, since there is little prospect of solving climate change, epidemics or conflict without progress in how we think and act together.

Making better decisions together

Many of the complex challenges we face, from ageing populations to air pollution, need collective agreement and action to solve. They may need individuals to change their behaviour and give up resources or convenience to benefit future people they may never meet. These trade-offs are hard enough, but as societies become increasingly polarised, finding common ground becomes both more difficult and more critical.

Four of our experiments are exploring different ways technology can help us overcome barriers to collective decision-making and take greater responsibility for the future.

AI Lab is introducing AI agents into group discussions on collective risks such as climate change. Will people delegate responsibility to artificial autonomous agents, and will this help the group make more responsible, long-term decisions?

FanSHEN is using digital storytelling to help schoolchildren understand how the world looks different to others. Will this perspective-taking lead to greater empathy, and will improving metacognitive skills help these groups become more collectively intelligent as a result?

ISTC is creating hybrid human and artificial agent systems. Can the artificial agents mediate group discussions? Can they make us more conscious of how our own biases influence whose views we listen to and why? Will this change the result of group decisions?

Unanimous AI is exploring whether its algorithms that mimic the ‘swarm’ behaviour of bees and fish can help groups resolve politically divisive issues more satisfactorily than traditional voting methods.

Making better use of crowd wisdom

Crowd insights and data harnessed through collective intelligence have led to breakthroughs in traditionally elite professional fields such as scientific research and healthcare. They have changed the way laws get made and enabled us to improve our understanding of situations in real time. But as the volume of data increases, so do the challenges of navigating and analysing it.

Four of our experiments will use machine learning to unlock new insights in large volumes of citizen-generated data. They will also explore how this changes the uptake and use of that data to address social problems.

The Alan Turing Institute is using machine learning and natural language processing to help like-minded citizens find each other and similar proposals more easily on the Consul digital democracy platform. Will this make it easier for people with shared interests to work together and get their ideas heard by policy-makers?

CitizenLab aims to turn the thousands of insights generated by citizens on its platform into digestible policy recommendations for city officials. Will this shorten response times and increase the uptake of citizen ideas by cities?

Swansea University will use machine learning to classify and organise crowdsourced footage of drone strikes in Yemen. Could this help human rights investigators use the data to build cases against perpetrators, and will it enable courts to accept crowdsourced data as evidence?

Huridocs will also test the potential for human rights organisations to interact with machine-generated intelligence in their work. Can semantic search be used to better understand patterns and trends in digital rights restrictions across countries in the Arab League?

Sustaining and improving contributions from the crowd

Anyone who has ever run a project relying on crowd contributions will understand the difficulties of sustaining engagement. Two of our experiments will explore this critical aspect of collective intelligence design as well as testing strategies to improve the quality of crowd participation.

Edinburgh University will test whether personalised recommendations increase the engagement, retention and quality of contributions that people make to citizen science projects on the SciStarter platform. We’re all used to recommendation algorithms suggesting films or products to us, but can they also be used to get us more involved in citizen science?

Southampton University will test different strategies for keeping people engaged in crowdsourced analysis of drone footage for humanitarian relief efforts (for example, to identify where assistance is needed). As media attention fades, so can volunteer engagement. Can we find ways to keep people contributing and taking on more difficult tasks for development efforts over the longer term?

New applications of collective intelligence

At the Centre for Collective Intelligence Design, we’ve been studying how collective intelligence is helping to address environmental issues, improve health and deliver better democracy. We believe collective intelligence has the potential to help solve many issues - including some of the most pressing ones we face today.

Two of our experiments will test new use cases for collective intelligence - in education and in tackling urban food waste.

The Behavioural Insights Team will be using data from the online maths assessment platform Eedi to see if collective intelligence can uncover new insights into effective teacher feedback. Despite being a key part of teachers’ jobs, little evidence exists on how to give feedback that improves student performance and learning. Can this data help identify the best feedback practice, and will it show how teachers can be encouraged to adopt it?

The importance of reducing food waste is well known, but actually doing it is much harder. Hong Kong Baptist University will test whether leftover bread can be more effectively redistributed around a city by volunteers using a collective intelligence platform to self-organise. If successful, other potential use cases might include moving aid around a city after emergencies.

Over the next ten months our grantees will be running their experiments, and we’ll be reporting on their findings in Spring 2020. Find out more about the experiments, and follow their progress at #CollectiveIntelligence #Experiments.

Our second round of grants will open this summer. We hope these experiments will inspire others with great ideas to apply.

Authors

Kathy Peach

Director of the Centre for Collective Intelligence Design

The Centre for Collective Intelligence Design explores how human and machine intelligence can be combined to develop innovative solutions to social challenges.

Eva Grobbink

Researcher, Centre for Collective Intelligence Design

Eva was a Researcher working in the Explorations team at the Centre for Collective Intelligence Design.
