In September 2019, we launched the second round of the Collective Intelligence Grants in partnership with Wellcome Trust, Cloudera Foundation, and Omidyar Network. Together we created a £500,000 fund for experiments that could generate actionable insight on how to advance collective intelligence to solve social problems.

We're supporting 15 new experiments in machine-crowd cooperation to address complex challenges. Find out what they are and who is behind them.

Exploring AI-crowd interaction

At the Centre for Collective Intelligence Design, we believe that to tackle complex problems we need to mobilise all the resources of intelligence available to us. That’s why we are working to understand how to best combine the complementary strengths of machine intelligence and collective human intelligence.

Six of our grantees are experimenting with AI and crowd interaction:

Support of humanitarian mapping with open machine learning

Can machine learning increase the quality and speed of mapping by humans in areas under-represented on open-source maps?

Who is behind this experiment?
Humanitarian OpenStreetMap Team (lead)
Netherlands Red Cross (partner)

What is the experiment?
This experiment will test whether using AI-generated map features can increase the quality and speed of mapping by humans on OpenStreetMap (OSM). It will train and test a number of mapping algorithms using existing OSM data and satellite imagery.

Why is it relevant?
As the situation in emergency contexts changes rapidly, there is a need for up-to-date data. The amount of existing data and the speed at which it shifts make it difficult for humans alone to keep up with mapping data for humanitarian action. For machine learning models to help humans make sense of all this information, the training data for those models needs to be accurate, high quality, and available everywhere.

How might the findings help people better design collective intelligence?
The experiment will increase our understanding of how crowds can most effectively and efficiently collaborate with machine learning models. This is relevant for anyone relying on up-to-date information, such as public health authorities or humanitarian organisations.

Localised insights for better humanitarian drought response

Can collective intelligence improve the decision-making that drives humanitarian response by involving affected communities and local actors in data analysis and interpretation, and by combining multiple data sets?

Who is behind this experiment?
International Organization for Migration

What is the experiment?
This experiment will test whether a collective intelligence platform can improve the effectiveness of humanitarian response. The platform will combine multiple data sets (e.g. on displacement, rainfall, vegetation) as a basis for an AI to generate potential response options to drought and the displacement it causes. It will engage humanitarian actors and affected communities in data analysis to help improve decision-making and better target humanitarian response in disaster situations.

Why is it relevant?
Often it is not the absence of data that hampers the effective response to a humanitarian crisis, but the inability to process all available information in a timely and complementary manner. At the same time, while many actors contribute to collecting and disseminating information on the numbers, locations and needs of displacement-affected populations in crises, the voices of those communities are rarely involved in the processes that lead to decisions directly impacting their lives. This experiment addresses both these challenges by putting affected communities at the centre of data analysis and combining human and machine intelligence to better target humanitarian response.

How might the findings help people better design collective intelligence?
This experiment will generate insights into how to design collective intelligence platforms to make better decisions in a more inclusive manner, particularly in situations where outcomes are time-critical. The findings could be relevant for anyone having to rapidly react to disasters or emergencies, including development agencies, humanitarian organisations, and local authorities. The experiment will also increase our understanding about how humans and machines can collaborate effectively.

Harnessing the wisdom of crowds for more accurate medical diagnostics

How can we increase the accuracy of collective diagnoses from medical professionals on the HumanDx platform?

Who is behind this experiment?
Istituto di Scienze e Tecnologie della Cognizione (ISTC-CNR) (lead)
Max Planck Institute for Human Development (partner)
The Human Diagnosis Project (Human Dx) (partner)

What is the experiment?
The experiment will explore whether AI tools known as knowledge graphs can help physicians on the HumanDx platform produce more accurate medical diagnoses. HumanDx is an online system combining the collective intelligence of medical professionals and trainees with machine learning to allow collaboration on any case, question, or other medical topic. This experiment will compare collective diagnoses aggregated through knowledge graphs to individual diagnoses and manually aggregated collective diagnoses to understand which method is most accurate and effective.


Why is it relevant?
Misdiagnosis is a huge problem in the medical field, leading to incorrect treatment, erosion of trust in the healthcare system, and even deaths. Online platforms such as HumanDx, which connect patients and doctors globally, open up collective intelligence approaches to diagnosing diseases. On those platforms, doctors don’t just decide whether a disease is present or absent but can freely add their own diagnoses, which means experts then have to analyse all the proposed diagnoses. This is time-consuming and complex, and often not feasible in practice. This experiment explores knowledge graphs as a novel method to automate this process while ensuring the highest possible diagnostic accuracy.
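
To make this aggregation challenge concrete, here is a deliberately simplified sketch in Python. It is not the grantees' method: it only maps free-text diagnoses onto canonical concepts through a hand-written synonym table (a crude stand-in for a knowledge graph) and ranks each concept by how many clinicians proposed it.

```python
# Simplified illustration only, not the Human Dx / ISTC-CNR method.
from collections import Counter

# Hypothetical synonym map; a real knowledge graph would encode far richer relations.
CONCEPT_MAP = {
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
    "angina": "angina pectoris",
}

def aggregate_diagnoses(answers):
    """answers: one list of free-text diagnoses per clinician."""
    counts = Counter()
    for clinician_answers in answers:
        # Map each answer to a canonical concept; each clinician counts a concept once.
        concepts = {CONCEPT_MAP.get(a.lower().strip(), a.lower().strip())
                    for a in clinician_answers}
        counts.update(concepts)
    # Rank candidate diagnoses by how many clinicians proposed them.
    return counts.most_common()

print(aggregate_diagnoses([
    ["Heart attack", "angina"],
    ["MI"],
    ["Angina pectoris", "myocardial infarction"],
]))
# -> [('myocardial infarction', 3), ('angina pectoris', 2)]
```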


How might the findings help people better design collective intelligence?
The experiment will generate new insights into how to correctly represent and aggregate the knowledge of each individual in a collective. It will also increase our understanding about the validity of the wisdom of the crowd effect beyond binary choices. The findings are not just relevant for the medical diagnostics field, but will be useful for any type of collective intelligence platform that deals with open-ended questions, such as in the field of digital democracy.

Engineered serendipity for innovation

Can a serendipity-inducing recommendation algorithm improve a group’s creative problem-solving ability?

Who is behind this experiment?
neu (Augmented Thinking) (lead)
City, University of London (partner)

What is the experiment?
The experiment will test the capacity of a serendipity-inducing recommendation algorithm to increase creativity and innovation in tackling complex social problems.

In this experiment, different groups of innovators will try to find solutions to the issues of plastic pollution in South East Asia and the ageing population in the UK. Some groups will do this with the help of a crowd-powered search engine using autocompletion and collaborative filtering, others will have just a regular search engine, and some will be supported by a serendipity-inducing recommendation algorithm.

Why is it relevant?
“Design waste” - reinventing the wheel, starting from scratch, or focusing on the wrong problem - is still an issue for innovators seeking solutions to social problems. Serendipity - chance or accidental discovery - is an important aspect of creativity, but has not been studied in the domain of collective creative problem solving. This experiment will test whether a collective intelligence-based serendipity algorithm can increase the chance of “happy accidents” and thus inspire new solutions. It will also test what balance of accuracy and unexpectedness (closely versus distantly related recommendations) is helpful for which kind of creative task.
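
One way to picture the accuracy versus unexpectedness trade-off is as a re-ranking score that mixes how relevant an item is to the current task with how far it sits from what the group has already seen. The sketch below is purely illustrative and assumes items are represented as embedding vectors; it is not neu's algorithm.

```python
# Illustrative sketch only: re-rank candidates by a weighted mix of relevance to
# the query and unexpectedness relative to results the group has already explored.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def serendipity_rank(query_vec, seen_vecs, candidates, alpha=0.6):
    """candidates: dict of item_id -> embedding vector (assumed to be given)."""
    scored = []
    for item_id, vec in candidates.items():
        relevance = cosine(query_vec, vec)                    # closeness to the task
        familiarity = max(cosine(vec, s) for s in seen_vecs)  # closeness to what was already seen
        unexpectedness = 1.0 - familiarity
        score = alpha * relevance + (1 - alpha) * unexpectedness
        scored.append((score, item_id))
    return [item_id for _, item_id in sorted(scored, reverse=True)]
```

Shifting alpha towards 1 reproduces an ordinary relevance ranking; shifting it towards 0 favours the “happy accidents” described above.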

How might the findings help people better design collective intelligence?
The findings of this experiment will contribute to understanding how serendipity can support creative problem-solving. The insights will also contribute to our understanding of how to improve recommendation algorithms.

Volunteers and AI working together to counter cyber violence

Can an AI-based detection system and humans work together to reduce the level of online harassment on Reddit?

Who is behind this experiment?
Samurai Labs

What is the experiment?
The experiment will explore whether a group of human volunteers from within the Reddit community can work together with an AI-based detection and notification system to decrease the level of cyber violence on Reddit over a period of six months. It will also test the effectiveness of two different types of verbal interventions by volunteers - normative and empathy-based actions.

Why is it relevant?
Online platforms are great tools for communities to discuss, share knowledge, or solve problems together. However, hate speech, online harassment, and cyber bullying are huge issues on those platforms. On most platforms, human moderators identify posts containing cyber violence and take action through account suspensions or deletion of posts. This approach is not always very effective, as people can simply open new accounts, for example. In addition, the volume of content makes this a difficult and time-consuming task for human moderators. In this experiment, an AI-based detection system will send volunteers from within the community an alert when violent language or bullying behaviour is identified. It will also encourage them to respond to “offenders” personally to try to change their behaviour.
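
The toy sketch below illustrates this human-AI loop. The keyword "detector" and message fields are invented for illustration; Samurai Labs' actual detection system is far more sophisticated.

```python
# Toy illustration of routing a flagged comment to a volunteer, with one of the
# two intervention styles (normative or empathy-based) suggested at random.
import random

TOXIC_MARKERS = {"idiot", "loser"}  # invented stand-in for a real detection model

def detect(comment):
    """Flag a comment if it contains one of the toy markers."""
    return any(word in comment.lower() for word in TOXIC_MARKERS)

def route_to_volunteer(comment, author):
    """If the detector flags a comment, alert a volunteer and suggest an intervention style."""
    if not detect(comment):
        return None
    style = random.choice(["normative", "empathy"])
    return {"author": author, "comment": comment, "suggested_style": style}

print(route_to_volunteer("you are such a loser", "user123"))
```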

How might the findings help people better design collective intelligence?
The findings of this experiment will be useful for anyone trying to improve algorithms to reduce cyber bullying on any online platform, and will generate new insights into how humans and algorithms can work in tandem effectively.

IntelSpot

Can citizens playing online games be as effective as physicians in training AI models for image diagnosis?

Who is behind this experiment?
SpotLab

What is the experiment?
This experiment will test whether AI models trained by the general public can outperform medical specialists in analysing medical imagery for the diagnosis of global diseases such as malaria. The models are trained by volunteers who perform medical image analysis while playing serious online games.

Why is it relevant?
Usually, medical diagnosis requires the time of an expert analyst at a certain place at a certain time. The cost and scarcity of medical specialists mean that those suffering from diseases linked to poverty, such as malaria, parasite infections, or leishmaniasis, may not receive the right treatment fast enough. AI models trained by volunteers analysing digitised and anonymised biological samples could reduce the dependence on experts alone and make access to medical diagnosis for global health diseases immediate and affordable. This experiment will validate the accuracy of those models by testing them against the performance of medical specialists.

How might the findings help people better design collective intelligence?
The experiment will help us understand which visual tasks can be taught by a crowd to AI models. The findings will also help us advance our understanding about how to best combine crowd-sourced non-expert knowledge with the capabilities of machines.

Better collective decisions

As we become more polarised in our views, and challenged by the need to make rapid decisions on emerging issues, it can be hard to ensure that we listen to diverse perspectives. But research shows that embracing diversity of experience and opinions is key to better decisions and solving problems more effectively.

Two of our grantees are testing new methods for more inclusive collective decisions:

Crowd prediction with algorithmically moderated social networks

Will algorithmically moderating a social network to maintain diversity of opinion improve collective forecasting?

Who is behind this experiment?
Centre for Cognition, Computation, and Modelling (Birkbeck, University of London)

What is the experiment?
This experiment will deploy an algorithm to moderate deliberating groups online and find the optimal balance between individual competency and opinion diversity to increase collective accuracy.

Experiment participants will indicate the probability of ten real-world events. They will then view the forecasts of others and be invited to revise their own forecasts for the events. The algorithm influences the communication between individual group members (who all have some relevant knowledge about the events) based on how people adapt their answers to the questions asked.
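
As a rough illustration of what such moderation could look like (an assumption on our part, not the team's actual algorithm), the sketch below picks a handful of peers whose forecasts to show a participant, trading off each peer's track record against how much their current forecast differs from the participant's own.

```python
# Illustrative sketch only: balance peer accuracy against opinion diversity when
# deciding whose forecasts a participant gets to see.
def select_peers(own_forecast, peers, k=3, weight_diversity=0.5):
    """peers: list of (peer_id, forecast_probability, past_accuracy) tuples."""
    scored = []
    for peer_id, forecast, accuracy in peers:
        diversity = abs(forecast - own_forecast)  # distance between opinions, in [0, 1]
        score = (1 - weight_diversity) * accuracy + weight_diversity * diversity
        scored.append((score, peer_id))
    return [peer_id for _, peer_id in sorted(scored, reverse=True)[:k]]

# Example: a participant who forecast 0.2 is shown three peers chosen for a mix
# of past accuracy and differing opinion.
print(select_peers(0.2, [("a", 0.9, 0.7), ("b", 0.25, 0.9), ("c", 0.6, 0.5), ("d", 0.85, 0.4)]))
# -> ['a', 'd', 'b']
```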

Why is it relevant?
The accuracy of a decision taken by a group depends on individuals' competence, group size, and the diversity of opinion among group members. While communication between group members is important, any group making collective decisions can also be negatively influenced by social biases. For example, participants might make a wrong judgement about how competent another group member is, or simply follow the majority in the group. This can mean that the diversity of opinions is significantly reduced, which decreases group accuracy. This experiment uses an algorithm to maximise opinion diversity while allowing group members to exchange knowledge, in order to produce the most accurate collective forecasts.

How might the findings help people better design collective intelligence?
The experiment will generate new insights into how to make groups wiser by adding to our understanding of how group interaction influences opinion diversity. It will also increase our understanding about how this balance influences the accuracy of collective decision processes. The findings will be relevant across domains where decisions need to be taken that have an objectively correct answer, for example in the field of medical diagnostics or intelligence analysis.

Enhancing deliberative democracy with robot swarms

Can a swarm of robots interacting with humans facilitate social interaction and help people reach informed consensus?

Who is behind this experiment?
University of Bristol

What is the experiment?
This experiment will test whether a swarm of 100 small robots can act as a decision-support system in large (human) group decision-making scenarios such as conferences. The experiment will explore whether the robots can communicate opinion diversity and help a crowd to reach inclusive and informed consensus. By visibly displaying each participant's opinion as a colour, the robots physically guide participants through a room to meet others with different opinions, and aggregate this information by communicating with one another.

Why is it relevant?
Deliberation and decision-making can benefit from open discussion and opinion diversity. But integrating diverse opinions and information is one of the biggest problems in collective intelligence. With decisions becoming more complex and involving more people, technology-driven decision-support systems have emerged as a tool to facilitate this process and to make it more inclusive. However, many of those tools involve software on computers, phones, or tablets that might limit the meaningful human interactions which are crucial for increasing people’s understanding of complex issues and building community relationships. This experiment explores a novel type of decision-support system, namely crowd-robot interaction, that maintains meaningful human interactions.

How might the findings help people better design collective intelligence?
This experiment will generate new insights into the design of better decision-support systems. These could be used for reaching consensus in the context of deliberative democracy, or deployed at social gatherings to encourage mixing. The findings on opinion dynamics could also lead to recommendations about how to facilitate social gatherings, with or without robots, to address issues like social anxiety in crowds and social exclusion, and to support solution-scoping for difficult problems.

Understanding the dynamics of collective behaviour

The importance of understanding collective behaviour in relation to disasters such as floods and pandemics is obvious. It is also significant in tackling many of the complex challenges that we are grappling with in the 21st century, from rising obesity levels to living within planetary boundaries.

Three of our grantees will investigate how positive behaviour spreads and how collective behaviour change can be encouraged:

Collective intelligence for diabetes control

Can positive deviance and data-driven segmentation help patients with poor diabetes control learn from those ‘like them’ who are successfully managing their disease?

Who is behind this experiment?
Istituto di Elettronica e di Ingegneria dell'Informazione e delle Telecomunicazioni (IEIIT-CNR) (lead)
Queen’s University (partner)
Ryerson University (partner)

What is the experiment?
This experiment will test whether successful outliers (positive deviants) and patient segmentation can drive behaviour change among diabetes patients. It will divide diabetes patients based on whether they prefer lifestyle modifications or medication to treat their condition, to deliver tailored virtual peer-to-peer group workshops so patients can share and learn from the success of others in similar clusters. In addition, the experiment will investigate whether machine learning can help to extract the reasons for this anticipated behaviour change.

Why is it relevant?
In any community there are people whose uncommon but successful behaviours or strategies enable them to find better solutions to a problem than their peers, despite facing similar challenges and having no extra resources. While digital platforms exist where patients can exchange their experiences or coping strategies, the potential for peers to support one another remains an underused resource in medicine. Illnesses such as obesity and diabetes are a big problem in the Western world and new approaches to encourage behaviour change are urgently needed. By testing positive deviance and peer learning in a new context, this experiment offers a potentially powerful collective intelligence approach to tackling some of the biggest health challenges.

How might the findings help people better design collective intelligence?
The experiment will generate new insights into how to disseminate “good” condition management behaviour throughout patient populations. It will help us understand how segmenting patients by their behaviour rather than by their condition can improve the spread of positive behaviour. This will be relevant for medical professionals, technologists in the space of digital health, or policy makers.

Understanding collective behaviour during resource scarcity

How do groups make decisions to share limited resources among themselves, and how does behaviour spread across social groups?

Who is behind this experiment?
University of Nottingham
RMIT University
University of Tasmania

What is the experiment?
This experiment will explore group behaviour in the context of resource shocks. The team will test different levels of social connectivity and communication within and between overlapping social groups on an economics experiment platform, in order to improve coordination in response to collective challenges such as shared resource depletion.

Why is it relevant?
Scarcity of resources will become a significantly more pressing and more global problem in the future. Climate change will increase pressures on shared resources, particularly water. Managing such challenges requires people to cooperate in using limited resources responsibly and to coordinate in sharing information. Learning how behaviour spreads across overlapping social groups such as families, friendship circles, or work teams, and how this spread can be influenced, is crucial to understanding how positive practice can be encouraged when resources are squeezed.

How might the findings help people better design collective intelligence?
The experiment will generate new insights into how human populations can achieve cooperation and coordination through their different social networks. The findings will be useful for policy makers, emergency response teams and other practitioners tasked with coordinating the distribution of limited resources.

Pollution Explorers Collective Action

Can a combination of collective environmental assessment and collective action sustain behaviour change among local citizens to decrease air pollution in Tower Hamlets?

Who is behind this experiment?
Umbrellium
Loop Labs
Tower Hamlets Council

What is the experiment?
This experiment will explore different collective approaches to encourage and sustain behaviour change among citizens to improve air quality in the London Borough of Tower Hamlets. The team will test whether collective environmental assessment and collective action enable people to sustain behaviour change for actions that are known to reduce air pollution, even though the direct individual effects of these actions on air pollution might not be immediately noticeable. More specifically, the experiment aims to show that collectively activated and connected participants are more likely to sustain behaviour changes perceived as more difficult (and, in aggregate, more impactful) than participants who are not collectively activated or connected.

Why is it relevant?
Cities everywhere are under pressure to find sustainable and innovative solutions to meet air quality targets, particularly in the context of constrained budgets for local governments. While improving air quality in cities requires effective policy interventions, bottom-up interventions are also important to give people agency and an active, lasting stake in the aims of such policies. Increasing the awareness of pollution levels among citizens is crucial to encourage and sustain behaviour change. This experiment will test a novel approach to give citizens a greater impetus to act by connecting collective sensing with collective action.

How might the findings help people better design collective intelligence?
The findings from this experiment will help build understanding of the tactics that could encourage and sustain behaviour change among citizens. The experiment’s outcomes will be relevant for local authorities or any other organisations trying to formulate effective citizen participation strategies to work on a broad range of issues from managing city resources to tackling environmental issues.

Collective intelligence for better data

Around the world there has been a data revolution driven by advances in information technology. But for many developing countries and many complex issues there are still data gaps. Collective intelligence approaches that involve people in generating or classifying data can help create more localised and real-time information, address bias in existing data sets, and audit or monitor AI systems.

Four of our grantees will be exploring collective intelligence for better data:

Citizen Science Labs: A collective intelligence experiment on perceptions of emotion recognition technologies

Does taking part in a collective intelligence initiative change people’s minds about emotion recognition systems and the underlying social research?

Who is behind this experiment?
Dovetail Labs

What is the experiment?
This experiment will test whether participating in a citizen science project increases people's understanding of the potential biases and risks associated with emotion recognition (ER) systems, as well as of the social science research that underwrites such systems. The team will set up citizen science labs where participants will take part in different collective experiences and activities that will make them question the assumptions underlying ER systems. The citizen science labs will allow citizen scientists to generate novel, high-dimensional measurement data and rapid ethnographic data that interrogates the ‘common view’ of universal emotions.

Why is it relevant?
ER systems are a subset of facial recognition technology that automatically infers emotional states from facial expressions. ER is increasingly deployed in areas such as policing, psychiatric diagnosis, and education. ER systems are built on the assumption that human emotions are biologically fixed and universally expressed, despite recent research suggesting that cultural and individual variability in emotional expression and perception exists. As such, ER systems present multiple issues of potential bias as the underlying datasets lack cultural diversity and context, which is particularly dangerous in the face of their increased deployment. This experiment uses a collective intelligence approach to increase public literacy about the social science research that underwrites emotion recognition systems and to spark a more thoughtful public conversation about the potential societal impacts of current ER systems.

How might the findings help people better design collective intelligence?
The experiment will increase our understanding of the effectiveness of participatory processes in relation to collective learning and critical thinking. The findings will be relevant to anyone planning to use collective intelligence to approach complex issues and to amplify the voice of diverse communities in the development of ER.

Real-time cholera monitoring

Can enhanced monitoring through real-time data help reduce cholera outbreaks?

Who is behind this experiment?
Drones for Humanity (Kenya Flying Labs) (lead)
Kenya Red Cross Society (partner)

What is the experiment?
The experiment will test whether a public health surveillance system combining crowdsourced local knowledge, aerial imagery and machine learning can improve understanding and reduction of cholera outbreaks in Kenya.

Why is it relevant?
While more and more data is being generated continuously, information and insights are not evenly spread, and data gaps are increasing. In crisis situations such as health emergencies, reliable and up-to-date data is essential to ensure prevention measures and responses by authorities are effective and timely. Often the areas that are most at risk, like informal settlements in Africa or South Asia, lack this information, leading to the fast spread of diseases and tens of thousands of deaths per year. This experiment aims to address this data gap by combining meteorological and geographical data, crowdsourced local knowledge and aerial images taken by drones. The team will develop a predictive analytics tool based on this data with the aim of increasing the efficiency of interventions by authorities and reducing cholera cases as a result.

How might the findings help people better design collective intelligence?
The experiment will provide new insights into how collective intelligence can help to reduce data gaps and improve the quality of existing data. This is relevant for anyone relying on up-to-date information, such as public health authorities or humanitarian organisations.

The findings will also help us advance our understanding about how to best combine crowdsourced local knowledge with the capabilities of machines.

Smarter matchmaking for patient-led research

Can social matchmaking on a citizen science platform lower the barriers to patient-led research?

Who is behind this experiment?
Just One Giant Lab
Open Humans Foundation

What is the experiment?
This experiment will test whether efficient matchmaking between skills and needs among patients on a collective intelligence platform will increase engagement in patient-led research projects. Patients will receive targeted notifications about patient-led research projects via the Open Humans platform, including the required skill sets, location, and topic. This will explore how patient engagement preferences are driven by these different factors.
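
As a rough illustration of the kind of matchmaking involved (not the actual Open Humans implementation), the sketch below scores how well a patient's declared profile fits a project's needs across skills, topic, and location, and only triggers a notification above a threshold. The field names and weights are assumptions.

```python
# Illustrative sketch only: score a patient-project match and notify above a threshold.
def match_score(patient, project, weights=(0.5, 0.3, 0.2)):
    """patient/project: dicts with 'skills' (set), 'topics' (set) and 'location' (str)."""
    w_skills, w_topic, w_location = weights
    skills_needed = project["skills"]
    skill_overlap = len(patient["skills"] & skills_needed) / max(len(skills_needed), 1)
    topic_overlap = 1.0 if patient["topics"] & project["topics"] else 0.0
    location_match = 1.0 if patient["location"] == project["location"] else 0.0
    return w_skills * skill_overlap + w_topic * topic_overlap + w_location * location_match

patient = {"skills": {"survey design", "python"}, "topics": {"long covid"}, "location": "Berlin"}
project = {"skills": {"survey design", "statistics"}, "topics": {"long covid"}, "location": "remote"}
if match_score(patient, project) >= 0.5:   # 0.25 + 0.3 + 0.0 = 0.55
    print("send notification")
```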

Why is it relevant?
Patient-led research has huge potential in health research. Patients, especially with rare or chronic diseases, have unique knowledge about managing their disease and how to improve their quality of life. Online patient networks for sharing experience and knowledge exist, but not all knowledge and skills are openly declared, so more coordination among patients is required to tap into this tacit knowledge and to ensure patient-led research projects are successful.

How might the findings help people better design collective intelligence?
The findings of this experiment will help improve matching algorithms to increase engagement and self-organisation on collective intelligence platforms. This is relevant not just for patient communities, but also for citizen science and any collective intelligence approach that involves crowdsourcing or matching individuals to specific tasks or projects.

Crowdsourcing weather reports in remote Bolivian villages

Can the crowdsourcing of climate data by farmers create a more effective meteorological alert system to help Bolivian farmers better adapt to extreme weather events?

Who is behind this experiment?
Swisscontact (lead)
Banco de Desarrollo Productivo (partner)
Latin American Centre for Rural Development (partner)

What is the experiment?
Will combining official meteorological data and crowdsourced local weather reports lead to more accurate local weather forecasts? And will it encourage farmers in rural Bolivia to change their adaptation strategies to reduce crop loss? Farmers will use low-cost sensors to collect hyperlocal weather and climate data, which will allow them to better prepare for extreme temperatures. The experiment will also test whether active involvement in weather forecasting increases the likelihood of acting upon those predictions.
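
As a simple illustration of how official and crowdsourced readings might be blended (an assumption, not Swisscontact's actual system), the sketch below weights nearby sensor readings by their distance from a farmer's plot and mixes the result with the official forecast.

```python
# Illustrative sketch only: blend an official forecast with nearby sensor readings.
def blended_forecast(official_temp, sensor_readings, official_weight=0.5):
    """sensor_readings: list of (temperature_celsius, distance_km) tuples from nearby sensors."""
    if not sensor_readings:
        return official_temp
    # Inverse-distance weights: nearer sensors count for more.
    weights = [1.0 / (1.0 + d) for _, d in sensor_readings]
    local_temp = sum(w * t for w, (t, _) in zip(weights, sensor_readings)) / sum(weights)
    return official_weight * official_temp + (1 - official_weight) * local_temp

# Example: the official forecast says 4 °C, but two sensors near the plot read close to 0 °C.
print(blended_forecast(4.0, [(0.5, 1.0), (-0.5, 2.0)]))  # -> 2.05
```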

Why is it relevant?
Accurate meteorological information is critical for farmers globally, but people in poor or remote communities often don’t have access to information about their microclimatic conditions. This data gap means that farmers are not always able to adapt to extreme weather events such as drought or freezing, which can lead to crop loss and a worsening of their socio-economic situation.

How might the findings help people better design collective intelligence?
The experiment will provide new insights into how collective intelligence can help to reduce information gaps and improve the accuracy of data by combining novel data sources with existing, more traditional ones. The findings of this experiment are relevant for anyone relying on up-to-date or hyperlocal information, for example in disaster prevention, climate change adaptation, or emergency response.


Did you know we had a first round of 12 grantees, whom we supported between March 2019 and January 2020?

Find out what they learned through their experiments here.

Authors

Kathy Peach
Director of the Centre for Collective Intelligence Design

The Centre for Collective Intelligence Design explores how human and machine intelligence can be combined to develop innovative solutions to social challenges.

Eva Grobbink
Researcher, Centre for Collective Intelligence Design

Eva was a Researcher working in the Explorations team at the Centre for Collective Intelligence Design.