A framework for understanding how AI and CI interact
Despite the rapidly growing number of projects introducing AI into CI initiatives, there are currently no frameworks for understanding the different forms of interaction between crowds and machines. This makes it difficult to understand the field as a whole and to support its future development.
Based on an analysis of case studies and emerging academic research, we have identified at least four ways we can begin to understand this relationship. Some projects are more advanced and include more than one type of interaction. Although these categories span different levels of maturity and will undoubtedly evolve as the field continues to grow, they provide a starting point for those interested in exploring the current AI-enabled CI landscape and future opportunities.
Machines working on data generated by people and sensors
In the first type of interaction, distributed networks of humans and/or sensors produce data that is used as continuous real-time input for machine-learning algorithms. Data is produced by individuals (or sensors) independently of one another, but the AI works on the aggregated collective data. In these cases, the human contributors interact with the AI passively.
A typical data source is user-generated content online, such as videos, photos and text on social media platforms. Dataminr is one example that uses passive user‑generated content scraped from the internet to monitor for unexpected events of public interest – such as environmental disasters or public health emergencies – in order to produce early warning alerts for officials who need to plan responses in real time.
Other examples use data collected from remote or on-the-ground sensors, which can vary from satellite data to geolocation data from mobile phones or specialised hardware that measures atmospheric conditions. The latter is used by OneSoil, a non-profit platform that provides real-time insights to help the agricultural community make decisions and plan for the future. OneSoil uses AI to analyse images from European Sentinel satellites and on-the-ground sensors in order to map field boundaries and estimate crop health on farming land.
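A minimal sketch of this first pattern, assuming hypothetical data: independent contributions (posts or sensor readings) are pooled per time window, and a simple statistical rule flags unexpected spikes, loosely in the spirit of Dataminr-style event monitoring. The data, function names and threshold are all invented for illustration.

```python
# Illustrative sketch: an AI working on aggregated, passively produced data.
# Individual reports are pooled per hour; a spike rule flags unusual volume.
from statistics import mean, stdev

def aggregate(reports):
    """Count reports per hour; contributors act independently of one another."""
    counts = {}
    for hour, _text in reports:
        counts[hour] = counts.get(hour, 0) + 1
    return [counts.get(h, 0) for h in range(max(counts) + 1)]

def detect_spikes(series, threshold=1.0):
    """Flag hours where volume exceeds mean + threshold * standard deviation."""
    mu, sigma = mean(series), stdev(series)
    return [h for h, v in enumerate(series) if v > mu + threshold * sigma]

reports = [(0, "flooding"), (1, "flooding"), (1, "road closed"),
           (2, "flooding"), (3, "evacuations"), (3, "flooding"),
           (3, "road closed"), (3, "shelter open")]
series = aggregate(reports)                      # [1, 2, 1, 4]
print(detect_spikes(series, threshold=1.0))      # [3] — hour 3 is anomalous
```

The key property of this interaction type is visible in the code: no single contributor does anything unusual; the signal only emerges from the aggregate.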
People and machines taking turns to solve problems together
In this form of AI & CI interaction, a distributed network of people actively performs microtasks, while AI is used as an alternative source of intelligence. Participants typically carry out tasks independently of one another, and their contributions are aggregated to produce a collective output, such as a ‘wisdom of crowds’ prediction. In these situations, the AI and distributed crowds work on different parts of the problem. One example is the Early Warning Project, which uses both crowd forecasting and statistical modelling to generate predictions about the risk of mass atrocities worldwide. In combination, the methods offer complementary insights and counterbalance each other’s weaknesses.
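The hybrid forecasting idea can be sketched in a few lines. This is not the Early Warning Project's actual method; the weights and probabilities below are invented, and a simple weighted blend stands in for whatever combination rule a real system would use.

```python
# Illustrative sketch: blending a 'wisdom of crowds' forecast with a
# statistical model's estimate. Weights and numbers are hypothetical.
from statistics import mean

def crowd_forecast(individual_probs):
    """Aggregate independent human judgements into one probability."""
    return mean(individual_probs)

def combined_risk(crowd_probs, model_prob, crowd_weight=0.5):
    """Weighted blend: each method counterbalances the other's errors."""
    return crowd_weight * crowd_forecast(crowd_probs) + (1 - crowd_weight) * model_prob

crowd = [0.10, 0.25, 0.15, 0.30]   # independent human forecasts
model = 0.40                       # statistical-model estimate
print(round(combined_risk(crowd, model), 3))   # 0.3
```

Note that the crowd and the model never interact directly: each works on the problem separately, and only their outputs are combined.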
Similarly, the Waze app, used by over 115 million drivers all over the world, draws on complementary aspects of human and machine intelligence. It uses AI to learn the day-to-day patterns of its users and map out potential travel routes, which is supplemented with real-time crowdsourced information by the app’s users. The hyperlocal and dynamically changing information about road construction projects, traffic conditions and even fuel prices supplied by users is combined with the AI-generated directions to suggest the optimised final route.
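The division of labour described above can be sketched as a routing problem: machine-estimated travel times form the base costs, and crowdsourced reports adjust them before the route is chosen. The road network, report format and numbers are invented for illustration; this is the spirit of the approach, not Waze's implementation.

```python
# Illustrative sketch: shortest route over AI-estimated edge times,
# adjusted by real-time crowdsourced delay reports. All data hypothetical.
import heapq

def shortest_route(graph, reports, start, goal):
    """Dijkstra's algorithm over base travel times plus reported delays."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, base_time in graph.get(node, {}).items():
            delay = reports.get((node, nxt), 0)   # user-reported delay, minutes
            heapq.heappush(queue, (cost + base_time + delay, nxt, path + [nxt]))
    return None

graph = {"A": {"B": 5, "C": 7}, "B": {"D": 6}, "C": {"D": 3}, "D": {}}
reports = {("C", "D"): 10}   # users report congestion on the C→D road
print(shortest_route(graph, reports, "A", "D"))   # (11, ['A', 'B', 'D'])
```

Without the crowd's report, the machine would pick A→C→D; the human signal reroutes it, which is precisely the complementarity the paragraph describes.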
Another example is the iNaturalist app, which supports an online social network of nature enthusiasts to log sightings of different species. It uses a combination of computer vision, community feedback and individual expertise to classify observations and generate an open dataset on global biodiversity that is used in scientific research and conservation.
People and machines solving tasks together at the same time
Instead of taking turns, this form of collaboration happens in real time. AI forms part of the group that is working together on the same task. It places AI and people into a highly interdependent relationship and is reliant on trust and social acceptance of AI. Most existing work on this form has taken place in lab-based experiments or through gaming, such as the Project Malmo virtual environment, which has been built on top of the game Minecraft. The Malmo Collaborative AI Challenge was a competition where artificial agents played a collaborative game with other agents and humans to advance research into co‑operation between people and AI.
Swarm AI, an online platform developed by Unanimous AI, is a rare real-world example, where groups of people and AI agents work together as part of a closed-loop system to make consensus decisions and predictions in real time, on issues ranging from medical diagnosis to political preferences.
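A closed-loop consensus process of this general shape can be sketched as repeated mutual adjustment: each participant nudges their estimate toward the group's current position until the group converges. The update rule below is purely illustrative and is not the Swarm AI algorithm.

```python
# Illustrative sketch of a closed-loop swarm converging on a consensus
# estimate. The pull-toward-the-mean update rule is a hypothetical stand-in.
from statistics import mean

def swarm_consensus(estimates, pull=0.5, rounds=10):
    """Each round, every estimate moves part-way toward the group mean."""
    values = list(estimates)
    for _ in range(rounds):
        centre = mean(values)
        values = [v + pull * (centre - v) for v in values]
    return mean(values)

print(round(swarm_consensus([2, 4, 9, 5]), 2))   # 5.0
```

Unlike the turn-taking examples above, every participant's next move depends on everyone else's current position, which is what makes this form of interaction highly interdependent.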
Autodesk’s generative design software for collaborative design is another example. In this case, the AI gives designers and other users iterative suggestions for possible permutations of a solution based on the parameters that it is given. Generative design has been credited with extending human creativity by moving beyond the boundaries of the solution spaces that designers typically explore. To date, generative design has mostly been used in industrial manufacturing and product design, but it is easy to imagine the technology transforming public spaces and urban planning based on real-time interaction with a larger community of individuals. A demonstration of the viability of this approach is the Autodesk office in Toronto, which was created using generative design based on parameters specified by 300 employees, as well as other factors.
Using machines to connect knowledge and tasks in groups
AI can also play a vital role in enabling more efficient and streamlined CI projects by helping people better navigate different kinds of information and tasks. In this type of interaction, AI is used for back-end functionality to enhance the individual capabilities of users to perform tasks or improve their experience. We see this type of AI contribution as ‘greasing the wheels’ of a CI process.
It can be achieved in many different ways, such as through better matching of individuals in a community who have common interests. For example, the Wefarm app is a peer-to-peer network of farmers who use it to ask others in the community for advice about issues they encounter. It uses AI to analyse the requests posted by farmers and match them to the members best qualified to answer. A different version of this interaction is offered by the Syrian Archive, which crowdsources footage of the conflict in Syria and is used by activists and non-profit organisations to gather evidence of human rights violations; it has implemented AI to make searching through this rich material easier. Here, the AI enables better matching between people and information.
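The matching idea can be illustrated with a toy similarity measure: score the overlap between a question's words and each member's expertise, and route the question to the best match. Real systems such as Wefarm use far richer language models; the profiles, names and Jaccard measure here are all invented for illustration.

```python
# Illustrative sketch: routing a question to the community member whose
# expertise best overlaps it, using simple word-set (Jaccard) similarity.

def jaccard(a, b):
    """Similarity between two sets of words, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def best_match(question, profiles):
    """Return the member whose expertise keywords best match the question."""
    q = set(question.lower().split())
    return max(profiles, key=lambda m: jaccard(q, profiles[m]))

profiles = {
    "amina": {"maize", "pests", "soil"},
    "kofi":  {"coffee", "pricing", "market"},
    "lin":   {"irrigation", "soil", "drought"},
}
print(best_match("my maize has pests", profiles))   # amina
```

The same pattern serves both cases in the paragraph: matching people to people (who should answer?) and people to information (which footage is relevant?).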
Other examples are focused on better matching tasks to individual users. We see this in the Gravity Spy project on the Zooniverse platform, which tailors the training process for each new user on the project, resulting in better performance and an improved experience for the citizen scientist. A slightly different approach has been tried by the Space Warps project, also hosted on the Zooniverse platform, which has started experimenting with intelligent task assignment for participants to optimise co-ordination between different skill levels.
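Intelligent task assignment of this kind can be sketched as matching task difficulty to participant skill. The greedy rule, skill scores and task names below are hypothetical, invented to illustrate the principle rather than reproduce any Zooniverse project's method.

```python
# Illustrative sketch: assigning each volunteer the hardest task they can
# handle, so difficult items reach experts and newcomers are not overwhelmed.

def assign_tasks(volunteers, tasks):
    """Greedily match volunteers (by skill) to tasks (by difficulty)."""
    assignments = {}
    remaining = sorted(tasks.items(), key=lambda t: -t[1])   # hardest first
    for name, skill in sorted(volunteers.items(), key=lambda v: -v[1]):
        for i, (task, difficulty) in enumerate(remaining):
            if difficulty <= skill:
                assignments[name] = task
                remaining.pop(i)
                break
    return assignments

volunteers = {"new_user": 0.3, "expert": 0.9, "regular": 0.6}
tasks = {"glitch_noise": 0.8, "blip": 0.5, "easy_scan": 0.2}
print(assign_tasks(volunteers, tasks))
```

Here the AI never classifies anything itself; it simply co-ordinates the crowd so that human effort is spent where it is most useful.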
We found an initial longlist of over 150 CI projects using AI. Our final analysis used approximately 50 of these, which captured a range of CI methods across different sectors.