Hybrid intelligent solutions for collective risk situations

www.nesta.org.uk/feature/collective-intelligence-grants/hybrid-intelligent-solutions-collective-risk-situations/

Who was behind this experiment?

What was the experiment?

The experiment tested whether using AI in the form of artificial autonomous agents improved cooperation and decision-making in collective risk situations (such as climate change or a global pandemic). Autonomous agents are complex pieces of software that can act without direct human input. They can direct and change activity to achieve specific goals, as well as perform tasks on behalf of a person or host. The experiment set out to understand how people would respond to the presence of such autonomous agents and whether their presence would increase the collective success of the group.
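The kind of setting described above can be illustrated with a toy simulation. The sketch below is a hypothetical simplification, not the study's actual protocol: players delegate to agents that contribute to a common pot over several rounds, the group must reach a threshold to avert disaster, and an agent is programmed either for the collective or for individual gain. All parameter names and values here are illustrative assumptions.

```python
import random

def run_collective_risk_game(n_players=6, endowment=40, rounds=10,
                             risk=0.9, agent_policy="collective", seed=0):
    """Toy collective-risk dilemma with delegated agents (illustrative only).

    If total contributions reach a threshold, everyone keeps what remains
    of their endowment; otherwise, with probability `risk`, all players
    lose everything.
    """
    rng = random.Random(seed)
    threshold = n_players * endowment / 2   # assumed target: half of total wealth
    balances = [endowment] * n_players
    pot = 0.0
    for _ in range(rounds):
        for i in range(n_players):
            if agent_policy == "collective":
                # agent pays its fair share of the threshold, spread over rounds
                contribution = threshold / (n_players * rounds)
            else:
                # "individual" agent maximises personal short-term payoff
                contribution = 0.0
            contribution = min(contribution, balances[i])
            balances[i] -= contribution
            pot += contribution
    if pot >= threshold:
        return balances                     # target met: everyone keeps savings
    if rng.random() < risk:
        return [0.0] * n_players            # disaster: all endowments lost
    return balances

collective_outcome = run_collective_risk_game(agent_policy="collective")
individual_outcome = run_collective_risk_game(agent_policy="individual", risk=1.0)
```

With these illustrative settings, delegating to collective-minded agents leaves every player with savings, while uniformly self-interested agents miss the threshold and (when the loss is certain) wipe out the group, mirroring why agent choice mattered in the experiment.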

What did they find?

The experiment found that groups were more successful when people delegated responsibility to an artificial autonomous agent to make decisions on their behalf. This was because people picked autonomous agents programmed to act in the interests of the collective, rather than those programmed to maximise benefit to the individual. Could it be that delegating to an AI encouraged people to think more long-term, or reduced their fear of being cheated by other participants?

Why is it relevant?

Many of the challenges we face require collective action. But people often fail to work together for the common good. Collective action can be made difficult by coordination problems; by the costs that participation imposes on individuals, who may, for example, have to give up power or change their lifestyle; and by concerns that others will free-ride on their efforts. All of these factors make successful collaboration hard to achieve and contribute to the insufficient progress we see on many shared global issues.

Follow the AI Lab on Twitter to learn more about their work: @aibrussels