How will the presence of ‘artificial agents’ affect group behaviour when people are confronted with collective risk problems, such as climate change?
The experiment aims to find out how people behave in collective risk situations (such as climate change, antibiotic resistance or space debris) when artificial agents are introduced into a group. Artificial agents are software systems capable of acting autonomously: they can direct and adapt their activity to achieve specific goals, and they can perform tasks on behalf of a person or host.
The experiment will explore how the presence of these artificial agents influences the success and behaviour of groups in collective risk situations. It will also examine what happens when people can delegate their decisions to agents: what type of agent will they select, will the agents be trusted to act on their behalf, and will the collective be better off?
Many of the challenges we face, from inequality to biodiversity loss and air pollution, require collective action. But people often fail to work together for the common good. Collective action can be made difficult by coordination problems; by the costs to individuals of participating, who may, for example, have to give up power or change their lifestyle; and by concerns that others will free-ride on their efforts. All of these factors make successful collaboration hard to achieve, and this contributes to the insufficient progress we see on many shared global issues.
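Collective risk situations of this kind are commonly modelled in behavioural experiments as a threshold public goods game: each player can contribute towards a shared target, and if the group misses the target, everyone risks losing what they have kept. The sketch below is a minimal, illustrative simulation of that dynamic, not the experiment's actual design; all parameter values (endowment, rounds, target, risk probability) are assumptions chosen for clarity.

```python
import random

def collective_risk_game(strategies, endowment=40, rounds=10,
                         contribution=2, risk=0.9, seed=0):
    """Simulate one group in a simple threshold collective-risk game.

    Each player starts with `endowment`. Cooperators (True) pay
    `contribution` per round into a common pot; free-riders (False)
    pay nothing. If the pot misses the target (half the group's total
    endowment), the whole group loses its remaining money with
    probability `risk`. All parameters are illustrative.
    """
    rng = random.Random(seed)
    payoffs = {i: endowment for i in range(len(strategies))}
    pot = 0
    for _ in range(rounds):
        for i, cooperates in enumerate(strategies):
            if cooperates:
                payoffs[i] -= contribution
                pot += contribution
    target = len(strategies) * endowment / 2
    if pot < target and rng.random() < risk:
        payoffs = {i: 0 for i in payoffs}  # collective loss: disaster strikes
    return payoffs

# A fully cooperative group of six reaches the target and each member
# keeps half their endowment; a group of free-riders usually loses all.
print(collective_risk_game([True] * 6))
print(collective_risk_game([False] * 6))
```

Even this toy version shows the tension described above: a lone free-rider in an otherwise successful group ends up richer than the cooperators, which is precisely why coordination is fragile.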
This experiment will help us understand whether artificial agents can improve coordination and fairness among group participants in collective risk scenarios, and thereby increase the group’s collective intelligence in solving complex problems. The findings will be relevant for deliberative digital democracy platforms, social movements, policymakers, international agencies, and designers of hybrid human-machine systems.
Follow the AI Lab on Twitter to learn more about their work: @aibrussels