Can AI mediate collective decision-making to remove people’s biases?
The experiment will test whether artificial intelligence can help groups of people make smarter decisions together, by introducing multiple 'intelligent agents' into those groups. Will these agents be able to minimise the influence of social biases (for example, when groups 'jump on the bandwagon')? And will these hybrid human and machine 'multi-agent systems' (MAS) lead to more accurate collective decision-making?
From healthcare to democracy, platforms are being created to harness the collective intelligence of crowds. In medical diagnostics, for example, online platforms have emerged that connect patients to a network of doctors worldwide. This holds great promise for opening up healthcare globally. But any group making collective decisions can be negatively influenced by social biases. For example, participants might misjudge how competent another group member is, or be overconfident in their own opinions. This can lead to important information being overlooked and less accurate decisions being made.
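As a rough illustration of why such biases matter (this is not the project's actual model), the sketch below simulates a simple 'bandwagon' effect: each voter either judges independently or, with some probability, copies whichever option is currently leading. The parameters (`n_voters`, `p_correct`, `conform`) are illustrative assumptions, not figures from the experiment.

```python
import random

def simulate(n_voters=25, p_correct=0.65, conform=0.0, n_trials=2000, seed=1):
    """Estimate how often a majority vote picks the correct option when
    each voter, with probability `conform`, copies the current running
    majority (a simple bandwagon model) instead of judging independently.
    All parameter values here are illustrative assumptions."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        votes = []  # True = correct option, False = incorrect option
        for _ in range(n_voters):
            if votes and rng.random() < conform:
                # Bandwagon: copy whichever option currently leads
                # (ties resolve to True, slightly flattering the bandwagon).
                vote = sum(votes) * 2 >= len(votes)
            else:
                # Independent judgement: correct with probability p_correct.
                vote = rng.random() < p_correct
            votes.append(vote)
        wins += sum(votes) * 2 > len(votes)  # strict majority correct
    return wins / n_trials

independent = simulate(conform=0.0)
bandwagon = simulate(conform=0.6)
print(f"independent: {independent:.2f}, bandwagon: {bandwagon:.2f}")
```

The intuition is the classic jury-theorem one: when errors are independent, they tend to cancel out and the majority is far more accurate than any individual; when voters conform, errors become correlated and that advantage erodes. Counteracting exactly this kind of correlation is what a mediating artificial agent would aim to do.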
The experiment will generate new insights into how individuals update information and make decisions under the influence of other people and artificial agents. The findings will be relevant across domains for the design of collective intelligence platforms and other technologies that might support our decision-making, such as assistive devices in care, social robots, or autonomous vehicles.
Follow the project lead on Twitter to stay up to date on the experiment: @vitotrianni