Reducing social bias in human groups
In the field of medical diagnostics, new online platforms connect patients to a network of doctors worldwide. These platforms hold great promise for opening up healthcare globally and tapping into the collective wisdom of medical professionals. But any group making collective decisions can be negatively influenced by social biases, such as participants' overconfidence or 'herding'. These tendencies can cause important information to be overlooked and reduce the accuracy of the group's decisions.
The Institute for Science and Technology at the Italian Research Council has experimented with multi-agent systems (MAS) as mediators of social information. On a platform, users try to reach a consensus on the best (i.e. objectively correct) choice among several given options. In different treatments, users either act alone, see the choices of others and how the various options are rated, or interact with the MAS. The MAS consists of autonomous software agents that collect information about how users on the platform interact with each other. Based on this information, the MAS adjusts the group's decision to ensure that a consensus is eventually reached.
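The article does not specify the MAS's aggregation rule, but the idea of mediating social information can be illustrated with a minimal sketch. The following hypothetical Python function aggregates user votes while downweighting votes cast after seeing others' choices, a simple heuristic against herding; the function name, the `herding_discount` parameter, and the weighting rule are all assumptions for illustration, not the experiment's actual method.

```python
from collections import defaultdict

def mediated_choice(votes, herding_discount=0.5):
    """Aggregate votes over options, downweighting votes cast after
    the user saw others' choices (a simple anti-herding heuristic).

    votes: list of (option, saw_others) pairs, where saw_others is True
    if the user had social information before choosing.
    """
    scores = defaultdict(float)
    for option, saw_others in votes:
        # Independent votes count fully; socially influenced votes less.
        weight = herding_discount if saw_others else 1.0
        scores[option] += weight
    # The mediated group decision is the option with the highest weighted score.
    return max(scores, key=scores.get)

# Three independent votes for 'A' versus four herded votes for 'B':
votes = [("A", False)] * 3 + [("B", True)] * 4
print(mediated_choice(votes))  # 'A' wins: 3.0 vs 4 * 0.5 = 2.0
```

In this toy setting the mediator overturns the raw majority because the majority formed under social influence, which is the kind of correction a bias-aware mediator could apply before feeding the adjusted decision back to the group.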
The experiment explores whether interactions mediated by a MAS reduce the effects of social biases and thereby increase decision accuracy. It is an example of humans and an AI system taking turns to improve decision-making through continuous feedback loops. If the experiment proves successful, the findings will inform the design of decision-support systems, such as assistive devices in care, social robots or autonomous vehicles.