Our collective intelligence grants programme, now in its second round, examines how we can best design machine-human co-operation to help solve social problems. With our event, Testing the frontier of collective intelligence, we wanted to give the first cohort of collective intelligence grantees a space to present their experiment results to an audience of practitioners, academics and the funder community.

Like almost everyone, we were forced by the current situation to move this long-planned event online. We decided to turn this challenge into an opportunity to try something new – a different type of event on Remesh (remesh.ai), a platform we had been planning to explore for a while. Remesh lets you post content, such as videos or text, and ask participants open-ended questions, but participants can't interact with one another directly. Unlike Zoom or Skype, it doesn't use live video or audio. At the end of the session, many of you wanted to know more about the experiments presented in the videos. Here are some of your questions, answered by our grantees.

Background: Unanimous AI, a technology company based in San Francisco, tested whether algorithms modelled on ‘swarm’ behaviour in bees and fish can enable groups with conflicting political views to find collectively acceptable solutions.

“The applications for Swarm AI are broad. Swarm AI amplifies the intelligence of groups, enabling them to reach answers more quickly and with higher degrees of accuracy and satisfaction. In addition, participants can analyse the group deliberation process and draw insights into how the decision was made (e.g. which types of participants supported which choices, when they changed their mind and what their next choice was). Swarm is being applied in market research to uncover trends and get rapid insights from target customers. Consultants use Swarm for similar purposes, but typically with groups of their clients' employees (in many cases their main interest is simply getting executives to reach any kind of consensus). Enterprise teams can use Swarm for their own group decision-making as well (e.g. which candidate should we hire; how do we prioritise these issues). We are also seeing increasing interest from academia in using Swarm with university classes and research programmes.”

David Baltaxe, Unanimous AI

@unanimousai
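Unanimous AI's platform is proprietary, so the sketch below is only a toy illustration of the general idea the quote describes, not the actual Swarm AI algorithm: agents start with conflicting preferences, everyone's "pull" on the options is aggregated in real time, and agents gradually shift toward the emerging consensus. All function and parameter names here are hypothetical.

```python
# Toy sketch of swarm-style convergence (NOT Unanimous AI's actual
# algorithm): each agent holds preference weights over the options and,
# round by round, nudges its weights toward the option currently
# attracting the most collective "pull".

def swarm_decision(preferences, flexibility=0.2, threshold=0.7, max_rounds=50):
    """preferences: list of dicts mapping option -> weight (each sums to 1)."""
    options = list(preferences[0])
    for round_no in range(1, max_rounds + 1):
        # Aggregate the current pull on each option across all agents.
        pull = {o: sum(p[o] for p in preferences) / len(preferences) for o in options}
        leader = max(pull, key=pull.get)
        if pull[leader] >= threshold:          # consensus reached
            return leader, round_no
        # Each agent shifts a fraction of its weight toward the leader.
        for p in preferences:
            for o in options:
                target = 1.0 if o == leader else 0.0
                p[o] += flexibility * (target - p[o])
    return None, max_rounds                    # no consensus in time

# Three agents with conflicting starting views.
agents = [
    {"plan A": 0.6, "plan B": 0.3, "plan C": 0.1},
    {"plan A": 0.2, "plan B": 0.7, "plan C": 0.1},
    {"plan A": 0.5, "plan B": 0.1, "plan C": 0.4},
]
print(swarm_decision(agents))   # e.g. ('plan A', 4)
```

The round-by-round trace is what makes the deliberation analysable: you can see which agents moved, when, and toward what.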

Background: ISTC-CNR, the Italian National Research Council’s Institute of Cognitive Sciences and Technologies, tested whether artificial agents can mitigate the bias of social influence in collective decision-making.

“Our experiment to test whether an AI can reduce social bias in group decision-making by mediating conversations was completely anonymous. Participants did not know the social identity of their peers, and they received information about others' choices only in aggregated form, so no one could recognise anyone else's individual identity. In spite of that, we observed biased decisions linked to the feedback people received from other participants.”

Vito Trianni, ISTC-CNR

@vitotrianni
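To make the "aggregated form" concrete, here is a minimal sketch under our own assumptions (the data and names are hypothetical, not the ISTC-CNR experiment's actual interface): individual choices go in, and only the group-level distribution comes back out.

```python
from collections import Counter

# Minimal sketch of identity-free feedback: individual choices go in,
# only an aggregate distribution comes back, so no participant can be
# linked to a particular answer.

def aggregated_feedback(choices):
    """choices: dict mapping participant id -> chosen option.
    Returns only the share of the group behind each option."""
    counts = Counter(choices.values())      # drops who chose what
    total = sum(counts.values())
    return {option: n / total for option, n in counts.items()}

round_1 = {"p1": "diagnosis A", "p2": "diagnosis B",
           "p3": "diagnosis A", "p4": "diagnosis A"}
print(aggregated_feedback(round_1))   # {'diagnosis A': 0.75, 'diagnosis B': 0.25}
```

The point of the experiment's finding is that even feedback this sparse – bare proportions, no identities – was enough to bias people's decisions.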

Background: Swansea University tested whether using machine learning to classify and organise crowdsourced footage of airstrikes increases the uptake of such open-source digital evidence by legal practitioners in court.

“In August 2019, our project partners GLAN Law, in partnership with Mwatana, a Yemeni human rights organisation, submitted evidence to the UK government detailing airstrikes in Yemen, based on a combination of open-source and eyewitness evidence. In targeting civilians, the airstrikes appear to violate international humanitarian law, and we argued that arms sales should be halted as a result. What happens next really depends on how the government responds; clearly other issues have taken priority at the moment. But if no satisfactory action is taken, we will pursue this through the courts.”

Yvonne McDermott Rees, Swansea University

@ProfYvo


“Great question! Deep fakes are a huge issue in this field. Powerful perpetrators have harnessed the ‘fake news’ narrative to dismiss evidence of human rights violations (a good example is the Cameroonian Minister for Education’s initial response). The existence of deep fakes allows them to challenge even what seems like the most damning evidence, and to sow the seed in people’s minds that, really, you cannot trust anything you see online. Because of deep fakes, and the massive reputational risk of relying on something that later turns out to be doctored, human rights investigators have to be careful, undertaking rigorous verification and corroboration work. In some ways, they are in a better position than journalists, in that the immediacy factor is missing – they can take a good deal more time to verify and check their sources. But for many investigators that we spoke to in our research, the costs of doing this (time, resources, training) outweigh the benefits of having this evidence in the first place. There are some tools out there to help spot deep fakes, but as deepfake technology develops, the tricks these tools use to spot them (e.g. reverse image searching a portion of the scene) will become outdated and less useful.”

Yvonne McDermott Rees, Swansea University

@ProfYvo
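As a rough illustration of the kind of check such verification work relies on – a generic technique, not one of the specific tools the grantees used – the sketch below compares a frame from footage under review against a frame from an already-verified source using a simple perceptual "average hash". The file names are hypothetical, and it assumes the Pillow imaging library is installed.

```python
from PIL import Image  # pip install Pillow

# Rough illustration of one verification trick: a perceptual "average
# hash" asks whether a frame closely matches a known source image even
# after recompression or resizing. A generic technique, not a specific
# deepfake-detection tool.

def average_hash(path, size=8):
    """Downscale to size x size greyscale, threshold at the mean,
    and return the resulting bits as an integer fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes (0 = near-identical)."""
    return bin(a ^ b).count("1")

# Hypothetical file names: a frame from footage under review versus a
# frame from an already-verified source.
distance = hamming(average_hash("claimed_frame.png"),
                   average_hash("verified_frame.png"))
print("likely same scene" if distance <= 5 else "needs closer inspection")
```

As the quote notes, checks of this kind are exactly what deepfake generators will eventually learn to defeat, which is why corroboration across independent sources matters as much as any single tool.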

Background: Fast Familiar investigated the importance of empathy and metacognition skills for group decision-making, and will test how such social skills can be fostered through immersive storytelling.

“It’s clear that our approach has potential and that young people really enjoy playing it, so we’re very keen for it to have a future life. Part of figuring out the future delivery model is working out how to ‘scaffold’ the experiment with activities that young people can do before and after, to ensure that they’re getting the most out of it and to deal with any questions that arise for them. If I Were You deals with some pretty tough subject matter and we want to be responsible about that aftercare. So are there workshops in schools before and/or after? How could this tie in with the already-packed curriculum in a way that doesn’t create extra work for already overworked teachers? Working in schools was probably the most challenging aspect of the project, simply because, sadly, there isn’t much time for kids to do activities that aren’t directly focused on an exam. Ideally the next phase would be to work closely with three schools to co-design and iterate a programme that sits around the experience – and to find a language to talk about it that would help other schools see the benefits of what is a distinctive offer.”

Rachel Briscoe, Fast Familiar (formerly fanSHEN)

@fastfamiliar

Background: The University of Edinburgh compared different types of intelligent recommendation algorithms on a citizen science platform to make it easier for citizen scientists to discover the projects that best match their interests and capabilities.

“In our experiment, we tested personalised recommendations made to users based on their past activities and engagement with citizen science projects. Similarly, personalised care-planning recommendations could be supplied to patients and practitioners based on patients’ past care planning and their specific requests.”

Naama Dayan, University of Edinburgh
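To make the idea concrete, here is a minimal user-based collaborative-filtering sketch – an assumption about the general approach, not the Edinburgh team's actual algorithm. The participation data and project names are hypothetical: it recommends the projects that similar users engaged with but this user has not yet tried.

```python
import math

# Minimal user-based collaborative filtering sketch (an assumption about
# the general approach, not the Edinburgh team's actual algorithm):
# recommend projects that similar users joined but this user has not.

def cosine(a, b):
    """Cosine similarity between two sets of joined projects."""
    overlap = len(a & b)
    return overlap / math.sqrt(len(a) * len(b)) if a and b else 0.0

def recommend(user, history, top_n=2):
    """history: dict mapping user -> set of projects they engaged with."""
    scores = {}
    for other, projects in history.items():
        if other == user:
            continue
        sim = cosine(history[user], projects)
        for project in projects - history[user]:   # only unseen projects
            scores[project] = scores.get(project, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical citizen science participation data.
history = {
    "ana":  {"bird count", "air quality", "galaxy tagging"},
    "ben":  {"bird count", "air quality"},
    "cora": {"galaxy tagging", "river watch"},
}
print(recommend("ben", history))   # projects ben's neighbours joined
```

The same pattern carries over to the care-planning analogy in the quote: swap projects for past care plans, and neighbours for patients with similar histories.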

Upcoming report

To learn more about the results of the first round of collective intelligence experiments and how they can be useful for you, look out for our upcoming experiment report, which will be published on the Nesta website.