The problem

Biases in the world, such as those based on race or gender, find their way into technology because of the biases of the people who design and program it.

A prominent example of this issue is conversational assistants, such as Amazon's Alexa, Apple's Siri, or Google's Assistant, which are predominantly modelled as young, submissive women. According to UNESCO's recent report “The Rise of Gendered AI and its Troubling Repercussions”, part of I’d Blush if I Could: Closing Gender Divides in Digital Skills Through Education, consumer technologies produced by male-dominated teams and companies often reflect troubling gender biases. This carries the risk of reinforcing gender stereotypes and normalising verbal assault and gender violence in human-to-human interactions. The report’s authors highlight the following: “The subservience of digital voice assistants becomes especially concerning when these machines – anthropomorphised as female by technology companies – give deflecting, lacklustre or apologetic responses to verbal sexual harassment.”

An AI solution

The Interaction Lab at Heriot-Watt University aims to address gender stereotypes in smart assistants by designing, building, and testing new personas and adapting their responses to biased behaviour. For example, research shows that current smart assistants display a tolerant attitude towards requests for sexual favours from male users (“a digitally encrypted ‘boys will be boys’ attitude”) and a playful, at times positive, response to overt verbal abuse. The project will focus on detecting abusive speech from users towards smart assistants and introducing mitigation strategies such as robust negative responses and labelling such speech as inappropriate. While current smart assistants “reinforce stereotypes of unassertive, subservient women in service positions”, the Heriot-Watt team will also embed measures that discourage and prevent abuse into the personality design of conversational assistants.
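To give a rough sense of what such a detect-and-mitigate loop can look like, the sketch below shows a minimal, keyword-based abuse detector that routes flagged utterances to a firm, non-deflecting response. It is an illustration only, not the Interaction Lab's system: the marker list, labels, and responses are hypothetical placeholders, and a real deployment would rely on a trained Natural Language Processing classifier and carefully designed response strategies.

```python
# Illustrative sketch only: a minimal abuse-detection and response-mitigation
# loop, NOT the Interaction Lab's system. The marker list, labels, and
# responses are hypothetical placeholders.

from dataclasses import dataclass

# Hypothetical examples of abusive phrasings the detector should flag.
ABUSIVE_MARKERS = {"stupid", "shut up", "worthless"}


@dataclass
class Judgement:
    label: str      # "abusive" or "acceptable"
    response: str   # what the assistant says back


def classify(utterance: str) -> str:
    """Flag an utterance as abusive if it contains a marker phrase."""
    text = utterance.lower()
    return "abusive" if any(marker in text for marker in ABUSIVE_MARKERS) else "acceptable"


def respond(utterance: str) -> Judgement:
    """Return a robust negative response to abuse instead of a playful deflection."""
    label = classify(utterance)
    if label == "abusive":
        # Mitigation strategy: name the behaviour as inappropriate and set a boundary.
        return Judgement(label, "That language is not appropriate. Please rephrase your request.")
    return Judgement(label, "Happy to help with that.")


if __name__ == "__main__":
    for line in ["You're stupid", "What's the weather tomorrow?"]:
        judgement = respond(line)
        print(f"{line!r} -> {judgement.label}: {judgement.response}")
```

The design choice worth noting is that the assistant's reply for flagged input names the behaviour and sets a boundary, rather than giving the deflecting or apologetic answers the UNESCO report criticises.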

As part of this project, the team will also conduct perception studies, which will be shared with academia and industry and used to educate a wider audience. The AI for Good funding will help the Interaction Lab engage the public in the design of conversational AI, educate and raise awareness about voice interfaces, and work with decision makers to ensure oversight. The team will also develop and provide easy-to-use AI tools, based on Natural Language Processing, for the design of assistants, along with guidelines for system development. Data collection is under way, with mitigation strategies to test, and one bot is currently deployed on various online platforms.

This project is led by Verena Rieser, Professor of Computer Science at Heriot-Watt University, with Amanda Curry, a PhD student studying Ethical Social Voice Assistants. The project is partnering with SpeechGraphics, Cereproc, Alana.ai, Equate Scotland and Ethical Intelligence, and has received funding from Nesta in Scotland's AI for Good programme.