AI is learning to read your inner mental world in ways that were never possible before. This could improve the detection and treatment of mental health problems, but the potential for abuse is already a concern.

While concerns about privacy are nothing new in the digital age, so far most have been extensions of familiar, even ancient, forms of surveillance. Eight thousand years before we all conveniently carried location-tracking phones in our pockets, ancient empires took regular censuses to track the movement and growth of populations. In the 20th century, authoritarian regimes often hired people to simply follow suspects around (by the time the Berlin Wall fell in 1989, the Stasi employed around two per cent of the entire East German population). Digital technologies made these familiar privacy invasions much cheaper and faster, allowing insights that previous generations of spies could only dream of.

Now a new dimension of surveillance is becoming mainstream. Artificial intelligences are being trained to uncover aspects of our inner lives - the mental and emotional experiences that were previously kept private within our own minds. Those building and deploying these systems hope to infer our thoughts, feelings and levels of stress, to tell whether we are lying or daydreaming, and, most significantly, to predict our mental health outcomes.

Some of these tools have been available, though costly, for a while. People developing adverts and movie trailers have for years used fMRI, EEG, galvanic skin response, eye tracking and other biometric approaches to monitor the emotional reactions of test audiences. But these methods are unwieldy and expensive. Recently, companies have started to use AI to monitor test audiences' emotions in much more detail through a simple webcam.
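To make the mechanism concrete, here is a minimal sketch of how such a webcam pipeline might be structured: frames are captured, faces are located, and each face crop is passed to an expression classifier. The score_expression function below is a placeholder standing in for a trained model (none of the commercial systems described here publish their code), so the numbers it returns are meaningless; the point is the shape of the pipeline.

```python
# Sketch of a webcam emotion-monitoring pipeline (illustrative only).
# Assumes opencv-python and numpy; score_expression() is a stub standing
# in for a proprietary trained classifier.
import cv2
import numpy as np

FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
EMOTIONS = ["neutral", "happy", "sad", "surprised", "angry"]

def score_expression(face_crop: np.ndarray) -> dict:
    """Placeholder: a real system would run a trained model here."""
    scores = np.random.dirichlet(np.ones(len(EMOTIONS)))  # dummy values
    return dict(zip(EMOTIONS, scores.round(3)))

def monitor(camera_index: int = 0, max_frames: int = 100) -> None:
    cap = cv2.VideoCapture(camera_index)
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = FACE_DETECTOR.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            print(score_expression(frame[y:y + h, x:x + w]))
    cap.release()

if __name__ == "__main__":
    monitor()
```

Everything a commercial vendor adds - the trained classifier, calibration, tracking across frames - sits inside that one placeholder function; the surrounding plumbing is no more exotic than this.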

As these technologies become more accurate, more affordable and easier to use, they are likely to be embedded in more aspects of daily life - at work, and even in public. They plug into the cameras already common in our homes, offices and streets.

How artificial intelligence reads your face

Artificial intelligences have been able to differentiate fake from genuine smiles better than humans since 2012. AI lie detectors learned to outperform the best human interrogators in 2015. The latest technology, emerging in 2017, can not only tell a real smile from a fake one, but also distinguish the fake smiles of people with mild depression from those of people with suicidal ideation. Even if you carefully keep your expression neutral, new tools can analyse blood flow under the skin of your face to capture your heart rate and blood pressure, giving a rich picture of emotions, stress and fatigue. All of these systems work on the video feeds of cheap, off-the-shelf webcams, even those small enough to be disguised as the head of a screw.
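The blood-flow technique mentioned above is known in the research literature as remote photoplethysmography (rPPG): the green channel of ordinary video brightens and darkens fractionally with each heartbeat. The sketch below shows the simplest version of the idea - average the green channel over a face region, then look for a dominant frequency in the plausible heart-rate band. It assumes a video file at a hypothetical path and a reasonably still, well-lit face; real systems are far more robust.

```python
# Minimal remote-photoplethysmography (rPPG) sketch: estimate heart rate
# from subtle colour changes in a face video. Illustrative only; assumes
# opencv-python and numpy, and a hypothetical input file face_video.mp4.
import cv2
import numpy as np

def estimate_heart_rate(video_path: str = "face_video.mp4") -> float:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    green_means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        roi = frame[y:y + h, x:x + w]
        green_means.append(roi[:, :, 1].mean())  # green channel carries the pulse signal
    cap.release()

    signal = np.asarray(green_means) - np.mean(green_means)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs > 0.7) & (freqs < 4.0)   # roughly 42-240 beats per minute
    peak_freq = freqs[band][np.argmax(power[band])]
    return float(peak_freq * 60.0)         # convert Hz to beats per minute

if __name__ == "__main__":
    print(f"Estimated heart rate: {estimate_heart_rate():.0f} bpm")
```

That so little code can pull a physiological signal out of an ordinary video feed is precisely why the technology scales so easily.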

These tools have enormous potential for social good, but also for societal harm. Systems that can predict suicidal thoughts or psychotic episodes could help the most vulnerable members of society access timely help. But the same tools could also lead to speculative diagnoses made in unsuitable circumstances: judgements made by amateurs using open tools, deliberate discrimination, or the targeting of vulnerable people with inappropriate advertising and messaging.

Whatever direction the technology takes, it is likely to accelerate fast. New facial analysis and emotional surveillance technologies could easily integrate with existing surveillance infrastructure and industries hungry for emotional data. Already, webcams are embedded in many of the digital advertising screens found on city streets and train stations. They take pictures of us without our knowledge, and within seconds capture details including, in one company’s literature, “Facial features e.g. glasses, beard, mood, age and gender.”[1]

"A crashed advertisement reveals the code of the facial recognition system used by a pizza shop in Oslo..." pic.twitter.com/4VJ64j0o1a
- Lee Gamble (@GambleLee), May 10, 2017

This ‘mood’ component already includes analysing the length and quality of our smiles. While EU law prohibits these systems from storing the images for more than a few seconds, they can still be used to target specific populations.

Capitalising on insecurity: Surveillance crosses the line

The potential for misuse is profound, and early signals are disturbing. On social media platforms, emotional targeting is already being deployed. Facebook has told advertisers it can identify when teenagers feel “worthless”, “insecure”, “defeated” or “anxious”, and can pinpoint specific insecurities such as “body confidence”. This potentially allows groups buying advertising or pushing messages to precisely target vulnerable young people.

Employers have demonstrated eagerness to gain insight into the inner lives of their staff. In 2016, these technologies were tested in an experiment tracking the emotions of financial traders.

“Imagine if all your traders were required to wear wristwatches that monitor their physiology, and you had a dashboard that tells you in real time who is freaking out.” - Andrew Lo

In the years since, these technologies have become more affordable, and thus attractive to other industries - including those whose staff may experience low pay and low job security, which can make it difficult for them to challenge the use of these tools. The start-up Humanyze tracks the tone (but not the content) of employees’ conversations through sensor-rich ID badges; who you speak to, for how long, and how casually or aggressively are monitored to map activity and networks and to optimise team performance. The AI service Vibe analyses workplace conversations on Slack for your “Happiness, irritation, disapproval, disappointment, and stress”, and sends notifications to managers if an employee’s morale drops. Attention and eye tracking are used in job training; TSA recruits are monitored with EEG and eye tracking to “provide unprecedented access to unobservable aspects of trainee performance.”[2] In the USA, biometric surveillance is often included in wellness programmes; while these are typically optional, they can be associated with significant bonuses and affect health insurance options.
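The workplace-chat monitoring described above follows a simple pattern: score each message for sentiment, aggregate per employee, and alert a manager when the aggregate crosses a threshold. The sketch below illustrates that pattern with a toy word list; it is not Vibe's actual method, and the lexicon, thresholds and example names are invented purely for illustration.

```python
# Toy illustration of the morale-monitoring pattern described above.
# Not any vendor's real method: the lexicon, thresholds and messages
# here are invented purely to show the shape of such a system.
from collections import defaultdict

POSITIVE = {"great", "thanks", "love", "awesome", "happy"}
NEGATIVE = {"frustrated", "tired", "annoyed", "stressed", "worried"}

def score_message(text: str) -> int:
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def morale_report(messages: list[tuple[str, str]], alert_below: int = -2) -> dict:
    """messages is a list of (author, text) pairs, e.g. from a chat export."""
    totals: dict[str, int] = defaultdict(int)
    for author, text in messages:
        totals[author] += score_message(text)
    return {
        author: ("ALERT: morale low" if total < alert_below else "ok")
        for author, total in totals.items()
    }

if __name__ == "__main__":
    chat = [
        ("alice", "Thanks, that demo was awesome"),
        ("bob", "I'm so tired and stressed about this deadline"),
        ("bob", "Honestly pretty frustrated and worried today"),
    ]
    print(morale_report(chat))  # bob would trigger an alert in this toy example
```

Commercial systems replace the word list with trained models, but the underlying logic - continuous scoring of private conversation, reported upward - is the same, which is what makes the practice so easy to adopt and so hard for monitored staff to contest.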

In 2018, the spread of these technologies looks likely to accelerate: our emotions will be monitored, and the content and adverts we see will be optimised for the ideal emotional response. Spreading awareness and having frank, open discussions about the ethical implications is critical if this technology is to be used well.

[1] Orbscreen
[2] https://www.eyetechds.com/dhs-baggage-screening.html

Illustration: Peter Grundy