
In May 2023, researchers from Singapore and Hong Kong announced that they could use brain imaging and AI to effectively read people’s minds. While the research had yet to be peer-reviewed, it contributed to our growing understanding of how to probe people’s innermost thoughts, providing opportunities to improve lives but also raising questions about our right to think freely.

Still from the video ‘Neuroprivacy: safeguarding our cognitive liberty’

Scientists from the National University of Singapore and the Chinese University of Hong Kong have been able to reconstruct the basic content of videos from viewers’ brain activity, reporting a 45% improvement in image quality over previous attempts.

Their system, Mind-Video, combines functional magnetic resonance imaging (fMRI), which measures brain activity, with artificial intelligence (AI), which relates those measurements to what people are seeing, to recreate the overall story and images of viewed videos with greater fidelity than ever before.
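For readers curious how such a pipeline fits together, the sketch below is a deliberately simplified illustration of the general two-stage idea, not the Mind-Video implementation: an encoder maps fMRI activity into an embedding aligned with what the viewer saw, and a generative decoder turns that embedding into image frames. All module names, layer sizes and voxel counts here are hypothetical, and the untrained model simply outputs noise.

    # Conceptual sketch only -- NOT the Mind-Video implementation.
    # Stage 1: map fMRI voxel activity to a semantic embedding.
    # Stage 2: decode that embedding into a video frame.
    import torch
    import torch.nn as nn

    class FMRIEncoder(nn.Module):
        """Maps one flattened fMRI scan to an embedding vector."""
        def __init__(self, n_voxels: int = 4000, embed_dim: int = 512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_voxels, 1024),
                nn.GELU(),
                nn.Linear(1024, embed_dim),
            )

        def forward(self, voxels: torch.Tensor) -> torch.Tensor:
            return self.net(voxels)

    class FrameDecoder(nn.Module):
        """Stand-in for a generative model that renders frames from embeddings."""
        def __init__(self, embed_dim: int = 512, frame_pixels: int = 64 * 64 * 3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(embed_dim, 2048),
                nn.GELU(),
                nn.Linear(2048, frame_pixels),
                nn.Sigmoid(),  # pixel values in [0, 1]
            )

        def forward(self, embedding: torch.Tensor) -> torch.Tensor:
            return self.net(embedding).view(-1, 3, 64, 64)

    # One simulated scan in, one (noise) frame out; in practice the encoder is
    # trained so its embeddings align with embeddings of the video being watched.
    scan = torch.randn(1, 4000)
    frame = FrameDecoder()(FMRIEncoder()(scan))
    print(frame.shape)  # torch.Size([1, 3, 64, 64])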

The research adds to our rapidly growing ability to read people’s minds using AI and brain imaging. Earlier in the year, scientists at Japan’s Osaka University demonstrated a similar feat using static images rather than video.

Imaging machines are currently bulky and expensive, but they avoid the need for more invasive probes that might be less appealing to patients and consumers.

The market for mind reading

While we are unlikely to see these technologies on the high street anytime soon, there are already consumer-grade devices on the market that use different technologies to similar effect. Although generally less precise than their laboratory counterparts, they can give some sense of people’s overall mood. One headband already on sale uses the brain’s electrical activity to detect fatigue in drivers. And, in a different way, we have become accustomed to the fact that the algorithms deployed on today’s digital devices, for example to recommend shows on Netflix, can predict our behaviour, giving companies and governments some sense of what is going on in our heads.

Technologies that can understand our thoughts offer opportunities to improve welfare. People with spinal injuries might be helped to walk by boosting signals from their brains, and those who have difficulty with speech can be given a new voice. Beyond medicine, these technologies can help workers improve their concentration or help gamers control characters on screen.

Sleepwalking into surveillance

Yet there are also applications that might raise serious concerns. A primary school in China had to halt the trial of a head-mounted device that tracked students’ attention spans, following parental concern about surveillance. Police forces in India, Singapore and Florida have already used or purchased ‘brain fingerprinting’ technologies, some of which have questionable scientific merit and might infringe on the right to remain silent. And, in what might be seen as a stunt, in 2021 the brewing company Coors teamed up with dream researchers to use technology to steer willing sleepers toward thoughts of beer.

Faced with rapid technological progress, experts such as human rights lawyer Susie Alegre and academic Nita Farahany are sounding alarm bells about our right to free thought. The 1948 UN Universal Declaration of Human Rights already enshrines freedom of thought as an absolute right. But it was only in 2021 that the UN gave this right substantial consideration and started to put flesh on the bones of the idea by describing its possible attributes.

Trying to get inside people’s heads isn’t new. Ask politicians, advertisers or even the Spanish Inquisition. But new technologies are increasing our capabilities in this area, and once we lose control it might be hard to get it back. We could easily sleepwalk into a future where mind reading becomes as common as tracking children on their way to school through their phones. In 2024, governments might therefore take bigger steps toward protecting us from the risks of these technologies so that we can realise the benefits more safely.