Disinformation is already a major force in politics, and it now looks certain to be an enduring one: research shows that it is extremely difficult to identify, to debunk, and to persuade people of its falsehood.

Governments’ efforts to tackle fake news often prove counterproductive, or come too late to be effective. But there are glimmers of hope. One of these is ‘pre-bunking’, which aims to prevent disinformation from taking hold by warning audiences about likely false narratives - debunking them before they are encountered organically.

This naturally raises the question of how we might predict which conspiracy narratives will take hold in future. On this point, the potential capabilities of AI hold out some promise.

AI models are already creating new works of art based on existing creations and convincingly generating original texts. This cutting-edge tech could be deployed to simulate the paranoid corners of Reddit and Twitter. If it works, it would put us ahead of the curve in our pre-bunking efforts, anticipating likely future conspiracy theories before they take shape.
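
To make this concrete, here is a minimal sketch of how such a simulation might look in practice. It assumes access to a large language model via the OpenAI Python client; the model name, the prompts, and the framing of role-playing a fringe forum are all illustrative assumptions for this example, not a description of any system Nesta has built.

```python
# A minimal sketch: prompting a large language model to surface candidate
# conspiracy narratives around a given topic, mimicking the style of fringe
# forum threads. Illustrative only - model choice, prompts and framing are
# assumptions, not a tested forecasting system.
from openai import OpenAI  # assumes an OpenAI API key is configured

client = OpenAI()

def simulate_conspiracy_narratives(topic: str, n: int = 5) -> str:
    """Ask the model to draft the conspiracy narratives most likely to
    emerge around `topic`, so researchers can prepare pre-bunking material."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice of model
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a media-monitoring research tool. Given a topic, "
                    "list plausible conspiracy narratives that fringe online "
                    "communities might generate about it, so that researchers "
                    "can prepare pre-bunking material. Do not endorse them."
                ),
            },
            {
                "role": "user",
                "content": f"Topic: {topic}\nList {n} candidate narratives, "
                           "each with the grain of truth it might distort.",
            },
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(simulate_conspiracy_narratives("a new digital ID scheme"))
```

The design choice here is to treat the model as a red-team generator rather than a classifier: instead of asking whether a given claim is false, it drafts the narratives most likely to appear, which is what pre-bunking needs as its raw material.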

Of course, we may find that even after modelling millions of conspiratorial rabbit holes, predicting which narratives will resonate remains challenging. Pre-bunking itself appears far less effective where conspiracies chime with already-held political beliefs, and field experiments demonstrate that conspiracies will always carry great weight for a small population of highly susceptible users.

This suggests that, in addition to predicting how false narratives will take hold, we need approaches to policy development that price in disinformation at the design stage. I propose a radical step: that governments “conspiracy test” proposals before launching them.

By exposing policy to the same conspiracy-generating AI, a ‘conspiracy impact assessment’ might help identify which elements of a policy, or of its communication, are likely to ignite false narratives. The test should not serve as a veto, but as a way of spotting the details that bad actors are likely to hang conspiracies on.
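
Such an assessment could reuse the same machinery as above, pointed at a draft policy rather than a topic. The sketch below is again only illustrative: the prompt wording and the idea of returning a per-detail review are invented for this example, assuming the same OpenAI client.

```python
# A sketch of a 'conspiracy impact assessment': expose a draft policy to a
# narrative-generating model and ask which details are most likely to attract
# false narratives. Prompts and output format are illustrative assumptions.
from openai import OpenAI  # assumes an OpenAI API key is configured

client = OpenAI()

def conspiracy_impact_assessment(policy_summary: str) -> str:
    """Return a review of the policy's conspiracy 'attack surface'."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice of model
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a policy red-team assistant. Review the draft "
                    "policy below and identify which specific elements - "
                    "names, numbers, technologies, exemptions, phrasing - "
                    "bad actors are most likely to hang conspiracies on. "
                    "For each, suggest how communication could defuse it."
                ),
            },
            {"role": "user", "content": policy_summary},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = "From 2026, smart meters will be mandatory in all rented homes."
    print(conspiracy_impact_assessment(draft))
```

The output would feed the design stage rather than gate it: flagged details could be reworded, better explained, or pre-bunked in the launch communications, consistent with the test being advisory rather than a veto.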

Like engineers testing a skyscraper in a wind tunnel, honing it for the real-world forces it will encounter, governments should start using AI to test their policies for the gusts and torsion of dangerous disinformation.

This article was originally published as part of Minister for the Future in partnership with Prospect. Illustrations by Ian Morris. You can read the original feature on the Prospect website.