AI’s future… shocking or shaping?

Could participatory approaches using augmented reality, online gaming and wacky humour be the new frontline in helping people reimagine the future of artificial intelligence?

Half a century ago, Alvin and Heidi Toffler coined the term future shock: that dizzy, destabilising feeling that too much is changing too quickly. Lost, frozen, powerless, you just can’t keep up.

And that was in an era when you had to feed punchcards into a computer the size of your house just to do simple maths! Fast forward to today’s rapidly shifting world — aren’t many of us in shock now?

One of the big areas changing by the day (millisecond?) is artificial intelligence (AI). We can barely keep track of how quickly it’s developing, yet on no issue do we need society more engaged. As AI transforms how we live, love, work and communicate, every society urgently needs ways to engage with what’s happening and shape a future we actually want.

But how can that happen in practice? The news is making us scared, angry, uncreative and shallow, and how many people have the time to read a 100-page report?

We need different approaches that meet modernity with modernity, learning from what we now know about psychology. We must recognise that people learn better by experiencing something themselves and getting the chance to reflect on it. We need to get serious about not being serious — grasping that humour can be extraordinarily powerful. We need to get out of the staid rationality of the policy world and engage with emotions.

We need to put people in the driving seat themselves. Not talk ‘at’ people but involve them in a participatory way in raising awareness, experiencing AI for themselves and deciding for themselves how they want it to be shaped in the years ahead.

Our three new projects aim to do just that, as part of a cohort supported by Mozilla to find creative ways to engage people in shaping a future for AI that’s fair and human.

Your new virtual assistant will see you now

Screenshots from A Week With Wanda

Meet Wanda, a spoof virtual assistant in the mould of Siri or Alexa, who is here to “make your life better”. The web-based game A Week With Wanda is a humorous exploration of the dark sides of artificial intelligence unleashed on your life.

Each day Wanda offers to do something new for you, to improve your health, wealth or relationships. But Wanda’s efforts quickly become dodgy… or downright deranged!

Through quirky online chats, you might learn that Wanda has signed you up for therapy; sold your location data; reported your ethnic minority friends to the police; stalked a potential “new best friend”; and even deepfaked you into a pornographic video… (all are just simulations). The hundreds of different combinations of experiences are drawn from real developments in artificial intelligence today — as is revealed to players of the game.

You can argue with Wanda about what she’s doing, and at the end of your week you can join other users in sharing your views about the future of AI you want: How can it be a force for good? What values should AI be based on? What limits should it have? These might sound like big questions, but ordinary people everywhere have powerful answers.

The project’s creator is Joe Derry Hall, a communications creative living in London whose work revolves around reimagining the future. He was first inspired to initiate the project after hearing about racially biased AI in criminal sentencing. Joe is now compiling people’s responses to share with tech companies and policy makers in the UK and beyond.

Feelings thief

Screenshot from Stealing Ur Feelings

Ever taken a selfie? Did you know it could be used by companies like Snapchat to track your emotions and trade the insights without asking you?

Stealing Ur Feelings from New York-based Noah Levenson is an innovative interactive documentary that reveals how your favourite apps can secretly use facial emotion recognition technology to make decisions about your life, promote inequalities, and even destabilise democratic institutions.

Shown at film festivals internationally and on the web, the experience combines augmented reality, filmed content and interactive game mechanics to explain the science of facial feature tracking. It demystifies the algorithms that determine if you're happy or sad, and reveals tech corporations' patented plans to make billions by analysing your reactions at public events.

Noah, a programmer and former TV executive, creates work focusing on the intersection of entertainment and technology, and on how they are distributed and monetised, often against our own best interests. Set against the backdrop of Cambridge Analytica and the digital privacy scandals rocking today's news, Stealing Ur Feelings is intended to be a fast, darkly funny, dizzying unveiling of the "fun secret feature" lurking behind all of our selfies... one that has the power to bias social systems, influence elections, and alter our world in dangerously unpredictable ways.

It is debuting online alongside a Mozilla petition to Snapchat. At the end of the film, viewers who would like to sign are asked to smile at the camera; the petition demands that Snapchat publicly disclose whether it is already using facial emotion recognition technology in its app. Once the camera detects a smile, the viewer is taken to the petition page, which they can read and sign.
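The project’s own browser code isn’t reproduced here, but as a rough sketch of the “smile to sign” mechanic, assuming Python with the opencv-python package, a webcam, and a placeholder petition URL, the idea looks something like this:

```python
# Hypothetical sketch only: detect a smile from the webcam, then open a
# (placeholder) petition page. Not the project's actual implementation.
import cv2
import webbrowser

PETITION_URL = "https://example.org/petition"  # placeholder URL

# Haar cascade models shipped with opencv-python
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)  # default webcam
smiled = False
while not smiled:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Find faces first, then look for a smile inside each face region.
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        if len(smile_cascade.detectMultiScale(roi, 1.7, 20)) > 0:
            smiled = True
            break
cap.release()

if smiled:
    webbrowser.open(PETITION_URL)  # the "smile to sign" moment
```

The point of the sketch is just the trigger: an emotion classifier reduced to a yes/no signal that quietly kicks off an action on your behalf.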

H is for human, R is for… racist?

Screenshots from Survival of the Best Fit

You’re the CEO of a rapidly-growing tech start-up and you just secured $2 million to expand your company. Congratulations!

To hire staff more quickly, you decide to automate the hiring process and use a machine learning algorithm to review CVs and identify the best candidates. But you quickly learn that replacing HR with AI doesn’t always mean better results — in fact, it can worsen existing bias against underrepresented applicants.

This is the premise behind Survival of the Best Fit, an interactive online game that explains how machine learning software can be unfairly prejudiced when used carelessly to automate hiring decisions. It’s the brainchild of an international group of software engineers, designers and technologists who met on a ‘Politics of Code’ course at NYU Abu Dhabi, which prompted them to dig further into the ethics of AI.

In the game, players take responsibility for training and deploying their own hiring algorithm. Along the way, they learn how data sources that aren’t diverse, a lack of human supervision, and the impenetrable ‘black box’ of machine learning can lead to gender, racial and other discrimination.
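The game makes this point visually, but the underlying failure mode is easy to show in a few lines. Here is a minimal, hypothetical sketch in Python (invented data, not the game’s code), assuming numpy and scikit-learn: a model trained on historically biased hiring decisions reproduces that bias for equally skilled candidates.

```python
# Hypothetical illustration of bias inherited from training data; all numbers
# and features are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

skill = rng.normal(0, 1, n)          # the signal we actually care about
group = rng.integers(0, 2, n)        # 0 = majority group, 1 = minority group

# Historical decisions made by biased humans: the minority group was penalised
# regardless of skill. This is the "non-diverse / biased data source".
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, different groups: the model faithfully
# learns the historical prejudice and scores the minority candidate lower.
same_skill = np.array([[0.5, 0.0], [0.5, 1.0]])
print(model.predict_proba(same_skill)[:, 1])
```

Dropping the group column doesn’t automatically fix this: proxy features that correlate with it (a postcode, a university, a gap in a CV) can leak the same signal, which is why the game stresses diverse data and human oversight.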

This is a problem that has already hit Amazon, one of the biggest tech companies in the world — so it could happen anywhere. As part of a participatory approach to combating this, the team behind the game are planning to work with universities to skill up students who will be the next generation of tech creators and policy makers, to get them thinking early about how to avoid all kinds of bias in AI.

Open sesame

The developments in artificial intelligence shaping our world are so often happening behind closed doors, in hidden lines of code.

Through creative approaches like ours, and many others, from maps to monsters to making naughty drawings disappear, we hope we can open people’s eyes to the realities, and open people’s minds to the AI future they want.

Authors

Joe Derry Hall

Joe is a communications creative whose work revolves around reimagining the future.

Noah Levenson

Noah is a Rockefeller Foundation Bellagio Resident Fellow on artificial intelligence, a Mozilla A.I. awardee, and a former MTV executive.

Gabor Csapo

Gabor is a data science enthusiast and aspiring hardware hacker at Google Taiwan.

Miha Klasinc

Miha is a creative developer at North Kingdom Design & Communications in Sweden.

Jihyun Kim

Jihyun is a product engineer at Narus, a legal-tech startup in Singapore.

Alia ElKattan

Alia is an NYU Abu Dhabi student of computer science and politics from Cairo, Egypt.