A day in the life of the Department for Democratic Artificial Intelligence

An excerpt from Harry Farmer’s contribution to our Radical Visions of Future Government collection.

Harry Farmer’s contribution presents a ‘day in the life’ of 2030’s Department for Democratic Artificial Intelligence, imagining steps taken by the government to regulate and influence the use of AI. The piece warns of the difficulty of creating a code of AI ethics: one that quickly breaks down as it is applied to the specific situations of everyday life. In so doing, the piece offers the government of today the chance to anticipate where nuance and care will be needed.

2:30 pm – Offices of the Secretary of State for Democratic Artificial Intelligence and Automated Systems

“Tariq, get in here now!”

“Yes, Minister?”

Tom had been Secretary of State for Democratic AI for almost six months, but still hadn’t quite grown used to its permanent secretary’s ability to appear silently within seconds of being called.

“Have you seen the memo Freya just sent over – the one going over key changes to our regulatory principles since last year?”

“Of course, Minister.”

“Well, how the hell am I supposed to justify some of these to the PM? I’ve got an interview with Theo Ashby from YouTube in four hours – what am I meant to say to him?”

“What specifically was bothering you, Minister?” Tariq asked. It was Tariq’s job to understand the Minister’s brief so he didn’t have to, but half a year in, Tom’s seemingly wanton ignorance of the nuances and paradoxes of the Department was becoming wearing.

“What’s bothering me is that almost half of our new policies are totally inconsistent with our existing ones. The main promise we made to industry when we set this place up was that even though the regulation we imposed would be onerous, we’d provide certainty. We said AI businesses would know where they stood.”

“Until this morning, our position on AI paternalism – so carebots, personal avatar assistants, semi-autonomous exoskeletons, God knows how much else – was that a system can go against the stated wishes of its user if it’s necessary to prevent clear and immediate physical harm to that person, or harm to others that would follow as a result of the AI’s action – but not its inaction… I’m paraphrasing, obviously.”

“Yes, Minister.”

“So look at what she’s just sent me.” Tom gestured the text on his tablet up onto the wall and circled a paragraph. “She says this year’s citizen councils have almost completely reversed this position. Assistive AIs basically can’t intervene now – practically the only exception is that they can’t help you to commit suicide.

“If I were the CEO of one of these companies, I wouldn’t know where I stood. Hell, I’m the Minister of the department that makes the rules and I couldn’t tell ‘em where they stand. How do I justify this? We can’t have our regulatory position change every bloody year.”

“Well, Minister,” Tariq began, carefully. “It’s a different set of citizen councils to last year. They can’t be expected to come to the same conclusions.”

“I still don’t get why they can’t use the same bunch of people every year,” Tom replied. “A bit of change I understand, but a one-eighty pivot on such an important principle? The rules are changing almost every year. It’s just not acceptable.”

“There’s every chance things will settle down, Minister. If we know anything, it’s that people aren’t sure how they want AI to behave; these questions really are difficult. Right now, members have got very little to go on – we’ve only been regulating AI morals for four years, and for the first two, nobody really knew about it.

“Future members will go in knowing full well what previous councils have decided. Given the huge levels of responsibility placed on them, by far the easiest thing for them to do will be to agree with what’s come before. That way, if they get it wrong, they won’t be the only ones. If five previous councils have decided on a set of principles, you’ve got to be damn sure of yourself to suggest something different. Give it a couple of years and you’ll get your stable regulatory environment.”

Tom pondered this for a moment. It would have been more comforting if he’d had any intention of being at DDAI for anything close to two years.

“That’s all very well, but how is that meant to help me now? I can’t say that tonight.”

“I’ve prepared you some talking points that should buy you some time before this kicks in. They should be in your Red Box.”

Tom glanced down at his tablet, opening up the Red Box folder. “Okay, I’ll read this now. Thank you, Tariq.”

“Of course, Minister.”

Tom looked up to smile at his Permanent Secretary, but he was already gone, the door closed silently behind him.

Explore a selection of the other contributions as part of our Visions of Government 2030 feature.


Harry Farmer

Senior Policy Adviser, Inclusive Innovation

Harry worked to develop and advocate for policies to make the UK’s innovation economy fairer, more inclusive and more conducive to the public good.
