The penetration of artificial intelligence (AI) into almost every aspect of our lives has been rapid and transformative, but also, all too often, covert, unregulated and irresponsible.

On the web, AI decides the adverts we see, the clothes we buy, and the tweets we read. But it also plays a growing role in the offline world — in recruitment, criminal justice, medical diagnostics, social care, credit ratings, stock market trading, law, accountancy, and many more fields. The list will only continue to grow.

Unfortunately, the list of AI’s negative consequences is also growing: it is exacerbating problems around misinformation, echo chambers, bias, sexism, racism, socioeconomic discrimination, human rights abuses, inequality, poverty and surveillance. We live in a world dominated by AI, but without the societal, legal or regulatory structures we need to protect us from its worst effects.

In 2019, I predict that people will start fighting back. We’ll demand to know when we’re talking to machines, and when we’re talking to humans. We’ll demand to be told when decisions that affect our lives are being made, or informed, by algorithms. And we’ll demand information on how those algorithms have reached their decisions.


Pervasive influence

Many of the interactions we have with AI are clear and explicit: when we ask Siri to set an alarm, for example, or let Gmail write our emails for us. But serious issues arise when we don’t know about the role of AI, either because we’re not told or because it’s actively hidden from us.

Chances are you’ve faced issues like this yourself: perhaps you’ve spoken to an online customer support agent and wondered whether you were talking to a human or a bot. Or perhaps you’ve applied for a dream job and wondered whether an algorithm sifted the candidates and rejected you before you even reached an interview.

You may well not have dwelt on these issues for long. While think tanks, researchers, policymakers, journalists, consumer groups and activists are already playing a frantic game of catch-up to scrutinise algorithms and understand how we can mitigate their risks, the general public hasn’t yet demanded much—blissfully (or perhaps wilfully) ignorant of AI’s role behind the scenes. We unquestioningly accept the decisions handed to us by our banks and insurers, the organisations we want to work for, and our public services.

This will change in the year to come, as public awareness of the all-pervasive role of AI grows and media coverage increases. Landmark cases of AI bias and discrimination will make it to the courts, and to the headlines. The number and importance of decisions made by AI will continue to grow, and people will increasingly demand more transparency and accountability.

Identification by default

We’ve already seen the beginnings of the fight for more responsible AI. Google, for example, faced a very public backlash after demonstrating its Duplex software with the frightening example of an unidentified robot booking a hair appointment over the phone; it rapidly announced that the system would identify itself as a bot by default. The US-based AI Now Institute is collating court cases involving algorithmic decision-making, and examining litigation strategies for challenging its use.

But these examples are few and far between. In part, this is because the onus of demanding transparency still rests with individuals, case by case. Even under the most forward-thinking data law in the world today—the EU’s General Data Protection Regulation (GDPR)—individuals have to actively exercise their rights to find out how organisations are using their data in automated decision-making, or to object to that decision-making. Despite having been in force for less than a year, the GDPR is already in many ways obsolete: in practice, it’s impossible for an individual to know where and when their data is being used, directly or indirectly, and many algorithms now make decisions beyond the realm of human understanding.

Furthermore, as Sandra Wachter and Brent Mittelstadt at the Oxford Internet Institute have argued, current data protection law does not aim to ensure the accuracy of decisions involving personal data, or to make these processes fully transparent, leading to calls for a (not-so-catchy) “right to reasonable inference”. Leading AI expert Kate Crawford has also argued that individuals should have more rights to see data, understand processes and challenge decisions made by algorithms.

Holding the powerful to account

In 2019, we’ll see concerted action so that the citizen’s “right to know” becomes the organisation’s “duty to inform”.

Whether as customers, service users, job applicants, employees, or even criminal suspects, we’ll demand to be told, by default, when AI is playing a role in the decisions that affect our lives, and what that role is. We’ll ask companies and public services to make clear who’s responsible and accountable for decisions outsourced to machines. And when biased, unfair or discriminatory decisions are made, we’ll demand that people are held to account.

But citizen action will only take us so far. Governments will also need to demand transparency, responsibility and accountability from the companies, public-sector bodies and civil-society organisations that use AI. Over in the US, California has already passed a law requiring bots to identify themselves online, although only in limited commercial and political circumstances. On this side of the pond, France’s Digital Republic law requires the government to publish the source code of its algorithms, while we can expect the European Commission to continue its world-leading fight for stronger data protection laws and more responsible technology.

The process of safeguarding our rights in this area will be long and hard. Companies will fight to keep their algorithms, data and knowledge to themselves. Critics will argue that unnecessary regulation slows down innovation and efficiency. Governments will argue that self-regulation is the way forward. Most worryingly, people may well just accept irresponsible, opaque and unethical AI as a part of life in the twenty-first century.

Ultimately, though, if we are serious about our digital rights, about tackling inequality and structural injustices and about holding organisations to account, this is a battle worth fighting, and one which will ensure that the digital revolution benefits everyone.

Matt Stokes is a Senior Researcher working on the collaborative economy and digital social innovation.