Algorithms are everywhere, or so we are told, and the black boxes of algorithmic decision-making make oversight of processes that ought to be transparent more difficult than in the past. But which machines in which circumstances do we wish to make accountable, and why? Where should our concerns and efforts be focused?
Alison Powell argues that the effects of algorithms on communities and participation in civic life need attention, and that by using algorithms to keep algorithms accountable we might move towards a government which does not simply target consumer citizens, but supports decision-making for broad collective benefit.
The algorithms that have received the most attention from scholars so far are those developed by media platforms, such as Facebook, whose main products are social networks and the attention of their individual users.
These algorithms construct portraits of individual users by collecting and analysing data on their patterns of behaviour on the platform. These portraits can then be used to target products and services with great precision. There are real risks for individuals: price discrimination, and discrimination and bias built into the algorithmic systems through which we communicate and consume. But these are, in my view, less inherently problematic than the algorithmic processes that impact on our collective participation in society and our sense of belonging as citizens.
Algorithmic processes – especially machine learning – combine with processes of governance that focus on individual identity performance to profoundly transform how citizenship is understood and undertaken. This second sphere needs more scholarly and public attention.
Algorithms drive many of the successful business models of web content and media platforms. As users’ everyday experience of communication has moved online, their behaviours and interests have been exposed to data collection processes, which produce new resources – new information about consumption, including detailed personal profiles.
This process of personal profiling is at the heart of many of the concerns about algorithmic accountability. The perpetual production of data about individuals has certainly revolutionised advertising by allowing more precise targeting, but what has it done for areas of public interest?
John Cheney-Lippold identifies how categories of identity are now developed algorithmically. To an algorithm working online, gender classifications are not based on genitalia, but on patterns of behaviour that fit the norms it has been trained to see. Cheney-Lippold finds that these ‘algorithmic identities’ are much narrower than the range of identities that people perform across their lives. They leave no room for experimentation, remixing, or blurring of lines.
This is because many of the systems that inspired the design of algorithmic systems are based on profiles created by advertisers to optimise consumption; they prioritise behaviours related to consumption, and so capture only a partial picture. This becomes more concerning when these narrower identities, built with systems developed for consumption, spread beyond advertising. Algorithmic identity construction has spread from the world of marketing to the broader world of citizenship – as evidenced by the Citizen Ex experiment shown at the Web We Want Festival in 2015.
What’s really at stake is that the expansion of algorithmic assessment of commercially derived big data has extended the frame of the individual consumer into all kinds of other areas of experience. In a supposed ‘age of austerity’, when governments believe it is important to cut costs, this connects with the view of citizens as primarily consumers of services, and furthermore with the idea that a citizen is an individual subject whose relation to the state can be disintermediated given enough technology. For instance, with sensors on your garbage bins you need not even remember to take them out, and the government knows if you overfill the bin or fail to recycle. With pothole-reporting platforms like FixMyStreet, a city government can be responsive to an aggregate of individual reports. But what aspects of our citizenship are collective? When, in the algorithmic state, can we expect to be together?
Put another way, is there any algorithmic process to value the long term education, inclusion, and sustenance of a whole community such as the library service?
Neoliberal consumer citizenship, propped up by algorithmic systems of sorting and service delivery, further alienates us from the shared virtues of citizenship and from the construction and nourishment of shared experience.
We need to expand our attention from individual risks and consider the accountability of the whole system, its entire undertaking, and its effects on the community. Some of the proposals discussed at our workshop included having machine learning processes verify the outcomes of algorithmic decisions and provide transparency. To me this looked like an especially accountable version of bureaucracy, in which results from each system dynamically report up through an iterative (but still accountable) chain of command. This is not bureaucratic in the sense of inventing process for its own sake, but it is bureaucratic in the sense that it establishes many processes of accountability, each the responsibility of entities who report to one another through a structure in which trust rests on the capacity to validate decisions.
This is not 19th century bureaucracy, but a cybernetic relative that is still emerging. Although cybernetic systems have always tried to redistribute responsibility and operate in real time, their expansion has also created some of the accountability problems we are now trying to solve. In many ways, bureaucracies are useful. They contain processes of decision-making and structured divisions of function. Much like algorithms. They are criticised for becoming voluminous, disappearing under process, and becoming opaque. Much like algorithms. Bureaucracies can, in some cases, efficiently undertake processes that have undesirable or devastating outcomes (including war and discrimination). Much like algorithms.
I am not sure that we should be so afraid of considering the bureaucratic aspects of using machine learning technologies for accountability, not when we have over a hundred years of social science knowledge about the problems to avoid. I also wonder whether such bureaucracies might help to solve the other problem I evoked earlier – whether thinking specifically about how human and computational decisions interact might move us away from a citizenry of alienated individuals and towards a reconception of the shared virtues of belonging together to a place.
Seeing algorithms – machine learning in particular – as supporting decision-making for broad collective benefit rather than as part of ever more specific individual targeting and segmentation might make them more accountable. But more importantly, this would help algorithms support society – not just individual consumers.