Town Hall 2020: should AI run your local council services?

Machine learning is used to identify patterns in data that might be invisible to humans. It has served many purposes in commercial contexts for some time: identifying unusual patterns of behaviour that indicate fraud or cyberattacks, recognising speech, predicting who might get sick or which online adverts people are likely to click on, determining credit scores and deciding whether to offer loans. As we collect ever more data and the tools become more accessible and powerful, the potential applications seem endless.

Now local governments are turning to machine learning in the hope of improving decisions and services: from the everyday routine of rubbish collection, to the provision of critical health and fire services, to sensitive, life-changing decisions in social care, policing and child protection.

But alongside opportunities for more effective and efficient services, using these new technologies in local services involves real risks and complicated practical and ethical questions, some of which we have begun to explore in our series on living well in the shadow of the smart machine. Communicating these sensitive issues to the public poses challenges, but it also offers an extraordinary opportunity to reflect on governance and new technologies and to explore methods of decision-making.

Prototyping pathways through complicated conversations

With Michael Veale from UCL STEaPP, we started exploring ways to stage experimental, accessible public discussions about machine learning in local government with Town Hall 2020: Should AI run your local council services? It was a first step in developing workshops which, we hoped, could facilitate nuanced debate between people with different areas of expertise and different opinions on the technology and its practical and political issues, and which could avoid being dominated by single issues. Our first prototype ran at Mozfest, the world's leading festival for the open internet movement. The festival was a perfect setting: participants arrived still reeling from the science fair, jostling up against new high-tech prototypes, deep discussions and fun workshops, surrounded by optimistic, creative ideas for improving the technologies and politics of the web.

We split the time between small group discussions and a full-group, semi-fictionalised town hall debate, so that small groups could develop questions and priorities before role-playing a debate about the future of MozfestTown. We learned a huge amount and were delighted to hear from participants that they left thinking differently and more deeply about the technologies and the options for the future development of services, and that they were excited to learn more. We were left with even more questions, optimism and a sense of urgency about developing spaces for these discussions.

The challenge

The debate requires creativity and ambition because the questions are complicated, answers will have to draw on many different fields of knowledge, and there is a risk of losing nuanced debate if single short-term issues or improbable ethical quandaries dominate.

The decisions made by computers are often assumed to be neutral and objective, but algorithms are designed by people, and the datasets they learn from were made by people; people decided what information needed to be collected, and what the most important questions were. This means systems can encode the biases, prejudices, and assumptions of the people who created and shaped them. Even if the people implementing these systems are keen to avoid them being unfair, it can be hard to consider all angles and to bridge the social and technical gaps with the tools and methods available.
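The point about encoded bias can be made concrete with a deliberately simplified sketch. The data, areas and approval rates below are entirely invented for illustration; the code shows how a "model" that merely learns from skewed historical decisions reproduces that skew, even though nothing in the code itself is explicitly prejudiced.

```python
# Toy illustration (hypothetical data): a system trained on biased
# historical decisions reproduces the bias without any explicit
# prejudice appearing anywhere in the code.
from collections import defaultdict

# Synthetic historical records: (neighbourhood, approved?).
# Past decision-makers approved applicants from area "A" far more
# often than from area "B", for reasons unrelated to merit.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 40 + [("B", False)] * 60)

# "Training": learn the approval rate per neighbourhood.
counts = defaultdict(lambda: [0, 0])  # area -> [approved, total]
for area, approved in history:
    counts[area][0] += int(approved)
    counts[area][1] += 1

def predict_approval(area):
    """Predicted probability of approval, learned from history."""
    approved, total = counts[area]
    return approved / total

print(predict_approval("A"))  # the historical skew survives: 0.8
print(predict_approval("B"))  # versus 0.4 for area B
```

Nothing here "decided" to disadvantage area B; the disparity flows entirely from the data the system was given to learn from, which is exactly why checking datasets for historical bias matters.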

Even with deep and varied expertise, and a recognition of the importance of checking for bias, there is still room for discussion. Some apparently technical questions carry heavy political weight. For instance, deciding how many data sources to draw on can effectively mean asking whether it is more important for a decision-making system to be accurate or understandable.

There is much to consider. Using more data sources might help correct for the historical biases in any individual dataset, making decisions more accurate and less likely to reflect problematic prejudices. But using more data sources also makes it harder for a human observer to understand how a decision is reached; it can become impossible to see how each data point influenced the outcome. More data sources also mean holding more information, which could be vulnerable to security attacks or (where personal data is involved) breaches of privacy.
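The transparency trade-off can also be sketched in code. All the field names, weights and thresholds below are invented for illustration, not drawn from any real council system; the contrast is between a one-source rule that explains itself and a multi-source score whose explanation is a tangle of interacting terms.

```python
# Hypothetical sketch of the transparency trade-off.

# One-source rule: trivially explainable.
def simple_decision(income):
    # "Approved because income exceeded the threshold."
    return income > 20_000

# Multi-source score: each extra source may improve accuracy, but the
# explanation becomes several interacting weighted contributions (and
# every extra source is more personal data to hold and protect).
weights = {"income": 0.4, "rent_arrears": -0.3,
           "school_attendance": 0.2, "gp_visits": -0.1}

def multi_source_decision(record):
    contributions = {k: weights[k] * record[k] for k in weights}
    return sum(contributions.values()) > 0, contributions

approved, why = multi_source_decision(
    {"income": 1.2, "rent_arrears": 0.5,
     "school_attendance": 0.9, "gp_visits": 2.0})
# "why" now holds four interacting terms (0.48 - 0.15 + 0.18 - 0.2),
# and a real system might combine dozens of sources, not four.
```

Even in this toy version, explaining the second decision to the person affected means unpicking every weighted term; with dozens of sources, the human-readable explanation effectively disappears.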

When these decisions involve the provision of healthcare, child protection, education, and other fields where lives might be at stake, these questions need to be unpacked and seriously considered.

The future

Machine learning is a flexible technology that can be used for many purposes, and could adapt to hugely different ideas of how governments should make decisions, relate to citizens and deliver services. But the public's understanding of and trust in the technology has been undermined by the confusing approaches of commercial companies, which collect and mine consumers' data for patterns without consumers fully understanding how it is used.

There is an enormous opportunity to use machine learning as part of developing people-powered, knowledge-powered services, where experimentation, learning, participation and deeply informed consent are built into public services, as described in our optimistic, plausible vision of the health service in NHS 2030. But this requires building understanding and trust.

Government decisions are rightly held to high standards of accountability and justice, because their successes and failures can have an enormous impact on communities and individual lives. As decision-making methods change and government responds to opportunities for improvement, communities deserve to be engaged in that process. We look forward to exploring this space further and would be delighted to hear from anyone interested in getting involved.

Many thanks to those who participated, to our facilitators Chris Adams, Conrad Moriarty-Cole and Katja Bego, and to Ian Forrester, Space Wrangler for 'Dilemmas in Connected Spaces', the excellently named space that hosted us so generously.

Image: Ian Forrester, CC BY-NC licence.


Lydia Nicholas

Senior Researcher, Explorations

Lydia was a senior researcher in Nesta’s Explorations team, focusing on how minds, systems and technologies come together to perform better at complex challenges with particular focus …