20 questions for public sector use of algorithmic decision making

In a previous blog, I invited feedback on ten principles for a Code of Standards for public sector use of AI in algorithmic decision making.

Getting the right approach to AI in a broader sense appears to be of rising importance for many individuals and organisations, most recently with Google’s CEO Sundar Pichai publishing a set of principles for the company. I’ve chosen to focus on algorithmic decision making specifically because I believe it’s one of the applications of AI likely to offer the greatest potential benefits, and pose the greatest risks, for public sector organisations.

In this article, I want to recap why I think some form of standards or guidance is needed; explore the feedback I received; and then outline how and why I've converted them into 20 Questions that public sector organisations should be able to answer before using algorithmic decision making in a live environment.

The need for standards

Public sector officials have been making decisions and taking actions based on simple algorithms - following written rules or the outputs from basic software - for years.

Why the need for any new standards?

To my mind (and many others’), the added sophistication and complexity that comes with using Artificial Intelligence to create and train those algorithms changes the game.

Three broad categories of concern arise:

First, there are concerns about the inevitable limitations and biases of algorithms:

  1. Virtually all algorithms contain some limitations and biases, based on the limitations and biases of the data on which they are trained.
  2. The assumptions on which an algorithm is based may be broadly correct, but in areas of any complexity (and which public sector contexts aren’t complex?) they will at best be incomplete.

Second, there are concerns about the way algorithms might be used:

  1. Algorithms can be (and have been) used in inappropriate contexts, such as companies using job applicants’ credit scores to determine whether to hire them.
  2. Algorithms may be deployed without any human oversight, leading to actions that could cause harm and for which no one is accountable.

Third, there are concerns about algorithms’ opacity:

  1. The code of algorithms may be unviewable in systems that are proprietary or outsourced.
  2. Even if viewable, the code may be essentially uncheckable where it is highly complex; where it continuously changes in response to live data; or where the use of neural networks means there is no single ‘point of decision making’ to view.

If these concerns are even partly correct, I’d suggest some basic conclusions can be drawn about the use of algorithms in informing decisions or taking actions in a public sector context:

  1. There are relatively few instances where algorithms should be deployed without any human oversight or ability to intervene before the action resulting from the algorithm is initiated. The issues the public sector deals with tend to be messy and complicated, requiring ethical judgements as well as quantitative assessments. Those decisions in turn can have significant impacts on individuals’ lives. We should therefore primarily be aiming for intelligent use of algorithm-informed decision making by humans.
  2. If we are to have a ‘human in the loop’, it’s not OK for the public sector to become littered with algorithmic black boxes whose operations are essentially unknowable to those expected to use them. (A minimal sketch of what such a human-in-the-loop arrangement could look like in code follows this list.)
  3. As with all ‘smart’ new technologies, we need to ensure that algorithmic decision making tools are not deployed within dumb processes, and that their use does not create any expectation of diminished professionalism on the part of those using them.
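
To make the ‘human in the loop’ idea concrete, here is a minimal, purely illustrative sketch in Python. The names, threshold and structure are my own assumptions rather than a reference implementation: the point is simply that the algorithm produces a recommendation with enough context for a person to judge it, and no action is taken until that person approves.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Recommendation:
    """What the algorithm suggests, plus the context a human reviewer needs to judge it."""
    case_id: str
    suggested_action: str
    confidence: float          # the model's own confidence score, between 0 and 1
    key_factors: List[str]     # the inputs that most influenced the suggestion

def decide(rec: Recommendation, reviewer_approves: Callable[[Recommendation], bool]) -> str:
    """Act on the algorithm's suggestion only after a human has reviewed it.

    `reviewer_approves` stands in for the human step; in a real system this would be
    a professional's recorded judgement, not an automatic pass-through.
    """
    if reviewer_approves(rec):
        return f"{rec.case_id}: action initiated - {rec.suggested_action}"
    return f"{rec.case_id}: recommendation overridden by reviewer"

# Example: the reviewer sees a borderline confidence score and declines to act.
rec = Recommendation(case_id="case-001",
                     suggested_action="schedule an inspection",
                     confidence=0.62,
                     key_factors=["previous complaints", "property age"])
print(decide(rec, reviewer_approves=lambda r: r.confidence >= 0.9))
```

In practice, the reviewer’s decision and their reasons would also need to be recorded, to provide the point of accountability discussed above.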

Why it's worth it

Given all these concerns, it’s worth pausing to ask why the use of AI for algorithm-informed decision making is desirable, and hence worth our collective effort to think through and get right. I think there are at least three good reasons. If used thoughtfully and with care, algorithms could:

  1. Codify best practice and roll it out consistently at scale. In Ray Dalio’s fascinating book, Principles, he describes how the firm he founded, Bridgewater Associates, takes the very best of its processes and converts them into algorithms so they can be applied consistently again and again. The algorithms can be fine-tuned over time to produce ever-improving results.
  2. Remove the worst cases of human bias. Though there is currently much debate about the biases and limitations of artificial intelligence, there are well known biases and limitations in human reasoning, too. The entire field of behavioural science exists precisely because humans are not perfectly rational creatures but have predictable biases in their thinking. Algorithms could help remove or reduce the impact of these flaws.
  3. Enable evidence-based decision making in the field. We often talk about the need for evidence-based policymaking. However, while it’s all well and good for politicians and policymakers to use evidence at a macro level when designing a policy, that policy will fail to achieve much if poor decisions are taken at the front line. The real effectiveness of each public sector organisation is the sum total of the thousands of small decisions made by its staff each and every day. The promise of AI is that we could have evidence-based decision making in the field: helping frontline workers make more informed decisions in the moments when it matters most, based on an intelligent analysis of what is known to work.

Your feedback

And so to your feedback. I can’t hope to cover all the points that were made in just one blog, but below I discuss some of those that were made most often and which stood out to me.

What was striking about the comments was their polarised nature. Some claimed the principles did not go far enough; others claimed the code would stop any public sector organisation from trying anything with AI.

Some of those offering the latter view said the code was like the requirement to wave a red flag in front of the first automobiles - shorthand for an unnecessary and overly restrictive regulation.

Let’s consider this comparison for a moment.

In the literal case of the person waving a flag in front of a car, I’d argue that rather than being an example of excessive regulation, it supports the rationale for codes for new technologies.

When cars were first invented, people were understandably uncertain about their safety, and so measures were put in place to counter the perceived risks until drivers gained confidence, and pedestrians felt that the risk presented was acceptable.

The flag waving was never going to be a permanent feature. It has been replaced by hundreds of rules and regulations that ensure safe driving conditions; proper vehicle functioning; and much else besides. The point is that codes may initially take a cautious approach, but are necessary until we build trust and confidence in the new technology.

I feel much the same applies to public sector use of AI in algorithmic decision making. I fully expect our attitudes and codes of practice to evolve over time. But if we make a major blunder early on, we risk undermining confidence and setting back its beneficial uses by years.

Given the public sector’s track record of misjudging public opinion on previous data and technology initiatives (think of ID cards, care.data and the like), I don’t think this concern is unwarranted.

Others wondered why we should be expecting higher standards of algorithms than we do of human processes and existing forms of decision making.

Tom Forth, for example, pointed out that some of the principles would be equally reasonable requests of ‘non-algorithmic decision making’. He rephrased principles 1 and 2 to illustrate his point:

  1. Every policy decision taken by a public sector organisation should be accompanied with a description of its objectives and intended impact.
  2. Public sector organisations should publish details describing the assumptions used in making every policy decision, together with a risk assessment for mitigating potential biases.

It’s a good and fair challenge.

Certainly we might want to look at setting higher standards than currently exist for more traditional forms of decision making - and we could reasonably ask many of the same questions that we suggest posing of algorithms.

However, I think a case can be made that algorithm-informed decision making is relevantly different, and hence requires a new approach, albeit one that needs to evolve over time.

There are four main reasons for this.

First (to repeat a point made above), with new technologies we may need to set a higher bar initially in order to build confidence and test the real risks and benefits before we adopt a more relaxed approach. Put simply, we need time to see in what ways using AI is, in fact, the same as, or different from, traditional decision making processes.

The second concerns accountability. For reasons that may not be entirely rational, we tend to prefer a human-made decision. The process that a person follows in their head may be flawed and biased, but we feel we have a point of accountability and recourse which does not exist (at least not automatically) with a machine.

The third is that some forms of algorithmic decision making could end up being truly game-changing in terms of the complexity of the decision making process. Just as some financial analysts eventually failed to understand the CDOs they had collectively created before 2008, it might prove too hard to trace back how a given decision was reached when vast amounts of data contribute to the outcome.

The fourth is the potential scale at which decisions could be deployed. One of the chief benefits of technology is its ability to roll out solutions at massive scale. By the same token, it can also cause damage at scale.

What I've done

Taking on board the point that we need to strike the right balance between standards that offer protection and standards that help organisations innovate, I’ve adapted, refined and rephrased the principles into a set of questions.

These questions incorporate many of the other points of feedback I received, which seemed better suited to featuring in a practical tool of this kind.

The questions are those that any public sector organisation should ask itself before deploying an algorithm in a live environment, whether developed in-house or by an outsourced provider. They are intended to ensure organisations have thought through:

1) The algorithm’s purpose and appropriate circumstances of use

2) The outcomes the algorithm is intended to make possible (and whether they are ethical)

3) The algorithm’s function

4) The algorithm’s limitations and biases

5) The actions that will be taken to mitigate the algorithm’s limitations and biases; and

6) The layer of accountability and transparency that will be put in place around it.

Organisations must use their own professional judgement - and the requirements of relevant legal frameworks - to decide how detailed their responses need to be, what ethical models are most appropriate, and who needs to see their answers, based on the specific context in which the algorithm will be used.
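
As a purely illustrative sketch (the field names and structure here are my own, not part of the 20 questions themselves), those six areas could be captured in a simple structured record that an organisation completes, reviews and is able to share before go-live:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlgorithmAssessment:
    """An illustrative record of the six areas an organisation should have thought
    through before using an algorithm in a live environment."""
    name: str
    purpose_and_context: str              # 1) purpose and appropriate circumstances of use
    intended_outcomes: str                 # 2) outcomes it should make possible, and whether they are ethical
    how_it_works: str                      # 3) the algorithm's function
    known_limitations: List[str] = field(default_factory=list)  # 4) limitations and biases
    mitigations: List[str] = field(default_factory=list)        # 5) actions taken to mitigate them
    accountability_arrangements: str = ""  # 6) accountability and transparency around its use

    def ready_for_live_use(self) -> bool:
        """A deliberately blunt check: every area must have a substantive answer."""
        return all([
            self.purpose_and_context, self.intended_outcomes, self.how_it_works,
            self.known_limitations, self.mitigations, self.accountability_arrangements,
        ])
```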

However, I believe our standard should be this: that public sector organisations should not use algorithmic decision making in a live environment if they cannot sufficiently answer one or more of the 20 questions outlined at the link below.

View the 20 questions here.

Do you agree?

I invite you to check out the questions and offer your feedback by commenting in the document or via Twitter.

Find Eddie on Twitter

Author

Eddie Copeland

Director of Government Innovation

Eddie was Nesta's Director of Government Innovation.
