Human vs Machine: why we need to be thinking more about how humans are interacting with AI tools

Over the last few years, there has been robust discussion and debate about the technical challenges associated with AI tools being used in the public sector; for example, the challenge of algorithmic bias and a lack of algorithmic transparency. However, there has been much less conversation about the challenges associated with supporting public sector employees to use these tools as intended. This post argues that supporting a productive human-machine interaction is critical if we want AI tools to succeed, and offers some practical suggestions around how public sector organisations might provide this support.

Molly is a social worker in children’s social care in a London borough. She works as part of the “Front Door team”, fielding calls from concerned citizens about children’s welfare. Molly and her team act as a first contact point; it is their role to determine whether the call warrants further investigation or not. This is no easy task.

Molly is working under intense time pressure. Front Door social workers across a range of London boroughs have told me that they are expected to process between 10 and 15 cases per day; that’s almost two cases per hour.

Molly is also working with imperfect and incomplete information and is constrained by the limits of her brain’s processing power. Molly can draw on the information provided on the phone call and any case notes that exist. She may call the family to try to gather more information and may also call any other professionals involved in the child’s life. However the information that she has access to is limited, and time pressures mean that she has to make decisions quickly. Molly is making decisions under what behavioural psychologists call “conditions of uncertainty”.

How humans make decisions

In the 1940s, Herbert Simon introduced the concept of “bounded rationality” to explain how humans, like Molly, make decisions in these kinds of situations. In his book, “Administrative Behavior: A Study of Decision-Making Processes in Administrative Organization” (1947), Simon writes:

“It is impossible for the behaviour of a single, isolated individual to reach any high degree of rationality. The number of alternatives he must explore is so great, the information he would need to evaluate them so vast that even an approximation to objective rationality is hard to conceive.”

And so, he explains, people “satisfice”. Someone like Molly, when making her assessment of the child, would review the available information, process what she humanly can, and then rely on her intuition to find her way to a satisfactory - but not necessarily optimal - solution.

Simon’s thinking aligns with what has now been labelled the Naturalistic Decision Making (NDM) movement. This school of thought holds that, when humans satisfice, they generally apply expert intuition - an acceptable, and often quite productive, feature of human decision making.

In contrast to this stands the Heuristics and Biases (HB) approach, spearheaded by Kahneman and Tversky. According to this school of thought, humans tend to rely on irrational beliefs and biases when they satisfice. One example is anchoring bias, where people latch onto a specific piece of information and place too much emphasis on it when making decisions.

It is important to acknowledge that, while these schools of thought appear to sit in tension with one another, they agree on some fundamental ideas:

  • Naturalistic decision makers acknowledge that irrational biases exist and should be avoided.
  • Heuristics and Biases practitioners recognise that expert intuition exists and is something to be embraced.

Kahneman and Klein (2009) therefore describe the main difference between the schools as follows:

“Members of the HB community… tend to focus on flaws in human cognitive performance. Members of the NDM community know that professionals often err, but they tend to stress the marvels of successful expert performance.”

As such, the main difference between these schools of thought appears to be whether they take an optimistic or a pessimistic view of how humans satisfice. Both acknowledge that we want more expert intuition and less irrational bias.

Algorithmic decision tools

In recent years, algorithmic decision support tools have been introduced to support the decision-making of public sector workers, like Molly, who are having to make decisions under conditions of uncertainty. This is because many studies have demonstrated that actuarial models are more accurate than clinical judgment in predicting risk.

Algorithms access and process vast quantities of data - far more than a human mind ever could - and use sophisticated statistical methods and machine learning to analyse probabilities.

The information analysed by the algorithmic tool is presented to public sector workers in a simple form - a briefing, dashboard, or something similar. It is intended to inform their decision-making processes by providing them with more objective information, which should hopefully minimise the need to satisfice.
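To make the idea more concrete, below is a minimal, purely illustrative sketch of the kind of actuarial risk-scoring model such tools are built around. Everything in it is a hypothetical assumption made for illustration - the features, the data and the choice of a simple logistic regression - and real tools draw on far richer data and more sophisticated methods.

```python
# Purely illustrative sketch of an actuarial risk-scoring tool.
# The features, data and model are hypothetical; real tools draw on far
# richer data and more sophisticated machine learning.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical referrals:
# [prior_referrals, days_absent_from_school, parental_age]
X = np.array([
    [0, 1, 34],
    [3, 10, 22],
    [1, 0, 41],
    [5, 15, 19],
    [0, 2, 30],
    [4, 12, 25],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = case was escalated after assessment

model = LogisticRegression().fit(X, y)

# A new referral arrives; the tool converts it into a probability of harm...
new_referral = np.array([[2, 7, 28]])
risk_score = model.predict_proba(new_referral)[0, 1]

# ...and surfaces it to the worker as a simple briefing, not a decision.
print(f"Estimated risk score: {risk_score:.2f} (to be weighed alongside professional judgment)")
```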

Unsurprisingly, members of the Heuristics and Biases school of thought are very supportive of algorithmic decision tools, whereas the Naturalistic Decision Making community is usually distrustful of them.

From satisficing to artificing

However, it is important to recognise that many algorithmic decision tools are being designed with the intention that they be used to supplement, rather than supplant, the expert judgment of professionals using the tools. For example, the developers of the Allegheny Family Screening Tool write:

“Although experts in [Child Protective Services] risk assessment now generally agree that actuarial tools are more effective in predicting risk of child maltreatment than clinical judgment alone, these tools cannot and should not replace sound clinical judgment during the assessment process.”

Thus, even with the introduction of algorithmic tools, humans are being encouraged to apply their expert intuition, meaning that - when used this way - algorithmic decision tools do not eliminate satisficing altogether.

I will call this process “artificing” - the form of satisficing which persists following the introduction of algorithmic decision tools.

The different ways algorithmic decision tools can be used

Since the introduction of algorithmic decision tools into the public sector, there has been considerable discussion and debate about the potential risks caused by AI's technical shortcomings, such as algorithmic bias and a lack of algorithmic transparency.

However, a question that seems to be attracting far less attention is whether the tools are being used by public sector workers as they were designed to be used. How do humans feel about these tools, and how do those feelings shape human-machine interactions?

Also - to what extent should we expect algorithmic tools to displace the biases and heuristics that characterise human decision-making?

I suspect (and will be conducting field research over the summer to test this assumption) that often the tools are not being used precisely as intended. In fact, there appear to be four ways that Molly could be using the algorithmic tool to support her decision-making: (1) she could be using the tool as intended - considering the advice provided by the algorithm and augmenting it with expert intuition; (2) she could be relying on the tool to guide her decision-making, but supplementing that process with bias and heuristics; (3) she could be deferring to the tool entirely; or (4) she could be ignoring the tool. I will explore each in turn.

1. Using the tool as intended - “expert artificing”

Example: Molly receives a call about a child who has been absent from school for an entire week with no explanation. The decision tool suggests that the child is low risk, but Molly calls the child’s mother and observes her to be wildly erratic over the phone. Reflecting on the thousands of conversations that she has had over her 20 years working with families, Molly can only think of a few instances where a parent has seemed so unstable. Molly feels that it is highly likely that the child’s parent is experiencing something like an acute onset psychosis. Molly relies on her professional judgment and decides to escalate the case against the advice of the algorithmic tool.

Here we see Molly drawing on her expert intuition to diagnose a parent who is likely experiencing acute psychosis, requiring immediate intervention. This is something that a statistical tool would be highly unlikely to pick up, given the rarity of that kind of presentation.

It is for precisely this reason that the designers of algorithmic tools encourage “expert artificing”. For this to happen, it is important that public sector organisations stress to workers using AI tools that their professional judgment and expertise are still very important. Workers should be explicitly encouraged to continue to exercise their skilled intuition as part of their decision-making process, even following the introduction of an AI tool.

2. Using the tool together with bias and heuristics - “biased artificing”

Example: Molly receives a call from a concerned neighbour about a child whose parents have been shouting a lot. The neighbour has also heard slamming doors and the sound of children crying on a fairly regular basis for the last three days. The tool suggests that the child is low risk; however, that morning, one of Molly’s colleagues told her a story about a child who died after reports about shouting parents were deemed to not require further investigation. On that basis, Molly progresses the case for a face-to-face assessment.

Here, we can see that Molly is making her decision based on emotion and “recall bias” (akin to the availability heuristic), meaning that her assessment of the current case is strongly influenced by her recollection of a similar event.

As discussed above, it is well accepted (by all schools of thought) that bias and heuristics characterise human decision making. However, there appears to be very little - if any - discussion about how algorithmic tools are going to interact with that phenomenon.

Much of the literature on algorithmic decision tools appears to ignore the challenge of human bias in decision-making altogether; the implicit assumption being that if significantly more information is made available in the form of an algorithmic tool, humans will rely on it to make more objective and analytical decisions.

However, I would suggest that while a decision tool might reduce the extent to which people rely on cognitive shortcuts, irrationality and bias are an inherent part of what it means to be human and will almost certainly continue to feature in people’s decision-making processes despite the introduction of such tools - let’s call this “biased artificing”.

It is important that public sector organisations acknowledge that algorithmic decision tools are not a silver bullet which will somehow eliminate all irrationality from human decision-making. Being realistic about the limitations of the tools allows other measures to be put in place to work alongside them and support the best possible decision-making.

What can be done to minimise biased artificing? Education about cognitive biases is critical. Providing workers with a simple guide is a great start because merely alerting people to the fact that biases exist has been shown to result in better decision making.

In addition, some simple suggestions for minimising bias include:

  • Asking colleagues to take a devil’s advocate position.
  • Using de Bono’s Six Thinking Hats approach to support consideration of alternative viewpoints.
  • Using the Ladder of Inference to critically reflect on the thinking process that has underpinned the decision.

3. Relying entirely on the tool and failing to apply professional judgment - “algorithmic deference”

Example: Molly receives a call about a child who has been reported to have had consistent bruising on her arms and legs for the last three weeks. The algorithmic decision tool gives the child a low risk rating. Despite her misgivings, Molly defers to the advice of the tool and assesses the child as requiring no further action.

What Molly is doing here is demonstrating a form of “automation bias”: favouring the advice of the algorithmic tool despite her own judgment suggesting that further investigation is probably warranted. This is a well-known phenomenon, which has been observed across a range of sectors including aviation, healthcare and the military.

Some of the literature suggests that less experienced public sector workers are more likely to be subject to automation bias than those with more experience.

This is a problem because – for the reasons discussed above – public sector algorithmic decision tools are generally designed to be used in conjunction with professional judgment.

Here, the solution appears to lie in educating public sector workers about the tool and how to use it. Training around how to use the tool should focus on two things: firstly, it should make explicit the limitations of the tools so as to displace any sense of intimidation that public sector employees might feel; and secondly, it should stress the importance of workers continuing to apply their professional expertise to their decisions (as mentioned above).

4. Ignoring the tool - “algorithmic aversion”

Example: Molly does not read the briefing provided to her by the algorithmic decision tool. She resents that it has been introduced because she feels that the tool oversimplifies the complexity of child welfare cases. She also feels that the tool undermines her professionalism; she trusts her professional judgment, gained over 20 years working with families, much more than she does a computer.

There is compelling evidence that decision tools are often not used as intended and have little effect on professionals’ behaviour. For example, in his book “Sources of Power”, Gary Klein tells the story of the US military spending millions of dollars in the 1970s and 1980s developing decision tools for commanders, only to abandon them because they weren’t being used.

If public sector workers are ignoring algorithmic tools because they resent them and don’t believe in their capacity to support better decision making, the tools will – for obvious reasons – not have any impact at all.

When people are ignoring the tool it’s likely because they feel alienated, angry and distrustful. As such, the solution here turns – to a large degree – on empowering the people who are using the tool and providing them with a sense of control.

One option is to include users of the tool in the design process. A great example of this is the Xantura tool, whose designers have been closely engaging with social workers to understand how its data analysis will be most useful to them.

Another solution is to give users of the tool a sense of control over the algorithm itself. It has been demonstrated that people are more likely to use algorithms if they can (even slightly) modify them. As such, offering public sector workers the opportunity to provide feedback if they feel they are seeing the algorithm err would likely engender greater trust and therefore use of the tool.

How to move everyone towards expert artificing

Above, I’ve outlined the different ways that algorithmic decision tools can be used.

The four approaches are not necessarily mutually exclusive. For example, Molly could be ignoring the tool and instead relying on both expert and irrational intuition to guide her decision. Nevertheless, this framework offers a neat way of thinking about how humans interact with AI tools. Moreover, the fact that categories blur has little impact on what steps should be taken to improve human-machine interactions.

Human-machine interaction - a framework for understanding different approaches

Clearly, public sector organisations want to move everyone towards using the tool in the way it was designed to be used – what I have called “expert artificing”. Since it is unrealistic to determine exactly how each person is using the tool, and where they might be deviating from the intended approach, it is more pragmatic (and should not be overly onerous) to adopt a universal approach to AI implementation that supports people to use algorithmic decision tools well.

The solutions outlined above can be converted into a simple checklist for public sector organisations to consider when introducing algorithmic decision tools, to support people to use them to best effect:

AI checklist - a summary of the solutions outlined above:

  • Stress to workers that their professional judgment and expertise remain important, and explicitly encourage them to keep applying their skilled intuition alongside the tool.
  • Educate workers about cognitive biases, and offer simple techniques (a devil’s advocate, Six Thinking Hats, the Ladder of Inference) to challenge them.
  • Train workers in how to use the tool, being explicit about its limitations so that no one feels intimidated into deferring to it entirely.
  • Involve the people who will use the tool in its design.
  • Give users a sense of control over the tool, including a way to provide feedback when they believe the algorithm has erred.

Why we need to be thinking more about how people are interacting with algorithmic decision tools

In thinking about the efficacy of algorithms, we must think beyond the tech and think deeply about how humans are using these tools.

We cannot assume that humans will embrace algorithmic decision tools or use them as they are intended to be used. Public sector workers might feel intensely sceptical and resentful of the tools and therefore ignore them. They might feel intimidated, and therefore defer entirely to the tool. In addition, it is unlikely that the introduction of algorithmic decision tools will overcome the irrational biases and heuristics that humans rely on to guide their decision making in conditions of uncertainty.

Because of this, we need to approach the introduction of algorithmic decision tools into workplaces with a human-centred approach, thinking about how human minds work; about how humans feel; and about the machine-human interface and interaction.

The checklist above offers some initial ideas around how to do this; no doubt there are many more. Fundamentally, though, what is needed is for public sector organisations who are introducing algorithmic tools to support their workers to artifice well because – even with the best possible tool – if it’s not being used as intended, its utility and impact will be limited.

Author

Thea Snow

Senior Programme Manager, Government Innovation

Thea was a Senior Programme Manager in the Government Innovation team.