Shadow of the smart machine: Algorithm-guided decision making in the public sector
Given rising pressure from demographic change and shrinking finances, the public sector is having to look for new ways to support and manage demand.
In parallel, there is an ongoing explosion in the sources and volumes of data available to understand and predict future demand, together with a reduction in the technical cost of making sense of this data.
Consequently, there is a growing interest in using this data (and related Big Data technologies) to build algorithms that support more timely and accurate decision making – whether this is to support strategic commissioning decisions or the targeting of interventions.
Challenges to the wider use of algorithmic approaches tend to be based on:
- ethical considerations around whether algorithmic approaches could inadvertently embed operational bias;
- a lack of confidence that accurate algorithms can be built;
- a related concern about the impact of false positives and false negatives generated by algorithms;
- information governance issues related to the use of multi-agency data as inputs into algorithms.
Ethical challenges are often premised on the assumption that algorithms will be implemented as an autonomous system. Although there are a few examples in the private sector (e.g. financial credit scoring systems), it is highly unlikely that this type of autonomous implementation would occur in the public sector.
More realistically, algorithms are used to support and guide human decision-making processes, which in the public sector generally occur in a well-defined policy context and, where the provision of services is concerned, through clear eligibility criteria. If systemic bias exists, it is more likely to be a result of policy or eligibility bias: areas where assurances and controls (should) already exist.
With respect to the quality of input data, there is significant variation across service areas, ranging from very high quality data in benefits and education systems to more ad hoc data in some commissioned provider organisations. Ultimately, the real test of whether this is a show-stopper lies in a comparison between algorithm-supported and unsupported business processes.
However, no algorithm (or human decision-making process) is free from error; the challenge lies in building business processes that minimise the harm that results from error. In our experience, the complexity of these challenges varies by business area. For example, the implementation of algorithms to detect fraud presents significantly fewer challenges than those designed to detect children at risk of harm.
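Part of the reason the challenges differ by business area is the base rate of the event being predicted: the rarer the event, the more false positives even an accurate algorithm generates relative to true positives. The sketch below illustrates this with entirely hypothetical prevalence and accuracy figures (the 5%, 0.5% and 90% numbers are assumptions for illustration, not estimates from any real system).

```python
# Hypothetical illustration: a reasonably accurate algorithm still
# produces many false positives when the event it predicts is rare.

def flagged_cases(population, prevalence, sensitivity, specificity):
    """Return (true positives, false positives) for a screening algorithm."""
    positives = population * prevalence          # genuine cases in the population
    negatives = population - positives           # everyone else
    true_pos = positives * sensitivity           # genuine cases correctly flagged
    false_pos = negatives * (1 - specificity)    # non-cases incorrectly flagged
    return true_pos, false_pos

# Fraud detection: a relatively common event (assumed 5% prevalence).
tp, fp = flagged_cases(population=10_000, prevalence=0.05,
                       sensitivity=0.9, specificity=0.9)
print(f"Fraud: {tp:.0f} true flags vs {fp:.0f} false flags")
# → 450 true flags vs 950 false flags

# Children at risk of harm: a rare event (assumed 0.5% prevalence).
tp, fp = flagged_cases(population=10_000, prevalence=0.005,
                       sensitivity=0.9, specificity=0.9)
print(f"Risk:  {tp:.0f} true flags vs {fp:.0f} false flags")
# → 45 true flags vs 995 false flags
```

Under these assumptions, the same 90%-accurate algorithm produces roughly two false flags per true flag for the common event, but over twenty false flags per true flag for the rare one: a gap that matters enormously when the resulting intervention is intrusive.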
These complexities are driven by a number of factors, including:
- whether we are trying to detect historic undeclared behaviour or predict future events;
- issues of consent;
- the 'black box' nature of algorithms;
- whether the resulting interventions are punitive.
In the context of the public sector, all of the above issues feed into a general sense of wariness that often emerges as an information governance (data sharing) challenge. Unless carefully considered, this can limit the data available for algorithm development and, consequently, the accuracy of models.
Given the pressures faced by the public sector, using algorithms to augment human decision making is essential if scarce resources are to be better managed and targeted. This is clearly more problematic where algorithms are being developed to target punitive interventions in systems with a recognised bias, for example the policing system.
From a wider perspective, tech giants will continue the drive towards autonomous implementation of algorithms (e.g. driverless cars), propelled by perceived public demand but in the absence of any analysis of the wider positive and negative impacts on society.
This blog is part of the Shadow of the Smart Machine series, looking at issues in the ethics and regulation of machine learning technologies, particularly in government.