Collective intelligence design and effective, ethical policing
In this blog we draw on the insights of the growing field of Collective Intelligence to explore how law enforcement can change their approach to gathering and acting upon intelligence. Specifically, we examine how AI and Collective Intelligence could allow the police to become more effective both at a national and community level.
Police forces are under constant pressure to do more with less, and be more effective at catching criminals or preventing crime.
Following the recent annual Vincent Briscoe Security lecture at Imperial College London, given this year by the Commissioner of the London Metropolitan Police Service, we look at how law enforcement can change their approach to gathering and acting upon intelligence.
Evidence and Intelligence in Policing
There is a long history of innovations that aim to help police use data and knowledge more effectively. Perhaps more than other public services, police are highly attuned to questions of information and evidence, and recognise the importance of linking up multiple types of data.
Evidence-Based Policing (EBP) is a good, relatively recent example, which Nesta has had some involvement in through the Alliance for Useful Evidence and by hosting the Society for Evidence Based Policing. The drive towards greater use of data, and more experimentation, has had a big impact on policing practice, as well as throwing up surprising insights, such as the finding that one of the best predictors of domestic homicide is a previous suicide attempt.
Although there have often been regional disparities in intelligence practice, there has been a strong push to coordinate intelligence better at a national level. In 1999, the National Criminal Intelligence Service (NCIS) created the ‘National Intelligence Model’ (NIM), based upon “collective wisdom and best practice” nationally and internationally. More recently, Her Majesty's Inspectorate of Constabulary, in a 2015 report into police information management, explicitly stated: “The whole picture may well be greater than the sum of its parts. This is why linking information and building the picture of the crime are so important … and why the consequences of failing to make the right links can have a significant adverse impact on the public”.
Artificial Intelligence in UK Policing
AI is now quite widely used in policing, even if the reality is still well short of the Hollywood portrayals. For example, AI forms part of South Wales Police's facial recognition system: its face-scanning technology cross-references against a database of 500,000 custody images in real time, helps the police identify past offenders at big public events, and has already led to a number of arrests.
In Kent and Essex, the PredPol system was until recently used to predict where crimes may occur (it is still in use in dozens of US cities). The system is trained on historic crime data and uses this to highlight where and when police officers may be needed. PredPol uses three data points: the type, place and time of past crimes, to build an algorithm based on patterns of criminal behaviour. Significantly, the PredPol system has faced numerous criticisms since its launch in 2012, with a civil rights coalition calling its algorithm ‘biased and flawed’; the Met has said it will instead try to develop its own system.
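PredPol's actual model is proprietary, so purely as an illustration, here is a minimal, hypothetical sketch of how the three data points mentioned above (type, place and time of past crimes) could be turned into a ranked list of patrol hotspots. It weights each incident by an exponential time decay so that recent crimes count for more; all data, names and parameters are invented for the example.

```python
from collections import defaultdict
from math import exp

# Hypothetical crime records: (crime_type, grid_cell, days_ago)
crimes = [
    ("burglary", (2, 3), 1),
    ("burglary", (2, 3), 5),
    ("theft",    (0, 1), 2),
    ("burglary", (2, 3), 30),
    ("theft",    (4, 4), 60),
]

def hotspot_scores(records, decay_rate=0.05):
    """Score each grid cell by a recency-weighted crime count.

    Recent incidents contribute more than old ones (exponential
    time decay) -- a crude stand-in for the proprietary model.
    """
    scores = defaultdict(float)
    for _crime_type, cell, days_ago in records:
        scores[cell] += exp(-decay_rate * days_ago)
    return dict(scores)

def top_cells(records, k=2):
    """Return the k highest-scoring cells to prioritise patrols."""
    scores = hotspot_scores(records)
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(top_cells(crimes))
```

Even in this toy version, the key design question is visible: because the model is trained only on recorded crime, heavily policed areas generate more records and so score higher, which is the feedback loop critics describe as bias.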
In both of the above examples, it is important to note that officers still apply their own professional judgement when acting on the intelligence, but the AI provides significant additional information and insight.
What is Collective Intelligence Design?
AI is now widely understood. Collective Intelligence is less well known but potentially as significant for the future of policing. Large investments in applying information and computer technologies to policing risk going awry if there isn’t a comparable engagement with collective intelligence.
CI refers to the ability of large groups — a community, region, city or nation — to think and act intelligently, aided by technology, in a way that amounts to more than the sum of their parts. At Nesta we are currently growing expertise in both the understanding and practice of collective intelligence in our new Centre for Collective Intelligence Design (CCID). As part of this new research programme, we recently used machine learning and literature search to map and identify key trends and gaps in collective intelligence research.
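To make the idea of machine-assisted literature mapping concrete, here is a small, hypothetical sketch (not Nesta's actual method) of one common approach: comparing term frequencies in paper abstracts before and after a cutoff year to surface rising topics. The corpus, stopword list and function names are all invented for the example.

```python
from collections import Counter
import re

# Toy corpus of paper abstracts (hypothetical), tagged by year.
papers = [
    (2014, "crowdsourcing platforms for citizen science data collection"),
    (2015, "crowdsourcing and collective decision making in groups"),
    (2017, "machine learning methods for aggregating crowd judgements"),
    (2018, "machine learning and swarm intelligence for group forecasting"),
]

STOPWORDS = {"for", "and", "in", "of", "the", "a"}

def trending_terms(corpus, cutoff_year, top_n=3):
    """Surface terms whose frequency rises after a cutoff year,
    a crude signal of emerging trends in the literature."""
    def counts(docs):
        c = Counter()
        for _, text in docs:
            c.update(w for w in re.findall(r"[a-z]+", text.lower())
                     if w not in STOPWORDS)
        return c

    early = counts([p for p in corpus if p[0] < cutoff_year])
    late = counts([p for p in corpus if p[0] >= cutoff_year])
    rising = {t: late[t] - early.get(t, 0) for t in late}
    return sorted(rising, key=rising.get, reverse=True)[:top_n]

print(trending_terms(papers, cutoff_year=2016))
```

Real mapping exercises use richer techniques (topic models, citation networks), but the underlying logic of contrasting slices of a corpus over time is the same.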
There are now many examples of CI in practice — ranging from Wikipedia in information and Zooniverse in science, to Duolingo and other business models, and interesting experiments like DARPA’s red balloon challenge.
Their essential promise is that by tapping into a ‘bigger mind’ — new sources of information and insight — the police could become smarter and more effective.
In our work we break down the tasks of intelligence to help think through how new methods could work:
- Observation — using not just sensors and cameras but also citizen input
- Analysis — using the new generation of offices of data analytics to combine policing data with other datasets, for example to spot problem buildings
- Creativity — using challenge prizes and open innovation methods to tap into new ideas from outside the police force
- Memory — using evidence and the syntheses of ‘What Works’
It quickly becomes clear that the police aren’t yet making use of many of the tools available to them:
- Medicine is now developing many platforms to curate, orchestrate and synthesise the insights of doctors, often going well beyond formal evidence, in ways that have little equivalent in policing
- Members of the public have shown their usefulness and could act as voluntary sources of information or analysis. Again, these methods are underdeveloped in policing as far as we know
- Hybrid models could bring together data and intelligence at a local level to monitor crimes and to mobilise citizen action alongside police action
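A hybrid model of this kind could be sketched very simply: compare what citizens report through an app with what appears in police records, area by area, and flag the gaps. The data and function names below are invented for illustration.

```python
# Hypothetical weekly counts per neighbourhood: incidents reported
# by residents via an app vs. incidents in police records.
citizen_reports = {"north": 12, "south": 3, "east": 8}
police_records = {"north": 10, "south": 4, "west": 2}

def coverage_gaps(citizen, police, threshold=2):
    """Flag neighbourhoods where citizen reports exceed police
    records by more than `threshold` -- a crude signal that local
    intelligence isn't reaching the force."""
    areas = set(citizen) | set(police)
    return sorted(a for a in areas
                  if citizen.get(a, 0) - police.get(a, 0) > threshold)

print(coverage_gaps(citizen_reports, police_records))
```

In practice the hard problems are not the arithmetic but verification, privacy and consent, which the Safeland example below illustrates.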
Safeland — Connecting communities to work together safely
An interesting example of this last point is Safeland, an app first used in Sweden that takes the principles and objectives of Neighbourhood Watch and delivers them through digital technology. Residents log incidents on the app and give descriptions of suspects and other relevant information to help the police in their investigations.
As well as offering a vision of the future, Safeland also throws up questions about data-sharing, privacy and transparency. An interesting past example was the experiment to open up community CCTV to the public in Balsall Heath, which caused a big outcry because of misuse. There are also bound to be anxieties about any data sharing that involves private companies, as became clear when the data-sharing agreement between the AI company DeepMind and the Royal Free London NHS Foundation Trust was criticised by the Information Commissioner because of a lack of consent from 1.6 million patients.
But if the risks can be mitigated, the sort of model proposed by Safeland, a partnership between citizens and the police, points to an attractive future where communities are more empowered to understand and act on patterns of crime.
What are other considerations and threats?
Whilst AI and data ethics are receiving increasing attention and government backing, questions remain about the practical details of how to increase data access while protecting privacy. So-called ‘data trusts’ offer one approach to stewarding data, ensuring that the public retains trust in how entities collect, maintain and share data, but there is still no consensus on what a data trust would look like in practice. ‘Algorithms in Policing: Take ALGO-CARE™’ proposes another decision-making framework for the deployment of algorithmic assessment tools in policing. Developed in collaboration with Durham Constabulary, it shows how ethical considerations, such as the public good and moral principles, can be factored into early deployment decisions.
Enforcing Britain's laws is ultimately still a human process; ambiguously written laws, the inconsistencies of judges, juries and police officer discretion are just some of the human elements involved in the UK's policing and Criminal Justice System.
But there can be little doubt that these practices will increasingly be enhanced by emerging technologies. These bring with them new challenges of regulation and ethics, which are bound to lead to new tasks for the College of Policing and Her Majesty's Inspectorate of Constabulary and Fire & Rescue Services (HMICFRS).
But our hunch is that the greatest value of these technologies will come from using them to make policing more, not less, human: better at harvesting information and insights from citizens; better at making policing practice visible; better, in short, at combining the best of artificial and collective intelligence.