Yesterday I helped launch the government’s Data Science Ethical Framework at Nesta, alongside Cabinet Office minister Matthew Hancock. Here’s what I said.
Almost everything we do now is captured as data: where we go, what we buy, who we talk to. Data is also very much part of daily work at Nesta. We are heavily involved as an investor in some of the most promising early-stage companies developing algorithms, such as Featurespace or Cogbooks; as a shaper of new kinds of statistics, for example combining open data, commercial data and web-scraping to understand the economy in Tech Nation 2016; we’ve promoted data for social progress; helped use open data to solve public challenges; and we are active in experiments right at the forefront of ethics, such as Dementia Citizens.
All of that makes us supportive of anything that will make it easier to run useful, imaginative experiments in data, and supportive of anything that makes it more likely that data can serve the public interest.
The UK has a remarkable history in this space. It pioneered many initiatives that we would now describe as ‘data first’: gathering data to discover patterns rather than waiting for theories. The UK parliament’s Blue Books collected vast quantities of data. Figures like William Playfair (inventor of the line graph, bar chart and pie chart) made data accessible as well as insightful. William Petty invented much of modern economics by creating new ways of orchestrating data on income and assets. John Snow invented crucial parts of modern public health through his analysis of cholera in the mid-19th century, alongside seminal figures like Florence Nightingale.
It’s fascinating that three centuries ago this field was called ‘political arithmetic’ (or, to be more precise, ‘political arithmetick’). The heart of the issue today is that data is inevitably political, with a small p, as well as technical. It creates, shifts and reshapes power. And it inevitably touches on acutely ethical issues.
The dramatically more powerful tools now available for gathering and analysing data make this even more obvious. Data becomes most valuable when connected and combined. But data also potentially becomes most dangerous when it’s connected and combined. That’s why, from a policy perspective, two routes have to be pursued in close tandem. On the one hand, we have to make it easier to combine data of all kinds so as to unlock the greatest value, by spotting patterns, and by predicting and analysing better. On the other hand, we also have to put in place stronger protections against abuse and cultivate a constant dialogue to ensure that we reap the greatest potential benefits with the fewest possible harms.
That’s not easy. The public are right to distrust big institutions that have in the past been careless with their data: civil servants leaving computer discs in car parks or private companies losing vast databases. It’s also not easy because the data science field itself tends to be more fascinated with means than with ends, and has often failed to explain what problem it is trying to solve, or what need is being met.
The good news is that although there are plenty of gaps in public understanding and knowledge, citizens are highly pragmatic, willing to accept trade-offs, and to share data if they can see a benefit, even with the most sensitive personal data.
But that pragmatism is highly context specific. We can try to define general principles and then deduce specific applications. But in practice the discussions have to be around particular cases in particular places and times.
Government has no choice but to be part of the conversation. Private organisations can and should attempt to embed ethics and engage the public. But ultimately the rules will have to involve accountable public power, and it’s neither fair nor realistic to expect private firms to solve problems which are by their nature public.
That’s why a few months ago I proposed the creation of a Machine Intelligence Commission, learning lessons from other fields, such as human fertilisation, to help navigate a route through the potential minefields of powerful new algorithms and data uses.
When they work well, commissions of this kind make it easier to experiment and innovate, striking a proportionate balance between freedoms and restraints. They participate actively in public dialogue and help people think through what’s possible and desirable. That’s going to be vital in relation to data and machine intelligence in fields like health, transport or security. By contrast, if governments ignore the question of data ethics, it is all but guaranteed that periodic public backlashes will make it much harder to reap the full benefits of new tools.
Knowledge is power. It’s never wholly neutral. That’s why it’s right that scrutiny, argument and, yes, democracy, play their part in the next evolutions of data and machine intelligence.