Who’s responsible for building trust with personal data?
As the age-old saying goes, knowledge is power.
The continued relevance of this morsel of wisdom seems to underpin at least part of the growing unease at the amount of data that corporate - and especially internet - giants hold about us, their users. Is it healthy, many wonder, for so much personal data to be held by so few organisations? Should we mind that closed corporations have more information than democratically elected governments?
In order to make sense of this, the old saying needs updating.
If knowledge really were power then anyone with an internet connection and access to Wikipedia would be an oligarch or millionaire. Possessing information is not enough. It’s what you do with it that counts. In short, data is useful to the extent that it can be acted upon. The anxiety about the power some corporations could potentially wield is therefore based not merely on the fact that they have a lot of data, but how they use it.
This matters when we consider that for businesses to thrive over the long term, their customers need to trust them and their activities. Historically, people largely transacted with local businesses where there was some knowledge of the owner. Trust came through personal relationships. Clearly in the internet economy - where we exchange our data for services and products from organisations anywhere in the world - that is no longer possible.
There are technical solutions that might help address the trust gap, which I’ll discuss in future blogs. But for now, I think part of the solution lies with individuals, businesses and governments each taking on some extra responsibilities.
Individuals who don’t want to be caught out by how their personal information is used might learn from the world of financial services. Consumers are constantly warned not to sign up to financial products - mortgages, loans, credit cards and so on - if they don’t understand the small print and the consequences of different future scenarios. The widespread failure to do this is widely held to have been one important factor in the financial and housing crash of 2008, when many US citizens overstretched themselves with mortgages they couldn’t possibly afford once house prices stopped rising. Applying the same principle to the internet economy, one rule of thumb for consumers would be this: do not sign up for a service if you do not understand the business model behind it.
I was struck by this point when I was recently shown an app that helps users control their urge to compulsively check their phone. It entails letting a virtual tree grow taller based on how long the phone has been left idle. It’s a neat little idea. But when you download the app, it requests access to - among other things - the user’s location, photos, media and files, wifi information, bluetooth information, device ID and call information. On the face of it, none of those pieces of information is required for the function that most users would think the app was performing. Clearly something more is going on. My advice? Either dig into the business model that requires all that information and decide if you’re happy with it, or leave well alone.
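To make that concrete, here is a hypothetical sketch of the kind of Android manifest excerpt such an app might ship with. The permission names are real Android permissions corresponding to the access described above, but this is illustrative only, not the actual app’s manifest (the package name is invented):

```xml
<!-- Hypothetical AndroidManifest.xml excerpt, for illustration only.
     A tree-growing focus timer plausibly needs none of these
     permissions to perform its core function. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.focustree">
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />  <!-- location -->
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" /> <!-- photos, media, files -->
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />     <!-- wifi information -->
    <uses-permission android:name="android.permission.BLUETOOTH" />             <!-- bluetooth information -->
    <uses-permission android:name="android.permission.READ_PHONE_STATE" />      <!-- device ID, call information -->
</manifest>
```

On Android, you can inspect an installed app’s requested permissions yourself in the system settings before deciding whether the trade-off is acceptable.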
Encouraging people to behave as informed consumers may sound straightforward enough, but how realistic is it to expect individuals to personally investigate the business models of the services they use? The legal Ts & Cs we unthinkingly tick are well known to be inadequate for this task. This is where there is scope for a more proactive role for businesses. What if companies declared their business model(s) - i.e. how they make money for each service - openly on their websites, and especially when users sign up (or at any subsequent time when the business model changes)? I have in mind a short, plain-English paragraph explaining the revenue model and, if relevant, how users’ data plays a role in it. This could be done on a voluntary basis with a kitemark to promote good practice. Or there could be a stronger approach where governments legislate to require it (potentially with a grace period for startups that have yet to land on their final model). Transparency is one powerful step on the route to trust.
Thinking ahead, there may soon be a greater role for government in promoting trust. If power derives from how data is used, then understanding that power will increasingly require digging into the function of algorithms. As online services become more complex and driven by AI, the way services are provided will often be dictated by the decisions of algorithms. When governments use algorithms to shape and deliver public services, a strong case can be made that those algorithms should be made open source to allow for public inspection. But this may be hard to replicate for the corporate world, where the models are commercially sensitive. In that case, governments could require that all companies subject to financial audits also undergo an annual algorithmic audit to confirm that their algorithms do what the company declares them to do in their business model statement.
In it together
Whatever the exact behaviours and policy responses we might finally land upon, the key point is that our current model of blindly trusting organisations with our most personal information is unlikely to be sufficient or sustainable for long. If we are to recreate informed trust for the digital age, no-one can pass the buck. Individuals, businesses and governments must all take responsibility for their part of the solution.