In a world where everyone and everything is being connected up, it's going to become more important that we find new ways of making sense of the vast amounts of data we're generating.
There are two significant challenges to doing this well. The first is embedding data teams within your organisation's structure, and the second is opening up more public and commercial datasets so we can all benefit.
Data visualisation is a growing field of expertise. It combines the skills of the designer with those of the developer and the journalist, to create interactive layouts that help people explore data for themselves.
But nowadays forward-thinking organisations are creating multi-skilled teams to bring data visualisation in-house and develop their own tools for telling stories with data.
News organisations have always been good at translating complex events into concise stories and communicating them visually. It's no surprise then that some of the first data visualisation teams have flourished in that environment.
The Guardian and the New York Times both have large data viz teams working closely with journalists and researchers to develop compelling visualisations. My favourite examples are the classic government spending bubble map from the Guardian and the simple, yet powerful, dot map charting New York Times readers' approval rating of Barack Obama when news of Bin Laden's death was announced.
But it's not just news organisations who are doing this. Public bodies have started seeing the benefit of sharing back the data that they collect. The Office for National Statistics has formed a new data viz unit, led by Alan Smith, whose main job is to turn the huge wealth of data the ONS sits on into usable and compelling visualisations. They've spun up their own simple, responsive site to give them more flexibility in what they can publish.
Over the next year we're going to see the role of the Data Visualiser becoming more mainstream - both as a jack-of-all-trades, master-of-some role for smaller organisations, and as teams in their own right for larger and more ambitious organisations.
For these teams to work well, they're going to need the new renaissance skills mix of designer, developer and journalist.
But it's not just about getting the right people with the right skills and giving them permission to work. Data visualisation teams will need access to more than their own organisation's data to generate value, and opening up data remains a key challenge.
Without a proper governance structure for the UK's public data, it's going to remain a piecemeal and painful job to clean and correlate datasets from multiple sources.
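To give a sense of why this is painful, here is a minimal sketch of the kind of cleanup that correlating two public datasets often requires. The datasets, figures and column meanings below are entirely hypothetical; the only real convention assumed is that a UK postcode's inward code is its final three characters.

```python
# Hypothetical example: two sources publish data keyed on the same
# postcodes, but in inconsistent formats (case, spacing).

def normalise_postcode(raw):
    """Normalise a UK postcode to the canonical 'OUTWARD INWARD' form."""
    s = raw.strip().upper().replace(" ", "")
    # The inward code is always the final three characters.
    return s[:-3] + " " + s[-3:]

# Invented sample data, formatted the way two different bodies might publish it.
spending = {"SW1A1AA": 120_000, "ec1a 1bb": 45_000}   # council spending (GBP)
surgeries = {"SW1A 1AA": 3, "EC1A 1BB": 1}            # GP surgeries per postcode

# The datasets can only be correlated after normalising both sets of keys.
spending_clean = {normalise_postcode(k): v for k, v in spending.items()}
merged = {pc: (spend, surgeries.get(pc)) for pc, spend in spending_clean.items()}
print(merged)  # each postcode paired with its spending and surgery count
```

Even this toy join needs a normalisation step; real datasets add missing values, renamed fields and conflicting licences on top, which is exactly what a shared governance structure would reduce.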
In 2002 the Danish government released their postcode file for the entire country. They estimate that this single action generated direct financial benefits at a ratio of 30:1 to costs.
The UK has yet to release its Postcode Address File (PAF) - the official list of all UK addresses. As Nigel Shadbolt, Chairman of the Open Data Institute, points out, PAF is currently expensive to use, inflexibly provided and licensed in a complex way. If it were released openly, argues Shadbolt, usage would increase by a factor of between 10 and 100.
Organisations like the Open Data Institute are lobbying hard for our government to put in place a UK-wide data strategy to help us realise the immense value that opening up and standardising our public datasets could bring. While some of these gains might not be immediate, they are on the roadmap.
What is further away, and more difficult to negotiate, is the huge amount of data stored and owned by global proprietary systems that make money from our online behaviour and restrict how much access we have to it.
Mobile phone data, social data, search data - all of this wealth is kept by a few rich technology companies that make a huge profit from what we do in this connected world. In return they provide us with 'free' services, from maps and apps to email and business collaboration tools. But is the trade-off worth it?
These free services are designed to keep your data locked into their systems, where the value it generates trickles back to the top.
Vitalik Buterin, speaking at Nesta's FutureFest, highlighted our predicament when he said "it's more difficult to change your digital identity in 2015 than it is to change your country of residence".
The writer Cory Doctorow has gone even further. In his eyes we have become digital serfs in a new medieval world. We rent our plot of online land from the feudal IT barons of Google, Facebook and Baidu.
It is a state of affairs that will have its reckoning in the next decade, as a new generation starts to demand more privacy and control over their data. Eventually, those privacy agreements we all accept without reading will need to be read to the end and then renegotiated.
As more public datasets are made accessible, and the governance around them improves, data visualisation will become part of our everyday experience. Council spending, local surgery times, school reports, traffic updates - it will all be presented to us visually so we can understand it more clearly and explore it ourselves.
This will be an essential tool for government, public bodies, charities and non-profits. The data will enable them to communicate their work better and will help them provide an improved service for their customers - us.
What won't happen so fast is the tech giants giving up their data for the public good. For them it really is the new oil* - and until we stop feeding the wells, its value is only set to grow.
*The oil analogy has its own history. Clive Humby first said "data is the new oil" back in 2006 - a nod to the wealth it was creating for the people who could refine it. David McCandless rephrased it as "data is the new soil" - out of which hundreds of thousands of new ideas can grow. Nigel Shadbolt still takes issue with the oil analogy: "Data isn't some fossil fuel. It's much more powerful than that."