While the idea of decentralisation is not new, it is being given fresh impetus and new possibilities by innovations in technological decentralisation. The physical internet and the World Wide Web which runs on top of it were both originally conceived as decentralised ecosystems, in which users connected directly with one another and no single organisation had ownership or control. As the Web’s inventor put it, it was intended to be a democratic ‘place where we can all meet and read and write’. This lack of a centralised authority made the web resilient and democratic, ripe for ‘permissionless’ innovation. Interoperability depended on common standards, but these were agreed by consensus rather than imposed by authority. Many early pioneers in the 1970s and 1980s were motivated by a utopian vision of the internet as being detached from traditional structures, with the potential to democratise knowledge and power.

However, this vision didn’t last long: within the last few decades, the internet has undergone significant centralisation, with most information now flowing through just a handful of tech corporations. Whilst such centralisation has had some positive aspects, such as making the internet more user-friendly, it has come at a significant cost.

  • First, centralisation has placed unaccountable organisations in powerful positions as information gatekeepers. The majority of people now access their news and other information through a small number of web platforms. This puts the companies that run these platforms in the position of gatekeeper or even censor, with the ability to control what people hear, read and watch. While this may improve the relevance or quality of content, it also gives firms the power to make certain pieces of information effectively invisible to the world. Even if these organisations are not intentionally malevolent, this gives them unparalleled control over us and our democracies.
  • Second, centralisation threatens privacy. Because so much of our online activity flows through relatively few channels, big tech firms possess vast amounts of information about us and our private lives. Moreover, since the business models of many of these firms are based on their ability to collect user data and sell it to third parties, there are strong incentives for them to aggregate and interlink such data. Recent abuses, like the Cambridge Analytica scandal in which Facebook data was illegally harvested to build psychographic profiles of potential voters, have increased calls for users to have more control over their personal data. There have also been multiple instances of employees of centralised systems abusing their position to access private content.
  • Third, centralised systems create fragility and single points of failure. For example, by centralising records in one database, Equifax made itself an attractive target for hackers; the 2017 breach of its system exposed the personal data of up to 143 million people. In the same year, a typo by an engineer at Amazon Web Services (AWS), Amazon’s cloud hosting arm, caused an outage that brought several other large web services down with it.
  • Fourth, centralised platforms do not equitably distribute the value they capture among those who create it. It has often been argued that the free use of these web platforms does not come close to fairly compensating users for the value of the data and content they create, and that users – not just shareholders and executives – should be financially rewarded for their contributions.

Alongside these issues, the ‘winner takes all’ dynamic of the centralised web – reinforced by network effects and the costs of migrating to a different provider (e.g. losing all the personal information held by the incumbent) – prevents small new firms from gaining a foothold, limiting competition, consumer choice and innovation. While critics might argue that decentralised platforms would become monopolised in the same way, this may not be the case: because your information would be held in a decentralised, open-source database, it would be easier to switch to an alternative provider offering a more attractive service, or to leave one that did something you did not like.

For these reasons, there is a large movement of people supporting the ‘re-decentralisation’ of the internet. This movement, which includes the Web’s inventor, Sir Tim Berners-Lee, envisages an internet that, once again, is not reliant on centralised operators or intermediaries; where users own and control their own data and interact directly with one another free from surveillance or censorship while still having access to the same breadth and quality of services.

Peer-to-peer (or ‘P2P’) file-sharing services, such as Napster, LimeWire and BitTorrent, have been popular since the late 1990s. These allow people to download data directly from other people who already have the file, rather than from a single centralised server. Participants in the network typically act as both suppliers and consumers of resources, so that once a file has been downloaded by a user, the user’s computer then hosts it for others to access. The fact that there is no central server makes the system resistant to censorship, which is why such systems have been used to distribute pirated movies, music and software.
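To make the ‘supplier and consumer’ idea concrete, here is a minimal sketch in Python. It simulates a swarm of peers in memory rather than over a real network, and the Peer class and its chunk-passing logic are illustrative assumptions, not the actual protocol of Napster, LimeWire or BitTorrent.

```python
class Peer:
    """A toy peer that both downloads and serves file chunks."""

    def __init__(self, name):
        self.name = name
        self.chunks = {}  # chunk index -> bytes

    def seed(self, data, chunk_size=4):
        """Split a file into chunks and host all of them."""
        self.chunks = {idx: data[off:off + chunk_size]
                       for idx, off in enumerate(range(0, len(data), chunk_size))}

    def download_from(self, swarm, total_chunks):
        """Fetch each missing chunk from any peer that has it."""
        for idx in range(total_chunks):
            if idx in self.chunks:
                continue
            for peer in swarm:
                if peer is not self and idx in peer.chunks:
                    self.chunks[idx] = peer.chunks[idx]  # now we host it too
                    break

    def assemble(self):
        return b"".join(self.chunks[idx] for idx in sorted(self.chunks))


# One seeder and two downloaders; each downloader becomes a host in turn.
alice, bob, carol = Peer("alice"), Peer("bob"), Peer("carol")
alice.seed(b"decentralise all the things")
swarm = [alice, bob, carol]
total = len(alice.chunks)
bob.download_from(swarm, total)    # bob pulls every chunk from alice
carol.download_from(swarm, total)  # carol can pull from alice or bob
assert carol.assemble() == b"decentralise all the things"
```

The key property sits in download_from: as soon as a peer has fetched a chunk, it becomes a source for that chunk, so availability grows as the file spreads and no single machine is indispensable.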

The Decentralised Web (DWeb) takes the idea of peer-to-peer connectivity and applies it to websites and web applications too. There are two key ways in which the DWeb differs from the traditional web. First, as with other peer-to-peer services, it typically requires all computers to provide services as well as access them. Second, to navigate this distributed network, it uses a different address system to the traditional Web: whereas we currently find information by specifying a particular web address or URL, the decentralised web stores information based on its content – i.e. it is found by what it is rather than where it is.

As an analogy, finding information on the traditional web can be likened to directing someone to a book by saying that it is ‘in the British Library, in a specific reading room, third bookcase, top shelf, first from the left’; whereas with the distributed web you would tell them how to find it by giving them the title and author, so they can find it in any library or bookshop or even borrow it from a friend. This means that information can be stored in multiple places at once and passed around from computer to computer rather than relying on a single server, which makes the system more resilient.
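The content-addressing idea can be sketched in a few lines of Python. This is an illustrative toy, not the scheme of any particular DWeb protocol (real systems, such as IPFS, use more elaborate addressing): the ‘address’ of a piece of content is simply a cryptographic hash of the content itself, so any node holding the bytes can serve them, and the fetcher can check that what arrived is what was asked for.

```python
import hashlib

def address_of(content: bytes) -> str:
    """Derive the address from the content itself (here: SHA-256)."""
    return hashlib.sha256(content).hexdigest()

# Several independent 'nodes', each holding content keyed by its hash.
node_a, node_b = {}, {}

def store(node: dict, content: bytes) -> str:
    addr = address_of(content)
    node[addr] = content
    return addr

def fetch(nodes: list, addr: str) -> bytes:
    """Ask every node for the address; verify whatever comes back."""
    for node in nodes:
        if addr in node:
            content = node[addr]
            assert address_of(content) == addr, "content was tampered with"
            return content
    raise KeyError(f"no node holds {addr}")

page = b"<h1>Hello, DWeb</h1>"
addr = store(node_a, page)
store(node_b, page)                   # same bytes, same address everywhere
assert fetch([node_b], addr) == page  # any holder can serve it
```

Because the address is derived from the bytes rather than from a server name, identical content stored on two machines has one address, and copying or moving it never breaks the link.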

Distributed computing is the use of multiple connected computers which work together to solve a particular problem or run a program. It is a loose concept which has been in use since the 1970s. The latest incarnations use networks like Ethereum to provide a decentralised ‘virtual machine’.
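As a toy illustration of the general idea – though not of Ethereum’s virtual machine, which replicates the same computation on every node rather than splitting it up – the sketch below uses Python worker processes on a single machine as stand-ins for networked computers, each solving an independent piece of one problem.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """The work done independently by one 'node'."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    # Split one problem (summing 0..9,999,999) into four independent pieces.
    pieces = [(i * 2_500_000, (i + 1) * 2_500_000) for i in range(4)]
    with Pool(processes=4) as pool:
        results = pool.map(partial_sum, pieces)  # pieces run in parallel
    print(sum(results))  # same answer as sum(range(10_000_000))
```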

Distributed applications are computer applications which run on distributed computing systems. Such programs are being developed for many of the common services found on the traditional web, from web browsing and file storage to video streaming and social media.

Authors

Jonathan Bone

Mission Manager, healthy life mission

Jonathan works within Nesta Cymru (Wales), collaborating across the public, private and non-profit sectors to deliver innovative solutions that tackle obesity and loneliness in Wales.

Christopher Haley

Head of New Technology & Startup Research

Chris led Nesta’s research into how startups and new technologies can drive economic growth, and what this means for businesses, intermediaries and government.
