How should we change social media?

If we could start again from scratch, what sort of web would we create?

This is the question at the core of a new Nesta project called ‘Engineroom’ - part of the European Commission’s major Next Generation Internet (NGI) programme. The overall aim of NGI is to create a more ‘human-centric’ web, which better reflects European values, rather than, say, the commercial interests of US technology firms, or the Chinese vision of 'state sovereignty'. 

Over the next year or so, NGI will aim to hold a number of public conversations about what kind of digital spaces we want and what needs to change in order to create these. In this blog, I want to start that conversation with a focus on one particular aspect of the web: social media.

During the optimism of the Arab Spring, many saw social media as tools for liberation. However, a few years later, questions about whether social media may in fact threaten democracy, rather than being tools that promote it, are now becoming louder and more mainstream.

For example, there is evidence that social media are being used to radicalise jihadis and spread foreign propaganda. There is also a credible argument that, by feeding us more of what we like (read: fewer disagreeable opinions), platforms’ algorithms are reinforcing our biases, polarising debates and deepening social divisions.

Social media companies, however, face a dilemma. Many governments and individuals want restrictions on extremist material and 'fake news’. Yet many others want freedom of speech and object to censorship. In between, those who argue that free speech has limits can rarely agree on where we should draw the line. Whatever position the firms take, it is inevitably politicised.

Twitter’s testimony to the US Senate Committee at the end of last month illustrates this dilemma. The firm admitted to suppressing tweets that were potentially damaging to Hillary Clinton – including around half of all those using the #DNCLeak hashtag (relating to the 2016 Democratic National Committee email leak). However, it argued that this suppression was necessary because many thousands of Russian-linked accounts were potentially trying to influence the US election.

Twitter’s actions will likely satisfy few: the political right will see the testimony as confirmation of the firm’s institutional bias, and argue that Twitter suppressed the spread of information which was of legitimate public interest, however it got into the public domain; the political left will see the episode as confirmation of Russian interference and the need for further intervention in order to prevent subversion of democracy.

The only thing that all parties will agree on is that social media firms now have huge power. At a time when the majority of us obtain at least some of our news via social media, and online news content has already overtaken television as the primary source for many, the potential for manipulation is obvious and concerning.

The problems are exacerbated because the editing is murky and the context unclear. Most newspapers nail their political allegiance to the mast, so when we read one, we are usually aware of its slant and able (if often unwilling) to interpret stories accordingly. However, social media users are less aware that their feeds are ‘curated’ for them, and have fewer yardsticks by which to judge the content. Those who are aware that their feeds are manipulated have no visibility of the algorithms which do this, and so may impute the worst of motives.

Social media firms are already experimenting with better methods of flagging or rating content, to help users decide for themselves whether or not content originates from a reliable source. However, the familiar weaknesses exist: flags are subject to easy manipulation and the mass ‘review bombing’ which afflicts other platforms.

A better way forward may be to give users a clearer view of what they are not seeing – or, perhaps better, what other people are seeing that they are not – as well as explanations of why this is the case. (For instance, Google previously allowed users to see the profile of their inferred interests that it algorithmically constructed from their viewing habits; similar pictures of what social media firms think we think may help.)
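To make this concrete, here is a toy sketch of what such a transparency feature might involve at its simplest: tallying topic labels from a user's viewing history into an inferred-interest profile that the platform could then show back to the user. All names and data here are hypothetical, and real systems infer interests in far more elaborate ways.

```python
from collections import Counter

def inferred_interests(viewing_history):
    """Tally topic labels from a viewing history into a simple
    inferred-interest profile (topic -> share of views)."""
    counts = Counter(topic for _, topic in viewing_history)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

# Hypothetical viewing history: (item id, topic label) pairs.
history = [
    ("post1", "politics"), ("post2", "sport"),
    ("post3", "politics"), ("post4", "tech"),
]
profile = inferred_interests(history)
# e.g. {'politics': 0.5, 'sport': 0.25, 'tech': 0.25}
```

Even a crude profile like this, surfaced to users, would make the 'curation' of their feeds visible rather than invisible.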

Fundamentally, however, many of the problems exist within these companies because they are companies.

Social media firms are acutely aware that they are competing ever more intensely for our attention, and so want their sites to be addictive. Their algorithms are therefore designed and refined to show us what we want to see - hence fewer disagreeable opinions.

In addition, the reliance on advertising-based models has acted against openness. Sticking with Twitter (not because it’s especially bad, just topical), many third parties originally built services on top of the platform. However, over time, financial pressures (e.g. not wishing to allow adverts to be filtered out) forced the firm to restrict its API, with consequent impacts on third parties and consumer choice.

Moreover, many of these platforms have another weakness: centralised control means centralised points of failure. This was illustrated by another event that came a couple of days after Twitter's testimony: a ‘rogue employee’ deleting Trump’s Twitter account. Inconveniences to tweet-happy presidents aside, centralised control means that the services are fundamentally more vulnerable to things like denial-of-service attacks.

By contrast, systems like email and RSS are not businesses in their own right, but are instead common protocols upon which multiple different yet interoperable services are built. Thus whilst individual service providers are often subject to attacks, it is almost impossible to shut down the whole system.
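RSS illustrates the point well: because the feed format is a shared standard, a single parser works against any provider's feed. The sketch below parses a minimal (invented) RSS 2.0 document using only Python's standard library; no provider-specific code is needed.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed, as any provider might publish it.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example provider</title>
    <item><title>First post</title><link>https://example.org/1</link></item>
    <item><title>Second post</title><link>https://example.org/2</link></item>
  </channel>
</rss>"""

def item_titles(feed_xml):
    """Extract item titles from an RSS feed. Because RSS is a common
    format, the same parser works for every provider's feed."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(FEED))  # ['First post', 'Second post']
```

Shutting down any one publisher leaves every other feed, and every reader, untouched.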

A further consequence of a common protocol is far greater consumer choice about which firms to entrust with our data, and about the trade-offs we each make.

For example, if you don’t mind companies seeing your data and selling ads to you, you may be happy to use free services like Gmail or Yahoo; if you want more privacy, you can pay a company like Hushmail or Protonmail; if you want additional features like tracking of mass campaigns, you can use services like Amazon SES. But whichever you choose, common standards ensure interoperability.
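The interoperability rests on shared standards: an email message has the same structure (RFC 5322) whichever provider eventually delivers it. As a minimal sketch, using Python's standard library and hypothetical addresses:

```python
from email.message import EmailMessage

# The message format is provider-independent: whether it is handed to
# Gmail, Protonmail or Amazon SES, the message itself looks the same.
# All addresses here are hypothetical.
msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Interoperability"
msg.set_content("Common standards let any provider deliver this.")

# Delivery would use the same standard SMTP handshake regardless of
# provider, e.g.:
#   import smtplib
#   with smtplib.SMTP("smtp.example.org") as s:
#       s.send_message(msg)
print(msg["Subject"])
```

The provider competes on price, privacy and features; the message format, and hence the user's ability to reach anyone, stays constant.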

Other utilities such as telephony provide a further example: common standards and interoperability mean that, regardless of individual providers, anyone can call anyone else, leaving providers to compete on cost, coverage, customer service, or whatever they choose. This is clearly good for consumers.

So what would happen if Twitter were a protocol, not a company?

Widespread adoption might make it slightly harder for governments to shut down extremists (as they would have to monitor and liaise with multiple providers), but at the same time, those extremists would likely have a more limited reach (because, depending on other providers’ policies, their content may not be automatically propagated). Certainly, by reducing single failure points, it would add resilience to the system.

Providers who built systems using the protocol would also be free to devise their own business models. As with email, multiple models would arise: some might choose to be ad-supported (like Twitter); some subscription-based; some might be sponsored by media firms, universities or other organisations; yet others may develop models that we haven't yet imagined.

An open protocol would also resolve some of the issues that social media firms face: rather than having to decide on one editorial policy to suit all users, for instance, different providers could offer different options, or encourage third-parties to find creative solutions.

For instance, it might be possible to filter out trolls whilst still encouraging healthy debate by choosing to see 'disagreeable’ posts only from people who themselves regularly expose themselves to content with which they disagree. Other firms would undoubtedly develop a variety of other verification systems, and smarter ways of suggesting relevant content.
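The filtering rule described above can be sketched in a few lines. This is purely illustrative: the 'openness' score here is an invented proxy (the fraction of an author's reading that disagrees with their own stance), and all data is hypothetical.

```python
def openness(author):
    """Fraction of an author's reading that disagrees with their own
    stance -- a crude proxy for 'regularly exposes themselves to
    content with which they disagree'."""
    reads = author["reads"]
    opposing = sum(1 for stance in reads if stance != author["stance"])
    return opposing / len(reads) if reads else 0.0

def feed_for(user, posts, threshold=0.3):
    """Keep agreeable posts, plus disagreeable ones whose authors are
    sufficiently open-minded."""
    return [
        p for p in posts
        if p["stance"] == user["stance"] or openness(p["author"]) >= threshold
    ]

# Hypothetical authors and posts; stances are just labels.
open_minded = {"stance": "A", "reads": ["A", "B", "B", "A"]}  # openness 0.5
partisan = {"stance": "A", "reads": ["A", "A", "A", "A"]}     # openness 0.0
user = {"stance": "B"}
posts = [
    {"stance": "A", "author": open_minded},  # kept: author is open-minded
    {"stance": "A", "author": partisan},     # filtered out
    {"stance": "B", "author": partisan},     # kept: user agrees anyway
]
filtered = feed_for(user, posts)
```

Under an open protocol, a rule like this would be one provider's offering among many, not a single policy imposed on everyone.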

Importantly, a move to open protocols would shift the locus of debates over censorship and free speech back to where it belongs: in the public domain, for society as a whole to resolve, rather than the committees of social media firms.

This question of Twitter-as-a-protocol is not entirely new. In fact, there is a movement to create exactly such open micro-blogging protocols. One leading contender is OStatus (the successor to OpenMicroBlogging), which underpins open-source microblogging platforms such as GNU social (formerly Laconica) and Mastodon.

It is too early to tell whether these efforts will succeed - certainly, it is difficult to see the current social media firms supporting actions which could diminish their existing oligopoly.

However, if we are to create a web which is based on European values as a whole, not just the interests and capabilities of (predominantly American) tech firms, then we need a wide-ranging conversation about how to make the web more open and transparent, including the role that open standards and protocols should play in this.

Please join that conversation, and tell us what you think should happen to help make the 'human-centred internet’ a reality.

Photo by freestocks.org on Unsplash

Author

Christopher Haley

Head of New Technology & Startup Research

Chris leads Nesta's research interests into how startups and new technologies can drive economic growth, and what this means for businesses, intermediaries and for the government.
