Aral Balkan and Laura Kalbag: We're not sleepwalking into a dystopian future, we're there today

Aral Balkan is a cyborg rights activist, designer, and developer. Laura Kalbag is a designer from the UK, and the author of 'Accessibility For Everyone' from A Book Apart. Together, they make up the team behind Ind.ie, a two-person-and-one-husky not-for-profit working for social justice in the digital age.

Aral and Laura

Credit: Kamila Schneltser

For Laura: The internet has a real diversity problem and because tech teams developing tools tend to be so homogeneous, issues being faced by less-represented groups often don't get taken into account. What are possible solutions that might help solve this problem (for example in the accessibility space)?

The obvious way to combat homogeneity in the groups building our technology is to make those groups more diverse and inclusive. The largely white, cis-gendered, straight, non-disabled, and wealthy groups of men building technology today build it to suit themselves. Suiting themselves ranges from satisfying their own needs to exploiting the needs of others. If we have diverse teams, we are more likely to build technology that satisfies the needs of a diverse audience. We are also more likely to have people on our team that understand how our technology might pose a barrier or threat to somebody from a similar background, and put a stop to it. For example, including a blind person on your team will likely ensure that you build inclusive technology that is accessible to people who use assistive technology.

However, diversity does not guarantee ethical technology. It's important that people are not hired onto teams merely to create the appearance of diversity as a public relations exercise. People from marginalised backgrounds must have real power to enact or influence decision-making. A person hired to write marketing copy is unlikely to have an opportunity to change the organisation's business model.

A key part of embracing inclusion and diversity is understanding that there are many diverse traits that comprise us as individuals. We mustn’t be naïve and expect one person to be reflective of an entire group. One person with cerebral palsy is not going to know how to build effective technology for a deaf person just because they both identify as disabled.

And we must be realistic about the fact that people from marginalised groups will not always behave in the interests of those groups. People in inclusive teams are, of course, capable of building unethical technology. However, unethical technology is by its nature not inclusive. This is because marginalised groups are more vulnerable to unethical exploitation as they have less power, and fewer resources, with which to defend themselves.

Finally, we must also stop expecting representatives of marginalised groups to be the only ones promoting inclusion and diversity, or only allowing people from marginalised groups to speak about "marginalised people's issues". Teams must be genuinely inclusive, where people from marginalised groups are listened to, and have their opinions and ideas treated with respect in all areas. This means that people with more privilege must learn to be quieter, and work hard to amplify the voices of those with less privilege without expecting reward.

For Aral: The title of your talk is "Building the People's Internet" - how can we empower citizens to take charge of building the internet they want to see?

We can empower citizens to take charge of building the internet they want to see by funding them and supporting them instead of funding and supporting surveillance capitalism like we’re doing in the EU today. It’s that simple. Right now, we’re funding the wrong people.

Both:
Can you tell us a little bit more about what Ind.ie does?

Laura Kalbag

At Ind.ie, we work towards two interrelated goals: to reduce the harm caused by surveillance capitalists like Google and Facebook, and to create ethical alternatives.

As part of our efforts to reduce harm, we work to raise awareness and advocate for effective legal regulations. We also encourage developers to take responsibility. We can clean up the web, one site at a time, by removing surveillance devices such as Google Analytics, Facebook Like buttons, and YouTube videos from our sites. And, finally, we help people protect themselves on the web with Better Blocker, our web tracker blocker.

Most of our time today is spent working on Indienet, our ethical alternative. On the Indienet, you own and control your own website and your own domain. You can talk to other people (and the world) to share your thoughts, photos, videos, and whatever else while deciding what you want to make public and what you want to keep to yourself. In other words, with complete privacy, control, and ownership.

In more technical terms, Indienet is a free and open, decentralised/federated, personal website platform.

Is the internet fundamentally broken? If it is, where did we go wrong?

The internet is not fundamentally broken, but the web is. The web was created with the best of intentions but its design did not match its philosophy. Tim Berners-Lee wanted a decentralised web but he designed a centralised web. The centralised aspect of the web is inherent in its client/server architecture where many clients connect to a relatively small number of servers. The servers are the centres.

At the beginning of the web, there were many centres in relation to the size of the network (as it was mainly non-commercial and academic institutions). It all went wrong when capitalism entered the picture, with venture capital in tow. When you put a client/server system into an enzymatic pool of capitalism and venture capital, it incentivises those centres to grow. And so they did, and became our new mainframes: the Googles and the Facebooks of the web. We can think of the web as the Mainframe 2.0 era. The only difference between the Mainframe 1.0 and the Mainframe 2.0 era is that our mainframes today are global in scope, and their centralised nature and surveillance-based business models threaten the very foundations of our human rights and democracy.

A lot of talk about the future of the internet tends to be very negative/focus on dystopian outcomes. Should we focus more on imagining more positive visions, and what could those futures look like?

Surveillance capitalism – the social system we inhabit – is based on a feedback loop between surveillance (the accumulation of information) and capitalism (the accumulation of wealth). It is the present reality of the internet, not its future. We are not sleepwalking into a dystopian future, we are there today. It is essential that we are able to accurately describe this status quo and agree that it is problematic. Otherwise, what would justify spending time and money in researching, investing in, and building ethical alternatives?

In order to make any meaningful progress, we must at least agree that the Googles and the Facebooks of the world are not forces for good. On the contrary, they are threats to our human rights and democracy. We must at least agree that they are not our partners, sponsors, and friends, but our adversaries. Even today, we are not at this point. Even today, institutions that purport to advocate for human rights and democracy feature Google and Facebook as partners and sponsors. The very groups that should be protecting our rights and advocating for investment in ethical alternatives are instead whitewashing Silicon Valley surveillance capitalists under the banner of sponsorships, public-private partnerships, and “multistakeholderism”. They do for Big Data what the doctors in the cigarette ads did for Big Tobacco. While a healthcare conference wouldn’t dream of being sponsored by Philip Morris, conferences that purport to be about human rights and privacy don’t think twice about being sponsored by Google and Facebook.

If we can’t even agree on the basic point that these corporations are a threat, what hope is there?

Despite this, we must also focus on positive visions. First, we must understand that effective progressive change means systemic change. You cannot fix surveillance capitalism simply by fixing surveillance, you must fix capitalism also. Such solutions will not come from a naïve expectation that surveillance capitalists will reform themselves, and act contrary to their business models. Nor will the solutions come from existing financial models, like venture capital based “startups.” Startups are disposable businesses designed to either fail fast or grow exponentially in a cancer-like fashion, until they become billion-dollar unicorns. They are then either sold to the public via an IPO, or to a bigger surveillance capitalist like Google or Facebook to become a cog in their surveillance machine.

True, meaningful change will come when we realise that the market will not fix the problem. If we want technological infrastructure to benefit the commons, we must fund it from the commons. This is not to say that infrastructure must be built, owned and controlled by governments, or that we should nationalise Google or Facebook, as the only thing worse than a corporate surveillance machine is a government surveillance machine.

Instead, we should take what works from Silicon Valley (small teams, working in an agile manner and iterating on solutions while competing with each other), and remove the toxic elements (the equity capital and "exits", and the surveillance-based business model). Instead of sponsoring multi-million Euro projects that have no hope of competing with Silicon Valley, we should fund many entrepreneurial organisations whose criterion of success isn't the exit, but the creation of sustainable businesses in the common interest. Let's call them "stayups" to contrast them with Silicon Valley's disposable and surveillance-based "startups".

A lot of the really interesting initiatives trying to build more ethical tech/solutions have excellent ideas but find it hard to go mainstream. Finding new sustainable business models and funding remains a large issue, but what else do you think is necessary to get the general public to switch to non-surveillance capitalist tools?

Aral Balkan

The key to making ethical technology that can be used in the mainstream is to build useful and convenient experiences that happen to also respect our human rights. As the designers and builders of ethical technology, we must return to first principles and consider what a person is trying to achieve with our technology. Is it sharing a photo with a friend? Is it publicly broadcasting an interesting thought? We must build technology that enables people to accomplish these goals, while architecting the technology to respect their privacy and security.

We must also consider our definition of success within the mainstream. Today, a startup surviving on surveillance capitalism uses the network effect to expand to huge audiences in order to profit from their personal information. The more signups startups have, the more information they can derive from relationships between those individuals, and the group as a whole. But for those of us building ethical technology, success does not need to be at such scale. When a person wants to share a photo with somebody they know, how many people do they need in their network? We don’t need to be able to send that photo to everybody in the world, we usually just communicate within our own small networks; they could be friends and family, or local and interest-based communities.

As Europeans we increasingly find ourselves caught in the cross-fire between the dominant Chinese (government-led) and American (big tech-led) internet narratives. Could Europe lead the way in developing a third, citizen-led alternative? What would it need to do?

Europe can lead the way in developing a third solution. However, to do so, the European Commission and EU governments must first stop worshipping at the altar of Silicon Valley. Instead of appointing ambassadors to Silicon Valley, like Denmark did, we must start effectively regulating Silicon Valley. We must learn to recognise Silicon Valley lobbying even when it shrouds itself under a futuristic-sounding educational institution like Singularity University (which isn’t a university).

We must also start to fund ethical, decentralised, free and open alternatives from the commons for the common good. We must ensure that these organisations have social missions ingrained in their very existence that cannot be circumvented. We must make sure that these organisations cannot be bought by surveillance capitalists. Today, we are funding startups and acting as an unofficial and unpaid research and development arm for Silicon Valley. We fund startups and, if they’re successful, they get bought by the Googles and Facebooks. If they’re unsuccessful, the EU taxpayer foots the bill. It’s time for the European Commission and the EU to stop being useful idiots for Silicon Valley, and for us to fund and support our own ethical technological infrastructure.

Given how much information Silicon Valley, and thus the US government, has on EU citizens, this is not just a matter of our human rights and the future of our democracy, but also the national security of European countries and that of the EU. This isn’t to say that we must wall ourselves off or create a European silo. On the contrary, we must ensure that the technological infrastructure we fund and build is free and open, decentralised, and interoperable so that anyone, anywhere in the world can use it.

Laura and Aral will be speaking at FutureFest 2018 on 'Building the People's Internet' and 'Who is the Internet for?' Join them on Saturday 7 July at Tobacco Dock.

Author

Lily Fish

Event Marketing Manager

Lily was the Event Marketing Manager in the Communications team. She worked on the marketing for FutureFest and explored how we can grow the event in 2018.