My recent commentary argues that public institutions like the BBC must actively build democratic digital alternatives to marketised tech platforms. This blog applies that exact philosophy to the urgent debate around children's technology use - making the case for public intervention that builds safe, enriching digital spaces for young people.
The UK government's consultation on children's use of technology arrives at a moment of genuine public alarm. Parents are frightened. Teachers are exhausted. The scientific evidence linking social media use to depression, anxiety, body dysmorphia and loneliness grows stronger by the month. When Elon Musk calls a European prime minister a "tyrant" for proposing age restrictions, or when bereaved families describe "the most unimaginable tragedy" resulting from what their children experienced online, the case for drastic action feels overwhelming. No wonder Australia has banned social media for under-16s. No wonder France, Spain, Greece, the Netherlands and Denmark are rushing to follow.
The instinct to protect children by keeping them off these platforms entirely is not just understandable - it seems urgent. Many of us have experienced the milder harms directly: the rage clicks, the doom-scrolling, the corrosive polarisation and now the unsettling flattery of AI companions. For children, the stakes are much higher. The platforms were designed to be addictive. Their business model depends on it. Why would we allow our children anywhere near them?
However, we must reckon with two uncomfortable truths. First, social media offers genuine benefits that many children - including the most vulnerable - cannot easily find elsewhere. Second, bans carry unintended consequences that are also harmful.
Consider the gay teenager in a deeply conservative community, the autistic child who struggles with face-to-face interaction, the young person with a rare medical condition seeking others who understand. For these children, online communities provide something irreplaceable: connection, validation and the discovery that they are not alone. And these are the extreme cases - at its best, social media is a life-enhancing aspect of humanity, even for children, because it can be a medium for that most precious of things: human connection. A bored child who finds inspiration and social validation by participating in a game can have their life transformed for the better. And children who learn through good online environments how to be good digital citizens will be better equipped for their future work and leisure. A blanket ban doesn't distinguish between a child being groomed on Instagram and a child finding life-enhancing peer support on a moderated forum. It treats all online social connection as equally toxic.
Then there is the cliff-edge problem. A ban until age 16 creates a cohort of young people who reach that birthday with no experience navigating online social spaces - who are then released into the unregulated wild of adult platforms. They will lack the digital literacy, the scepticism, the resilience that comes from supervised practice. The most addictive products and predatory actors will be waiting for them. Children need opportunities to build resilience in cyberspace and IRL and to develop social skills in a protected environment. Denying them this until 16, then expecting them to cope alone, is a recipe for harm.
Moreover, a ban is likely to be imperfect in execution. Past attempts to restrict minors' access to pornography were to some degree circumvented with VPNs. Young users in Australia are already switching to platforms not yet covered by the ban or fooling age-verification systems. Even if bans only partially work, some argue, they are still worth doing - like seatbelt mandates that save lives even when not universally followed. But unlike seatbelts, which have no significant downside, a partially effective social media ban may simply push some children toward less regulated spaces while depriving them of beneficial connections.
The fundamental question is not whether we should act - we must - but whether prohibition is the only, or even the best, response.
Before we accept that children and social media are fundamentally incompatible, we should analyse the history of PopJam, a platform that demonstrated something remarkable: that safe, engaging social media for children is technically and operationally achievable.
PopJam was, in essence, a kid-appropriate Tumblr. At its peak, it reached 10 million under-12 users and had £4 million in annual revenue. It was fully compliant with GDPR and the US Children's Online Privacy Protection Act (COPPA), completely anonymous, and operated as a ‘walled garden’ with no external links. Children could create profiles without adult sign-off because the privacy protections were so robust. Every user-sourced image post was human-moderated before going live, maintaining 99.999% quality standards. There was no private chat functionality, drastically reducing grooming risks.
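To make that architecture concrete, here is a minimal sketch - my illustration, not PopJam's actual code, which was never published - of the two design decisions doing most of the safety work: a pre-moderation gate, so nothing becomes visible before a human approves it, and the deliberate absence of any private messaging pathway.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"      # invisible to everyone except moderators
    APPROVED = "approved"    # visible inside the walled garden
    REJECTED = "rejected"    # never shown

@dataclass
class Post:
    author_handle: str       # anonymous handle, no real-world identity
    image_ref: str
    status: Status = Status.PENDING

class PreModerationQueue:
    """Nothing goes live until a human has looked at it."""

    def __init__(self) -> None:
        self._pending: list[Post] = []

    def submit(self, post: Post) -> None:
        # Submitting never publishes; it only queues the post for review.
        self._pending.append(post)

    def review_all(self, human_approves) -> None:
        # A human moderator decides every pending post, one by one.
        for post in self._pending:
            post.status = Status.APPROVED if human_approves(post) else Status.REJECTED
        self._pending.clear()

    def visible_feed(self, posts: list[Post]) -> list[Post]:
        # Only approved posts ever reach other children's feeds.
        return [p for p in posts if p.status is Status.APPROVED]

    # Deliberately absent: any send_private_message() method. Removing the
    # feature removes the grooming channel it would otherwise create.
```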
The platform succeeded by giving younger children (7- to 12-year-olds) content that "looked cool and older" in "the safest, most diluted way possible", according to Craig Donaghy, the platform's Head of Child Guarding. It offered daily creative challenges, awards and badges, and curated channels for hobbies - all appropriately moderated. The community values emphasised creativity and kindness. Organic friendship groups formed, often providing space for children who weren't socially confident in real life to thrive online. Major brands including the BBC, LEGO and Disney maintained channels, with the BBC uniquely self-publishing content and devoting significant resources to community management.
PopJam proved that with the right design and moderation, social media could be a net positive for children. It filled what Donaghy describes as "a huge hole" in safe online spaces where children can be social in age-appropriate ways, build necessary skills and experience community without commercial exploitation.
If PopJam worked so well, why doesn't it exist today?
The answer is brutally simple: human moderation was its biggest challenge, according to Scarlett Cayford, PopJam’s Business Lead. It was the largest cost, and it couldn't scale the way fixed-cost-only platforms scale: growing the platform meant hiring moderators in proportion to users, so margins did not improve with scale in the way platform shareholders usually expect. After acquisition by Epic Games, PopJam couldn't meet expectations for the high-margin profitability of games. There was no clear path to those sorts of margins at scale, so it was shut down.
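The arithmetic is easy to illustrate. In the stylised sketch below, revenue per user is set at £0.40 a year (echoing PopJam's reported £4 million on 10 million users), while the fixed costs and the per-user moderation cost are invented numbers chosen only to show the shape of the problem: a fixed-cost platform's margin climbs towards 100% as it grows, while a human-moderated platform's margin flatlines.

```python
# Stylised, invented numbers: the point is the shape, not the values.
FIXED_COSTS = 2_000_000          # £/year: engineering, hosting, admin
REVENUE_PER_USER = 0.40          # £/year per active user
MODERATION_PER_USER = 0.30       # £/year per user when humans pre-moderate

def margin(users: int, moderated: bool) -> float:
    revenue = users * REVENUE_PER_USER
    costs = FIXED_COSTS + (users * MODERATION_PER_USER if moderated else 0)
    return (revenue - costs) / revenue

for users in (10_000_000, 100_000_000, 1_000_000_000):
    print(f"{users:>13,} users: "
          f"fixed-cost margin {margin(users, False):6.1%}, "
          f"moderated margin {margin(users, True):6.1%}")
```

However large the platform gets, the moderated margin converges on (0.40 - 0.30)/0.40 = 25% - potentially sustainable, but nowhere near the near-100% margins shareholders expect once fixed costs are amortised.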
This is the critical insight: PopJam failed not because the concept was flawed, but because shareholder incentives in social media are fundamentally incompatible with child safety. The intensive human moderation required for true protection cannot be sustained with the expected margins through advertising or typical commercial models. The ‘logic of enshittification’ that Cory Doctorow describes - where platforms attract users with quality, then degrade that quality to extract profit - is not a bug of commercial social media. It is the feature. Shareholders will eventually demand that network effects be exploited for maximum return.
PopJam demonstrates that the barrier to safe social media for children is not technical but commercial: the choice of single-minded profit maximisation over modest sustainability. Moreover, in an age in which AI greatly reduces moderation costs, ‘modest sustainability’ seems entirely achievable. PopJam had a path to commercial sustainability, but did not fit the very high-margin, games-focused strategy of its eventual acquirer. This makes it an ideal case study for why public support - whether through a public broadcaster or similar public institutions - is necessary to provide good social media for children.
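Why does AI change the calculation? Because moderation need not be all-human or all-automated. A sketch of the now-common hybrid pattern (illustrative thresholds, not any specific product's): a classifier auto-approves clearly safe posts and auto-rejects clearly unsafe ones, escalating only the uncertain middle band, so paid moderator time scales with the ambiguous fraction of traffic rather than with total volume.

```python
# Hybrid triage: humans review only what the model is unsure about.
# Thresholds are illustrative; a real system would tune them against audits.
SAFE_THRESHOLD = 0.95    # auto-approve above this confidence
UNSAFE_THRESHOLD = 0.05  # auto-reject below this confidence

def triage(posts, classifier, human_review):
    """Route each post by the model's confidence that it is safe."""
    approved, rejected, escalated = [], [], []
    for post in posts:
        p_safe = classifier(post)      # model's probability the post is safe
        if p_safe >= SAFE_THRESHOLD:
            approved.append(post)
        elif p_safe <= UNSAFE_THRESHOLD:
            rejected.append(post)
        else:
            escalated.append(post)     # ambiguous: a human decides
    for post in escalated:
        (approved if human_review(post) else rejected).append(post)
    return approved, rejected, len(escalated)
```

If the classifier is confident about, say, 90% of posts, the human cost per user falls by an order of magnitude - which is what brings ‘modest sustainability’ within reach in a way it was not in PopJam's era.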
The market failures in platform markets are profound. Social media platforms have high fixed costs, making scale a commercial advantage and oligopoly inevitable. Network externalities - where users go where other users are - mean only a small number of products get tested in the laissez-faire market. Once established, platforms become difficult to exit, allowing quality degradation in favour of shareholder returns.
These dynamics explain why the commercial market will not spontaneously produce good social media for children. The profit-maximising algorithm will always tend toward addictiveness, toward exploiting vulnerabilities, toward keeping eyes on screens regardless of cost to wellbeing. Regulatory correction - fining platforms for excess addiction, for example - faces the impossible task of specifying in advance every harmful feature in an environment of constant innovation.
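The point can be made precise: the difference between a harmful feed and a healthy one is ultimately the objective the ranking algorithm is asked to maximise. The sketch below is hypothetical - the feature names and weights are mine, not any real platform's formula - but it shows how small the technical distance between the two regimes is.

```python
# Two hypothetical ranking objectives for the same feed. The weights and
# feature names are illustrative, not any real platform's formula.

def engagement_score(item) -> float:
    # Commercial objective: predicted time-on-screen, full stop.
    # Outrage and compulsive loops score well under this objective.
    return item["predicted_watch_seconds"]

def public_interest_score(item) -> float:
    # Alternative objective: engagement still counts, but it is traded
    # off against addictiveness and credited for creative participation.
    return (item["predicted_watch_seconds"]
            - 2.0 * item["predicted_compulsion"]     # penalise doom-scroll loops
            + 1.5 * item["prompts_user_creation"])   # reward making over watching

def rank(feed, score):
    return sorted(feed, key=score, reverse=True)
```

No regulator can enumerate every harmful feature in advance; a publicly governed platform can simply be constituted to optimise the second objective rather than the first.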
What we need is not correction of commercial platforms but construction of public-good alternatives. This is not unprecedented. In the 1920s and 1930s, US radio, funded by advertising, gave massive audiences to Father Coughlin's pro-Nazi antisemitism and other extremist voices. Meanwhile, the BBC's public service model, established by Royal Charter in 1926, was designed to ensure broadcasting would "never be used to cater for groups of listeners, however large, who press for trite and commonplace performances." John Reith's mission was to "bring into the greatest possible number of homes all that is best in every department of human knowledge, endeavour and achievement."
The contrast was not technological but philosophical: different conceptions of the public sphere produced radically different outcomes. The question for our era is whether we will allow the digital public sphere to be shaped entirely by commercial imperatives, or whether we will assert democratic control over the spaces where our children learn to be citizens.
The government's consultation rightly seeks views on age limits, verification methods and restrictions on addictive features. These are important. But we should go further and ask: can we build social media that is genuinely good for children, rather than simply less harmful?
I've written a companion paper on Digital Public Sphere Innovation which seeks to answer that question. The UK has the assets to attempt a braver response than leaving it all to a dysfunctional market. The BBC is still capable of capturing national attention and, even in our fragmented information environment, retains significant reach. There is an ecosystem of social-good organisations - including the one to which I belong, Nesta - that are well placed to design and test new approaches. Organisations like the Alan Turing Institute would be strong candidates to support the work on AI and algorithmic design.
Together, these and other institutions could launch a programme of experimentation - creating and testing platforms with different moderation approaches, algorithmic feeds, identity systems, and funding models.
Some experiments might build on PopJam's success: Reddit-style communities with robust moderation tools and cross-community reputation scores; TikTok-style short video platforms with algorithms tuned for creativity rather than addiction; even AI companions with automated oversight to reduce psychological risks. Others might pioneer attention-auction systems where citizens, when choosing to act as consumers, directly monetise their attention rather than surrendering it to intermediaries.
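To show how prototypable these ideas are, here is a toy version of the last one - an entirely hypothetical mechanism design, not a description of any existing system. Advertisers bid for a single slot of a user's attention, the user sets a reserve price below which they see no ad at all, and the slot clears as a second-price auction with the payment going to the user rather than to an intermediary.

```python
# Toy second-price attention auction: the user, not the platform,
# is paid for the ad slot. Entirely illustrative mechanism design.

def run_attention_auction(bids: dict[str, float], reserve: float):
    """bids: advertiser -> amount offered for one attention slot.
    reserve: the minimum the user will accept (their 'price of attention').
    Returns (winner, payment_to_user), or (None, 0.0) if no bid meets the reserve."""
    eligible = {a: b for a, b in bids.items() if b >= reserve}
    if not eligible:
        return None, 0.0                      # the user keeps an ad-free slot
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # Second-price rule: the winner pays the runner-up's bid (or the reserve),
    # which makes truthful bidding the advertisers' best strategy.
    payment = ranked[1][1] if len(ranked) > 1 else reserve
    return winner, payment

winner, paid = run_attention_auction({"brand_a": 0.08, "brand_b": 0.05}, reserve=0.03)
print(winner, paid)   # brand_a 0.05 - five pence to the user, not the platform
```

The second-price rule rewards honest bids, and the reserve makes an ad-free feed the default that advertisers must outbid.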
The goal would not be to create a monolithic ‘BBC Facebook’ but to establish what good social media looks like - and in doing so, shape the market's offerings through impact on demand and expectations, just as the BBC shaped broadcasting in the twentieth century.
The choice before us is not between banning social media and accepting its harms. There is a third path: building better. PopJam proved it can be done. The question is whether we have the collective will to make it sustainable - and the wisdom to recognise that some things are too important to leave to the market alone.
The consultation asks what further action the UK government should take. The answer should include not just restriction, but construction. Not just keeping children off bad platforms, but creating good ones. Not just prohibition and incentive regulation, but provision of what is best.
A detailed proposal for how this might be achieved is set out in the accompanying commentary: How to fix the 'enshittification' of our digital public sphere.