
The Predictable Consequences of the UK’s Online Safety Act Are a Cautionary Tale for the US
The UK has begun implementing the Online Safety Act, a new law regulating online services. Some of the law’s provisions, such as age verification requirements, mirror what U.S. policymakers have called for to protect children from potentially harmful content. But many in the UK are now waking up to the law’s unintended consequences—flaws that should serve as a cautionary tale to U.S. lawmakers before they go too far down a similar path.
First, the Online Safety Act requires online services to implement “highly effective” age verification to prevent children from accessing harmful and age-inappropriate content, and few anticipated how this would affect ordinary adult users. Many adults are unwilling to hand over personal information to access websites, particularly pornographic ones, for fear of compromising their online privacy. In response, some UK users have turned to virtual private networks (VPNs), encrypted services that conceal a user’s location, effectively circumventing the law by appearing to be outside the UK. The day after the Act went into effect, half of the top ten app downloads in the UK were VPNs or identity verification apps. One VPN provider, Proton VPN, claims sign-ups surged by more than 1,400 percent within minutes of the law taking effect.
Second, the Online Safety Act applies to any online service that enables users to post content or interact with others, and many underestimated how many services it would reach. Spotify, for example, now requires UK users to verify their age before viewing music videos or lyrics of songs tagged for users 18 and older. Reddit added age verification for UK users to access discussion boards about hard cider and cigars. Other services have shut down entirely, not just for UK users but for everyone, rather than risk liability for running afoul of the rules. Urban Dead, a web-based multiplayer zombie apocalypse game, shut down over compliance concerns; other now-defunct sites include an online forum for cyclists and another for sustainable living.
Third, the law covers content that is not just illegal but also “harmful,” including pornography and content depicting self-harm, suicide, and eating disorders, as well as abusive or hateful content, and online services unsurprisingly struggle to identify this content accurately. Some content clearly falls into these categories; much of it is harder to classify. Yet online services risk penalties if they fail to remove content that regulators later deem harmful, so they have an incentive to take down more than necessary to stay in compliance with the Act. This over-removal has made it more difficult for minors and adults alike to access helpful content related to mental health, suicide, addiction, eating disorders, sexuality, and more.
In the United States, policymakers at the federal level are considering several proposals, such as the Kids Online Safety Act and the App Store Accountability Act, that would impose similar obligations on online services in an effort to protect children. And even if Congress does not act, now that the Supreme Court has upheld Texas’s online age verification law, more states are likely to pass age verification laws of their own. So what should U.S. policymakers learn from the UK?
First, proposals should balance children’s safety with adults’ privacy. Universal access to secure digital IDs would mitigate many of these problems—individuals could use a digital ID to prove they are adults without sharing any other personal information—but Congress has shown little appetite for this solution so far. Expanding access to secure digital IDs for adults would also make it much easier to build public support for age verification. For example, state legislators could pair state-level digital ID initiatives with online age verification requirements, or Congress could pass a federal online child safety law that takes effect only once states offer digital IDs.
Second, proposals should avoid collateral damage. Narrowly targeting online services that predominantly host harmful content, rather than any service that incidentally hosts some, would minimize the risk of innocuous websites getting caught in the crossfire. Children’s online safety laws will keep facing significant public backlash if their most visible impact is shutting down online forums about pets and sports.
Finally, U.S. policymakers should be cautious about extending restrictions to lawful content. While the United States has stronger protections for free speech than the UK, regulators would still need to be crystal clear about which types of lawful but harmful content they want online platforms to restrict, so that platforms do not limit more speech than intended. Similarly, proposals should provide “cure periods” so that online services acting in good faith can remedy instances of non-compliance rather than face immediate fines. An approach focused on compliance rather than penalties avoids giving platforms an incentive to over-remove content.
Rather than following the UK’s lead on children’s online safety, U.S. policymakers should learn from its mistakes and chart a better path: one that preserves user privacy, limits collateral damage, and removes the incentive for online services to over-remove lawful content.