Protecting Children Online Does Not Require ID Checks for Everyone

November 21, 2023

Making the Internet safe for children is a growing priority for policymakers. While many are coalescing around the idea of age verification requirements for parts of the Internet, this approach faces serious objections, including privacy risks, efficacy problems, and free speech concerns. But more importantly, mandatory age verification simply isn’t necessary to safeguard children online, and if policymakers are going to use legislation to require changes to online platforms, there are better alternatives that would empower parents and children without the risks of ID checks.

Many jurisdictions are pursuing online age verification requirements. In the United States, Arkansas, Louisiana, Ohio, and Utah have passed age verification laws to keep children off social media and porn sites, and other states are considering similar measures. Other countries are moving in the same direction. Notably, the United Kingdom passed the Online Safety Act earlier this year, which will require age verification not only for adult websites but for any website or app that hosts content the UK Parliament deems “harmful” to children. France also approved a law in July that requires social media sites to use age verification and obtain parental consent before allowing children under 15 years old on their platforms. Finally, the EU has proposed legislation to address child sexual abuse material that would require age verification by all private messaging services, such as WhatsApp, as well as all app stores.

The mechanics of age verification differ by law. For example, Utah’s age verification law for adult websites requires a government-issued ID, whereas its law for social media allows additional verification options, such as facial analysis or a Social Security number. While all of these methods are likely to be more effective than the self-declarations found on many websites, underage users have many ways to circumvent these controls. For example, enterprising teens can easily find tools and instructions online to create a fake scanned image of an ID card or use a VPN to evade location-specific age verification requirements entirely.

Many people—adults and children—are concerned about the implications of age verification laws for their privacy and free speech. Some are worried about the privacy risks of sharing their government ID with a social media site, especially one with potential links to foreign governments. Adult users may also feel uncomfortable sharing a copy of their government ID with adult apps and websites out of concern that their personal viewing habits could be tracked and become public, or they may not have a government ID at all and thus be unable to access this material. In addition, requiring Internet users to disclose their IDs to access these sites reduces their anonymity, which courts have recognized as a core element of free speech. While universal access to secure electronic IDs in the United States would mitigate many of these problems—individuals could use these tools to share the fact that they are adults, but nothing else about their identities, thereby preserving anonymity—Congress has shown little appetite for this solution.

The problem today is that most websites and apps assume everyone is an adult, which means children can easily access adult content. Flipping that assumption on its head—treating everyone as underage until proven otherwise—fixes one problem but creates many others. A “third way” would be to continue assuming everyone on the Internet is an adult unless they are marked as a child. Congress could achieve this middle ground by 1) requiring device operating systems to create a “trustworthy child flag” for user accounts that signals to apps and websites that a user is underage and 2) requiring apps and websites that serve age-restricted content to check for this signal and block flagged users from that content.

As an analogy, imagine the Internet is a mall, and some stores in the mall are intended only for adults because they sell items, like kitchen knives and trampolines, that are potentially dangerous to children. Under today’s rules, anyone can go to the mall and shop in any store. At some of the adult-only stores, a guard might ask, “Are you an adult?” and anyone who does not say “yes” is told to leave. Age verification laws would make it so that nobody can enter the adult-only stores without showing an ID. This alternative proposal would mean that anyone could shop at any store unless they are wearing a bracelet that automatically identifies them as underage. Parents would decide whether to require their children to wear these tamperproof bracelets.
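To make the proposed flow concrete, here is a minimal sketch, in TypeScript, of how an age-restricted website or app might consume such a signal. It is purely illustrative: no such API exists today, and the MinorStatusSignal interface and queryMinorStatus function are invented stand-ins for whatever interface an operating system or app store might eventually expose.

```typescript
// Hypothetical sketch only; no operating system or browser exposes this API today.
// It assumes the OS surfaces a single, privacy-preserving signal ("is this account
// flagged as a child?") that an age-restricted site or app checks before serving content.

interface MinorStatusSignal {
  // True only when a parent has marked the device account as belonging to a child.
  isFlaggedAsMinor: boolean;
}

// Stand-in for the OS- or app-store-provided check; the name and shape are invented
// for illustration. This stub simply returns the default ("not flagged").
async function queryMinorStatus(): Promise<MinorStatusSignal> {
  return { isFlaggedAsMinor: false };
}

// Only services that host age-restricted content would call this check.
async function canServeAgeRestrictedContent(): Promise<boolean> {
  const { isFlaggedAsMinor } = await queryMinorStatus();
  // The default remains "adult": only an explicit child flag blocks access,
  // so no ID upload or identity disclosure is required of anyone.
  return !isFlaggedAsMinor;
}

canServeAgeRestrictedContent().then((allowed) => {
  console.log(allowed ? "Show content" : "Show an age-gate notice instead");
});
```

The design point the sketch tries to capture is that the default answer stays “adult,” so general-audience sites never need to run the check at all and no user has to prove their identity to anyone.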

There are a number of benefits to this approach. First, it does not require anyone to disclose or verify their identity, which means it does not create new privacy risks by forcing users to share their government IDs or allowing online services to link their online activity to their offline identities. Second, it is low impact. Adult users could simply continue using the Internet as they do today. Similarly, the vast majority of websites and apps that are meant for the general public would not have to take any action. Third, it would be entirely voluntary for users. Parents who want to control what their children see on the Internet can choose to use this feature, and other parents can choose not to. The process would be opt-in—parents would be in control, and they could make decisions based on their own values and the maturity of their children.

This approach would also build on many of the tools the private sector has already created. For example, Google and Apple offer a host of parental controls so that parents can manage their children’s accounts. On Apple devices, parents can manage their children’s screen time and set age-related restrictions for different media, including apps, music, movies, and books. Google offers similar options for parents to manage their children’s online experience through its Family Link service. Platforms can continue to offer enhanced parental tools, but by creating a child flag, they can ensure that downstream apps and websites also have the information necessary to keep children safe.

Platforms appear willing to work with Congress on this issue. Meta’s Global Head of Safety, Antigone Davis, recently called for “federal legislation that requires app stores to obtain parents’ approval whenever their teens under 16 download apps.” She correctly notes that a patchwork of state laws on age verification will quickly become unmanageable, with platforms having to create different rules for different children depending on what state they live in. Solving this problem at the federal level makes much more sense. In addition, requiring each app or website to conduct its own age verification wastes time and resources for users and businesses—imagine those annoying cookie banners but one hundred times worse. It is more efficient to determine once whether a user account is associated with a child at the operating system level (or, as Meta has proposed, the app store level) and then pass along this information to other apps.

Platforms could also do better by making their parental controls for child user accounts interoperable. Many children will use multiple devices across multiple platforms, such as a Chromebook, an iPhone, and a Kindle, and the number of devices will likely only grow in the coming years to include smart TVs, gaming consoles, smartwatches, and VR headsets. Developing parental controls that allow adults to easily manage their children’s online safety across all of these platforms will be crucial to providing sufficient oversight. In addition, digital literacy initiatives will be necessary to ensure that parents understand how to use these tools once they are available. And enabling these parental controls should be dead simple—something parents can easily do with existing devices or when setting up a new one.

The biggest risk of this approach is that Congress could inadvertently (or intentionally) expose many innocuous apps and sites to liability for collecting information from children. The Children’s Online Privacy Protection Act (COPPA) imposes certain obligations on sites, even those directed at a general audience, when they have “actual knowledge” that they collect information from children under the age of 13. For example, apps and websites cannot collect information from those children without their parents’ permission. Requiring apps and websites to recognize a “child flag” could give many sites that “actual knowledge,” triggering COPPA obligations even if they do not host age-restricted content. Congress should amend this COPPA requirement so that websites directed at a general audience with common features, like user feedback forms or customer service chatbots, do not run afoul of this rule.

Policymakers are right to take steps to safeguard children online, but mandatory age verification is not the best solution. Fortunately, a better option exists that would empower parents without the risks of mandatory age verification.
