Lacking a Federal Standard, States Try and Fail to Solve Problems Faced by Kids Online
With 12 different comprehensive state data privacy laws (and counting), the United States already faces a patchwork of legislation that will complicate compliance and cost businesses billions. Now many states are adding to the problem by trying their hand at crafting legislation to address the tech policy topic du jour: children’s online safety and privacy. As with other areas of digital policy, this legislative patchwork demonstrates the need for a federal standard that protects children without violating users’ free speech or privacy.
California took an early lead on children’s online safety and privacy legislation with the California Age-Appropriate Design Code Act (CAADCA). The Act requires that online services that children are “likely to access”—not just services targeted at children—consider the best interests of children and prioritize children’s privacy, safety, and well-being over commercial interests. Online services must also complete a “Data Protection Impact Assessment” for each new product, service, or feature they offer; among other things, the assessment must determine whether the product, service, or feature could subject children to “harmful or potentially harmful” content.
After California enacted the CAADCA in 2022, the tech industry association NetChoice sued the state of California, citing First Amendment violations. NetChoice argued that the law gives the California state government unconstitutional control over online speech by punishing online services if they do not protect underage users from “harmful or potentially harmful” content and prioritize content that promotes minors’ best interests. The CAADCA does not define harmful content or content that promotes minors’ best interests, leaving platforms to guess regulators’ intentions and risk fines of up to $7,500 per affected child if they guess incorrectly. A federal judge granted a preliminary injunction against the law while the case proceeds.
This decision threw a wrench into the works of many other states that were discussing their own age-appropriate design codes, including Connecticut, Illinois, Maryland, Minnesota, Nevada, New Jersey, New Mexico, New York, Oregon, and Texas. The outcome of the case against the CAADCA will likely determine whether or how these states move forward with their own legislation.
Simultaneously, several states have introduced or passed legislation that would require social media platforms to verify the ages of their users and obtain parental consent for users under a certain age, either 16 or 18. Currently, most social media platforms only allow individuals 13 or older to create an account, typically confirmed by having users enter their date of birth, with no verification that they are telling the truth. As a result, many children under 13 lie about their age and create social media accounts anyway. Some social media platforms attempt to find underage users by allowing other users to report suspected accounts or by using AI to estimate users’ ages, but none of these measures is foolproof.
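To see how little friction today’s self-declared age gates create, consider a minimal sketch of the kind of check described above. The code is illustrative only, written in Python with hypothetical names; no platform publishes its actual sign-up logic.

```python
from datetime import date

# Minimum sign-up age most platforms use today.
MINIMUM_AGE = 13

def is_old_enough(claimed_birthdate: date, minimum_age: int = MINIMUM_AGE) -> bool:
    """Return True if the self-reported birthdate implies the user meets the minimum age.

    The check relies entirely on whatever date the user types in;
    nothing here confirms the claim against any external record.
    """
    today = date.today()
    had_birthday_this_year = (today.month, today.day) >= (
        claimed_birthdate.month,
        claimed_birthdate.day,
    )
    age = today.year - claimed_birthdate.year - (0 if had_birthday_this_year else 1)
    return age >= minimum_age

# A child whose real birthdate implies an age under 13 is blocked,
# but simply typing an earlier birth year passes the gate unchallenged.
print(is_old_enough(date(2015, 6, 1)))
print(is_old_enough(date(2005, 6, 1)))
```

Because the only input is a self-reported date, the gate works exactly as well as users’ honesty, which is why states are now looking at stronger forms of verification.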
So far Arkansas, Connecticut, Louisiana, Ohio, and Utah have passed age verification and parental consent laws for social media, and Pennsylvania and Wisconsin are considering similar legislation. The exact details vary from state to state, but broadly speaking, all these bills and laws require social media platforms to verify the ages of users in those states and, if the user is under the specified age (either 16 or 18), obtain parental consent before allowing the user to create an account.
NetChoice sued Arkansas over its Social Media Safety Act, also called S.B. 396, once again citing First Amendment violations. In its complaint, NetChoice argued that the law infringes on minors’ free speech rights by denying them access to social media without parental consent and infringes on adults’ free speech rights by requiring them to prove their age in order to access these platforms, which are important tools for communication in the digital age. As in NetChoice’s case against California, a federal judge granted a preliminary injunction against the law.
Beyond potential First Amendment violations, social media age verification laws pose additional challenges. Platforms will need to determine how to verify users’ ages in ways that comply with the various state laws, which in most cases offer only vague guidance. Platforms may have to collect and retain information from users’ government-issued IDs, which include not only an individual’s date of birth—something users already must provide to create an account on many social media platforms—but also additional sensitive personal data, including an individual’s full name and address.
This data collection would pose privacy risks that may deter some users from participating in social media, particularly those who value their anonymity, a group that often includes members of vulnerable populations. For example, some people may be comfortable with TikTok having information about which videos they watch, but less comfortable with the Chinese-owned company having a copy of their government IDs. Additionally, adults who lack a form of government-issued identification (as many as 7 percent of Americans) would lose access to social media entirely, and the share of people without ID is even higher among lower-income individuals, Black and Hispanic individuals, and young adults.
The landscape of state legislation addressing children’s online safety and privacy demonstrates not only the difficulty of regulating social media and other online services but also the need for a federal standard. These state laws come from a noble place—the desire to protect children—but have serious flaws and threaten to do more harm than good. A federal law could replace this patchwork approach with a single national standard, but only if Congress can strike the right balance among the concerns of parents, the rights of children and other users, and the technical limitations of social media platforms.