Hawley’s Proposal to Protect Children on Social Networks Is a Misfire

Daniel Castro
October 18, 2019


All too often, well-intentioned proposals aimed at protecting children result in rules that could limit innovation for the very population they are intended to help. The most recent example of this comes from Sen. Josh Hawley (R-MO), who has introduced legislation to stop social media sites from recommending any amateur video of a person under the age of 18—be it the latest child music prodigy or a viral video of a toddler’s antics.

Concerns about child safety online are certainly justified. In June, the New York Times reported that researchers at Harvard University’s Berkman Klein Center for Internet & Society found that YouTube’s recommendation algorithms were surfacing videos of prepubescent, partially clothed children to people seemingly motivated by sexual interest. In response, Sen. Hawley introduced S. 1916, the Protecting Children from Online Predators Act, which would make it illegal for any app or website hosting user-submitted video content to “recommend any video to a user of the covered interactive computer service if [the service provider] knows, or should have known, that the video features 1 or more minors.” Violators could face fines of up to $10,000 per day or up to five years in prison.

While policymakers should do more to stop online predators and pedophiles, such as by giving law enforcement more resources to investigate and prosecute perpetrators, Hawley’s proposed legislation is not the right strategy. The bill would penalize social media platforms for recommending videos that feature any minors. Notably, it would make platforms liable both for videos “designated by the user that uploaded the video as featuring 1 or more minors” and for videos “incorrectly designated by the user that uploaded the video as not featuring 1 or more minors.” That requirement means platforms would be liable for recommending any video in which a child appears, even briefly in the background.

The biggest impact of this legislation would be on apps and online services popular among teenagers. The video game streaming website Twitch, for example, would not be able to recommend teenage streamers to its users. And TikTok—the wildly popular social media site that features short video clips of Gen Z users lip-syncing or acting out sketches—would not be able to recommend much of its content.

The proposal would also affect how parents share videos of their children, and how teenagers share their own content, on social networks. Under this proposal, popular teenage YouTubers, such as 15-year-old Matthew Morris, who has 12.6 million subscribers, would not have built the followings, or earned the revenue, they have today. Recommendations on these platforms can make an enormous difference for young artists: Talent manager Scooter Braun discovered a 12-year-old Justin Bieber through a YouTube recommendation and launched him into pop stardom.

Moreover, this legislation is premature because social media sites have not been ignoring the problem. For example, in the first quarter of 2019, YouTube removed over 78,000 channels and 800,000 videos that violated its child safety policies. It also announced multiple initiatives at the beginning of the year to better protect children on its platform, including disabling comments on videos featuring young minors and using automated tools to better detect and remove predatory comments.

Hawley’s bill may be well-intentioned, but it is not the right approach to addressing concerns about online child safety. Too often, attempts to address child safety are reactionary and end up limiting children’s access to new technology. For example, when Facebook released its child messaging app, Messenger Kids, in 2018, several organizations and lawmakers almost immediately called on the company to take down the app, arguing that all social media use harms children, a claim research simply does not support.

Policymakers’ goal should be to protect children, and while Congress should provide oversight of businesses whose products affect children, it should leave the design of social networks, including their child safety measures, to the technologists, designers, and researchers with the expertise to address these issues most effectively.