
AI and Kids’ Safety Need Separate Solutions, Not New Problems
If the United States wants to continue leading the world in digital policy, it needs to resolve key policy debates, from AI regulation to children’s online safety, in targeted ways that facilitate innovation, streamline regulation, and address specific harms without infringing on individuals’ rights. Senator Marsha Blackburn (R-TN) claims her TRUMP AMERICA AI Act, released as a discussion draft on March 18, 2026, will accomplish all of this at once. However, the draft proposal combines several counterproductive policies that would set the debate over how best to protect children online back several steps, hindering AI innovation in the process.
To protect children, the TRUMP AMERICA AI Act includes several controversial provisions, some of which are only tenuously related to AI. These include provisions drawn from the Kids Online Safety Act (KOSA) and a provision to sunset Section 230 of the Communications Decency Act, the law that limits platforms’ liability for user-generated content while safeguarding their right to moderate harmful or objectionable material. The legislation would also establish a duty of care for AI chatbot developers to prevent and mitigate foreseeable harms, ban minors from using AI companions, and require chatbots to verify users’ ages.
Sen. Blackburn originally introduced KOSA alongside Sen. Richard Blumenthal (D-CT) in 2022. KOSA would establish a “duty of care” requiring online services reasonably likely to be used by minors to ensure their design features prevent and mitigate harm to those minors. However, this language is vague and undefined in existing case law, which would complicate compliance. Online services may overcorrect, making it more difficult for minors, or potentially all users, to access helpful content related to mental health, suicide, addiction, eating disorders, sexuality, and more. The duty of care provision may even violate the First Amendment, as the government cannot dictate an online service’s editorial decisions, which could include design features.
Many of KOSA’s other provisions would meaningfully protect children online, including requirements for certain parental controls and for greater transparency into platforms’ safeguards, personalized recommendation systems, and advertising practices. However, as the name implies, KOSA is an online safety bill, not an AI bill, and does not belong in a national AI framework. Instead, Congress should continue to iterate on the bill separately; packaging it with other, unrelated proposals only hinders substantive debate and limits its potential positive impact.
The TRUMP AMERICA AI Act’s duty of care for AI chatbot developers mirrors KOSA’s language, requiring developers to “exercise reasonable care in the design, development, and operation” of their chatbots to prevent and mitigate “reasonably foreseeable” harms. As with KOSA, this vague language and broad scope would expose developers to a minefield of litigation, forcing them to divert extensive time and resources from innovation and safety features toward compliance and legal defense.
Sunsetting Section 230 would only further contribute to a fraught legal landscape for an even wider variety of online services. Lawmakers have blamed Section 230 for any number of online problems, and some seek to remove its protections in order to punish “Big Tech” companies for various grievances. However, large tech companies are not the only ones that benefit from Section 230. Any online service that hosts user-generated content relies on it, as do everyday Internet users when they forward emails or reshare social media posts.
It is unclear how Section 230 applies to AI-generated content, a nuanced issue that does not necessitate throwing out the legal framework that has positively shaped the U.S. digital economy over the last three decades. The main result of sunsetting Section 230 would be an influx of frivolous lawsuits against online services that host user-generated content. Because of the First Amendment, online services would likely win most of these lawsuits, but not without going through the expensive litigation process.
Facing two duties of care and stripped of Section 230’s legal shield, online services with fewer resources could go out of business entirely. Others would need to raise prices or begin charging for previously free services to recoup costs, while still others may adopt strict content guidelines that censor controversial speech. These outcomes would not only fail to address specific AI-related harms or harms to children, but would also hinder AI innovation.
A ban on minors using AI companions is also the wrong approach, as it would remove access to tools that can provide real benefits to children without addressing the underlying risks. Many young people already use AI companions for constructive purposes, such as receiving academic help, practicing social skills, or seeking emotional support in a low-pressure, judgment-free environment. Cutting off access entirely would eliminate these positive use cases without effectively targeting, or even identifying, actual harms. It would also halt American firms’ development of AI companions better suited to children, leaving that market to foreign competitors.
At the same time, the TRUMP AMERICA AI Act’s blanket ban creates new problems by requiring intrusive age verification for all AI chatbots, not just AI companions, raising serious implementation challenges and privacy concerns for users of all ages. Requiring users to hand over personal data to access useful AI tools is likely to deter many with understandable concerns about the privacy of that data, especially in the case of AI tools with potential links to foreign governments. Users may also worry that their private interactions with AI chatbots and companions would be linked to their real identities.
Taken together, the children’s safety provisions within the TRUMP AMERICA AI Act reflect a fundamental mismatch between the bill’s stated ambitions and its design. By imposing sweeping and ill-defined duties of care, eliminating longstanding liability protections, and placing blunt restrictions on minors’ access to AI tools, the bill would chill innovation, undermine free expression, and create new privacy and compliance burdens that detract from any provisions that would meaningfully improve child safety.
A more effective approach would separate AI governance from broader online safety debates, preserve the legal frameworks that have enabled the Internet economy to flourish, and pursue targeted, evidence-based policies that address specific harms. If the United States aims to lead in AI while protecting youth online, policymakers should resist the urge to legislate broadly and instead craft precise, balanced solutions that can achieve both goals without sacrificing either.
