
Bans on AI Companions Hurt the Kids They Aim to Protect
If it feels like AI companion chatbots are dominating the news cycle, that’s not a hallucination. Policymakers are increasingly concerned that children are forming unhealthy and even potentially dangerous emotional attachments to AI companion chatbots and are rushing to ban their use. These bans are troubling because they would hinder young people’s ability to use chatbots for legitimate purposes, require AI companies to implement problematic age verification mechanisms, and often end up regulating general AI chatbots in addition to AI companions. Instead, policymakers should focus on increasing parents’ and children’s autonomy through better parental controls and customization.
In the past month, California passed SB 243, a first-of-its-kind law regulating AI companions. In response, Character.AI, a major AI companion platform, announced a ban for users under 18, and OpenAI announced new service updates for its teen accounts.
In Congress, Senator Josh Hawley (R-MO) introduced the GUARD Act, co-sponsored by Senators Richard Blumenthal (D-CT), Katie Britt (R-AL), Mark Warner (D-VA), and Chris Murphy (D-CT), which would ban AI companions entirely for youth under 18. Separately, following a September Senate Judiciary hearing and an inquiry by the Federal Trade Commission (FTC), Senator Jon Husted (R-OH) introduced the CHAT Act, which would ban youth from using AI companions unless their parent or guardian registers the account.
And this week, the Oversight and Investigations Subcommittee of the House Energy and Commerce Committee is holding a hearing, “Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots.” This flurry of federal and state action signals a growing appetite to restrict or even eliminate youth access to AI companions, raising urgent questions about whether sweeping bans would do more harm than good.
First, while poorly designed AI companions can cultivate parasocial behaviors, banning the technology altogether would cut youth off from the 24/7 support that safe AI companions can provide. According to a recent survey, 72 percent of youth have tried AI companions at least once. Many use them because they are fun or out of curiosity about the technology. Most see them as a tool, not a friend, but many still use them to seek advice, practice social skills, or get emotional or mental health support.
AI companion tutors can give students judgment-free academic support tailored to their individual learning style and pace. They can also serve as therapeutic tools, helping users process emotions, identify patterns in their thinking, and develop coping strategies. AI companions can provide a “safe forum” for young users to explore aspects of their personality that they may not feel comfortable expressing in real life.
Second, bans on AI companions for children would require these services to implement age verification mechanisms for all of their users, which presents privacy risks for adults. Each verification method carries its own trade-offs, and many require users to disclose their real identities to use these services. Existing age verification laws have typically focused on adult websites and social media platforms, but these new bills suggest chatbots are the next target.
Another challenge is that age verification requirements often apply only to certain types of services, such as AI companions that engage in sexual or violent conversations. But drawing those lines is not easy; reasonable people may disagree, and the resulting requirements can end up restricting too many positive uses or too little of the harmful content they target.
Lastly, most legislation lumps all AI chatbots together, regulating more technology than intended. AI chatbots are designed to engage in conversations with humans through text, voice, or video. But AI companions take that technology a step further, as they are AI systems designed to show empathy, ask personal questions, and provide emotional support.
“Chatbot” has become a convenient catchall for many AI tools, even if it blurs important distinctions and makes clear definitions hard to draw. For example, while the CHAT Act claims to be focused on AI companions, its language would sweep in ChatGPT, Google’s Gemini, Anthropic’s Claude, and even Amazon’s Alexa or Apple’s Siri—none of which are designed to be AI companions.
A better approach to keeping children safe when they use AI companion chatbots would be to focus on providing more transparency about the types of conversations AI companions may engage in and on improving parental controls to allow for greater customization. State and federal policymakers alike should not rush to ban AI companions but should instead work with industry to prioritize safe use cases, parental controls, and children’s autonomy.
