
AI Companions Risk Over-Regulation with State Legislation

May 21, 2025

Even before the recent popularization of artificial intelligence (AI), humans have anthropomorphized and imagined relationships with technology. This cycle continues with AI companion chatbots, and state policymakers are paying attention. But instead of legislating based on a real understanding of the benefits and potential harms of AI companions, states risk over-regulating, even going so far as to ban minors’ access altogether, before the technology can reach its full potential.

AI chatbots are designed to engage in conversations with humans through text, voice, or video. But AI companions take that technology a step further, as they are AI systems designed to show empathy, ask personal questions, and provide emotional support.

AI companions present an avenue for users to try out aspects of their personality that they may not feel comfortable displaying openly in real life, providing a “safe forum” for this sort of exploration. For example, AI companion tutors can give students judgment-free academic support tailored to each student’s unique learning style and pace. They can also serve as a therapist of sorts, helping users process emotions and identify patterns in their thinking, and offering coping strategies. In addition, AI companions are available 24/7 for support, including at times when vulnerable or lonely users may not be able to talk to a real-life friend or family member.

However, while AI companions offer round-the-clock availability, poorly designed companions risk creating feedback loops: an AI companion can simply respond with whatever the user wants to hear, leading to sycophantic behavior. This could create unhealthy parasocial relationships (one-sided emotional attachments), which children are particularly at risk of developing because of their stage of cognitive development. While such attachments are a normal part of child development and these relationships are not unusual, AI companions could exacerbate the issue by making fictional or virtual characters seem real. There are also concerns that increased chatbot usage will lead to unhealthy overuse of, reliance on, or addiction to the technology, especially for children and vulnerable adults.

Most state legislation surrounding AI companions centers on children’s usage, but children are not the only ones who can benefit from AI companions, nor are they the only demographic using them. Even so, many states are starting to introduce bills focused only on the risks of AI companions.

Table 1: Comparison of state AI companion bills’ provisions. The table compares bills from New York, California, North Carolina, Minnesota, and Utah across six categories: requiring disclosure, addressing suicidal ideation/self-harm, requiring age verification, banning usage by minors, targeting reward systems, and targeting addiction.

Some states, such as New York with A6767, try to address fears that this technology will lead to addictive behaviors. Such fears have been mirrored for decades across all sorts of technologies, from television to video games to social media. New York’s bill also requires platforms to take “reasonable steps” to prevent a “companion chatbot from providing rewards to a user at unpredictable intervals…or from encouraging increased engagement.” Just as in video games, design features that resemble gambling, such as loot boxes, tokens, and virtual rewards, exploit children’s inability to understand that they are being manipulated into spending more money or time playing a game, or, in this case, chatting with an AI companion.

In addition, most states require clear disclosures to users that the AI companion is not human. North Carolina’s SB 624 goes further, requiring a disclosure that chatbots are “incapable of experiencing emotions such as love or lust.” Most bills, such as California’s SB 243, also call for protocols addressing suicidal ideation or self-harm expressed by the user, targeting some of the mental health implications of AI companions.

Utah’s HB 452, while not narrowly focused on children, targets mental health chatbots specifically. The bill, which was signed into law on March 25, 2025, requires safeguards such as oversight by licensed human mental health professionals and regular testing and review of the chatbot’s performance.

Tackling platforms’ virtual reward systems that incentivize additional time online, requiring clear disclosures, and involving licensed professionals in the design and testing of mental health chatbots are much more reasonable approaches than more sweeping and restrictive measures that other states have considered.

For example, North Carolina’s bill gives platforms a “duty of loyalty” to their users. This “duty” language mirrors how policymakers are approaching children’s online safety more broadly, such as the Kids Online Safety Act’s “duty of care” provision. The language of this provision is vague and undefined by existing case law, and as online services attempt to comply with a “duty of care” or “duty of loyalty”—and avoid liability—they may overcorrect, making it more difficult for minors, or potentially all users, to use AI companions.

Meanwhile, Minnesota’s SF 1857 goes further than any other state’s bill by banning minors from accessing chatbots for recreational purposes. But blanket bans for one particular age group are especially bad policy: they take away the benefits of AI companions for young people, stumble into the pitfalls of online age verification, and are ineffective against children who skirt bans using VPNs or their parents’ information.

As more states consider AI companion legislation, this technology has become part of the larger conversation about protecting kids from supposed online harms, platforms’ “duty” to protect children, age verification, and fears of addictive technology. As these bills head to committee, and as more states potentially introduce AI companion legislation, states should avoid bills that would ban children’s access to chatbots or any other potentially beneficial technology. States should also avoid creating vague “duties” for platforms that are likely to have unintended consequences for users of all ages. Policymakers need to better understand the positives of AI companions and chatbots before addressing the negatives.
