
Ten Ways Policymakers Should Respond to the Grok Bikini Fiasco
It’s a new year, and Internet policy is already off to a rocky start. In recent days, a flood of users on X has prompted Grok, xAI’s chatbot, to generate sexualized images of real people without their consent, mostly depicting women and girls in skimpy clothing, such as bikinis, and in provocative poses. The backlash has been swift. Many have rightly condemned the individuals making these requests, and governments in the United Kingdom, France, India, and elsewhere have announced investigations.
This moment raises a familiar question: When people weaponize new technologies to harass, exploit, or abuse others, how should policymakers respond?
The seriousness of these incidents will understandably prompt calls for decisive action. But policymakers should resist the impulse to rush toward sweeping, AI-specific rules. Blunt regulatory responses risk unintended consequences, including undermining free expression, creating unenforceable mandates, and generating cross-border legal conflicts. Instead, they should focus on enforcing existing laws, closing genuine gaps, and responding in ways that are targeted, effective, and proportionate to the harms involved.
1. Aggressively Enforce Child Sexual Abuse Material (CSAM) Laws
Some of the most serious allegations involving Grok concern images of minors. Creating, possessing, or distributing such images is illegal in most countries. For example, under U.S. federal law, it is a crime to create or distribute CSAM depicting a real child, regardless of whether the image is authentic or generated using AI. (AI-generated images that do not depict a real child can be prosecuted under federal obscenity laws.)
Importantly, CSAM is not limited to nude images, so Grok’s alleged sexualized images of children could violate these laws. In the United States, courts apply the Dost test to assess whether non-nude depictions of children are sexually exploitative and therefore illegal. Policymakers should ensure that law enforcement agencies have the resources and expertise needed to investigate and prosecute these crimes swiftly and consistently.
Because the conduct at issue is already illegal, the most effective response is not new legislation but rigorous enforcement of existing law.
2. Enforce Harassment and Cyberstalking Laws
Last year, Congress passed the TAKE IT DOWN Act, which made it illegal to knowingly share non-consensual intimate images (NCII) of adults, including images created using AI. However, non-consensual sexualized images of adults—such as Grok’s “bikini photos”—can still cause serious harm even when they do not meet the federal legal threshold for NCII. That does not mean the law is silent on this behavior.
Existing harassment and cyberstalking laws already provide avenues for enforcement. Individuals who create and distribute such images to intimidate, humiliate, or target others should face prosecution where the law permits. These protections apply online just as they do offline, including in workplaces and educational settings under Title VII and Title IX in the United States.
Recent convictions in France for online harassment show that these laws are not merely symbolic and can be enforced, particularly when authorities prioritize them. Extending these protections beyond high-profile cases requires giving law enforcement the training and resources to respond effectively when ordinary individuals face online harassment.
3. Understand What Section 230 Does—and Does Not—Do
Section 230 of the Communications Decency Act is often blamed for online harm, but its role is frequently misunderstood. Section 230 is based on a simple principle: Everyone is responsible for their own speech, not someone else’s.
Crucially, Section 230 does not provide immunity from federal criminal law. It does not shield anyone from prosecution for obscenity or sexual exploitation of children. Policymakers should resist any calls to weaken Section 230 in response to harms it was never designed to address.
4. Hold Services Accountable for Their Own Conduct
Section 230 also does not protect platforms from liability for their own speech or actions. That distinction matters in the context of generative AI.
When a chatbot itself generates and publicly posts images in response to user prompts, the service should bear responsibility for the legality of those outputs. This differs from cases where users independently create content using tools and then choose whether to distribute it themselves.
Policymakers should note this distinction. It allows accountability where appropriate without collapsing into a rule that treats every toolmaker as the speaker of everything its users create.
5. Defend Free Speech Values
Free expression inevitably includes lawful but offensive speech. Different countries will impose different limits, but in a free society, not everything that is harmful or distasteful should be illegal. Government censorship carries real risks, particularly when rules are vague or overbroad.
That does not mean society must tolerate bad behavior. Individuals can and do face consequences for lawful speech, including reputational damage or loss of professional opportunities. The same social norms should apply to AI-generated speech. Public condemnation and private accountability remain powerful tools.
6. Update Digital Literacy, Sex Education, and Anti-Bullying Programs
Schools already address issues like bullying, online safety, and consent. AI raises new challenges within those same domains.
Educational programs should teach students about the harms caused by creating or sharing non-consensual images, the legal risks involved, and the lasting personal consequences. Learning how to manage risks associated with AI is an essential skill in today’s digital society.
7. Make Exploitative Services Unwelcome
The Grok incident may be new, but the underlying behavior is not. For years, online services that let users digitally “undress” people in photos have operated under the radar, often with minimal safeguards and inconsistent enforcement of their own terms.
Although these tools are illegal in some jurisdictions, eliminating them entirely from the Internet is likely impractical. The technical capability now exists and cannot be uninvented. Policymakers should instead focus on making such services difficult to sustain by encouraging responsible intermediaries—payment processors, advertisers, hosting providers, and cloud services—to refuse to support them.
This approach has proven effective in other contexts and avoids creating new speech restrictions.
8. Adopt Tech-Neutral Policy Responses
A common mistake in technology policy is to fixate on the tool rather than the harm. Fake images, harassment, and sexual exploitation long predate AI and can occur with or without it. While non-consensual sexualized images of adults may not rise to the same level of severity as crimes involving children or physical violence, they can still cause meaningful personal, professional, and psychological harm to the individuals targeted.
At the same time, societal norms and expectations often evolve in response to new technologies. As AI-generated media becomes more common and more easily recognized, some of the initial shock and stigma associated with certain types of synthetic content may diminish over time. That evolution, however, does not eliminate the need for accountability when such tools are used to harass, humiliate, or target others.
Policymakers should therefore ensure that laws do not create AI-specific loopholes—or non-AI loopholes—by treating similar harms differently based on how they are produced. Tech-neutral rules grounded in outcomes, not mechanisms, are more durable, more flexible, and ultimately more fair.
9. Respect National Sovereignty
Countries have different legal traditions and speech norms. In the absence of global consensus, governments should avoid imposing their content standards extraterritorially.
Indeed, one source of transatlantic tension has been efforts by the UK and the EU to impose their Internet content moderation preferences on the United States through the Online Safety Act and the Digital Services Act, respectively.
Geographically limited takedowns and enforcement actions respect national sovereignty while allowing countries to uphold their own laws.
10. Encourage Voluntary and Market-Based Solutions
Most online services are good actors that do not want to be associated with harmful content. Industry-led best practices, codes of conduct, and voluntary safeguards often adapt faster than formal regulation.
Market pressure also matters. Advertisers, users, and business partners can and do withdraw support from platforms that fail to address public concerns. Policymakers should allow these dynamics to work before reaching for blunt regulatory tools.
A Measured Response Is the Right Response
The misuse of Grok is disturbing and underscores the persistence of online abuse, but it does not justify abandoning long-standing principles of free expression, innovation, and tech-neutral policymaking. Society has faced similar challenges before as new technologies lowered the cost of harmful behavior. Each time, the most effective responses combine enforcement, norms, education, and targeted accountability—rather than reactive, ill-defined rules that create new legal conflicts while leaving victims no better off.
AI should be no different.