Why Watermarking Text Fails to Stop Misinformation and Plagiarism

September 18, 2024

The rise of generative AI, with tools like ChatGPT, has revolutionized how people create text. These tools can produce content that closely resembles human writing, allowing users to quickly draft fictional stories, news articles, meeting summaries, and personal essays. However, people can misuse the technology for harmful purposes, such as plagiarizing works and generating fake news. To address these risks, some countries have considered requiring AI system providers to label their outputs as AI-generated content. One proposed method is watermarking, which involves embedding a distinct and unique signal in the AI content. However, watermarking AI-generated text is not only unreliable on a technical level but also ineffective at tackling issues like misinformation and plagiarism.

Text watermarking for generative AI hides information within the text output of an AI system. Computers can then check whether the text carries a watermark and confirm that it was made using an AI tool. Ideally, the watermark is imperceptible to readers, both to avoid degrading the quality of the output and to make it harder for anyone to remove. However, watermarking text is far more complex than watermarking other media types. Images, for example, are easier to watermark because pixels can be subtly altered, whereas changing letters in words (or words in paragraphs) risks creating gibberish. To work around this, text watermarking changes the likelihood that an AI system produces certain word choices, but only in ways that do not reduce the quality or meaning of the text. These small adjustments create a hidden statistical pattern that acts as a watermark. To be effective, watermarks should be robust enough that they remain detectable even if users alter the content. However, users can easily remove a text watermark by rewriting or paraphrasing the text, translating it into another language, or generating responses too short for the pattern to show up. In addition to these technical limitations, text watermarking fails to address the underlying problems of plagiarism and fake news.
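To make the mechanics concrete, the sketch below shows a toy version of this kind of statistical watermark, loosely in the spirit of published "green list" schemes: at each step, the previous word deterministically selects a favored subset of the vocabulary, the generator nudges its word choices toward that subset, and a detector checks whether favored words appear more often than chance would predict. The vocabulary, parameters, and function names here are invented for illustration and do not reflect any deployed system's actual method.

```python
# Toy sketch of statistical text watermarking via token biasing.
# Illustrative only: a real system would bias a language model's logits,
# not the flat, made-up vocabulary used here.
import hashlib
import math
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]
GREEN_FRACTION = 0.5   # share of the vocabulary favored at each step
BIAS = 2.0             # logit boost added to "green" tokens

def green_list(prev_token: str) -> set[str]:
    """Deterministically pick the favored tokens from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def sample_next(prev_token: str, logits: dict[str, float], rng: random.Random) -> str:
    """Sample the next token after nudging probabilities toward the green list."""
    greens = green_list(prev_token)
    boosted = {t: v + (BIAS if t in greens else 0.0) for t, v in logits.items()}
    total = sum(math.exp(v) for v in boosted.values())
    r, cum = rng.random(), 0.0
    for token, v in boosted.items():
        cum += math.exp(v) / total
        if r <= cum:
            return token
    return token  # fallback for floating-point edge cases

def detect(tokens: list[str]) -> float:
    """Return a z-score: how much more often tokens fall in the green list than chance."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / math.sqrt(variance)

if __name__ == "__main__":
    rng = random.Random(0)
    text = ["the"]
    for _ in range(200):
        # Stand-in for a language model's next-word probabilities.
        logits = {t: 0.0 for t in VOCAB}
        text.append(sample_next(text[-1], logits, rng))
    print(f"z-score of watermarked text: {detect(text):.1f}")  # far above 0
    print(f"z-score of unbiased text:    {detect(random.Random(1).choices(VOCAB, k=200)):.1f}")
```

Even in this toy setting, the weakness is visible: because detection depends on the exact sequence of words, paraphrasing or translating the text replaces the biased word choices and drives the detector's score back toward zero.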

As for plagiarism, watermarking will not stop motivated students from finding other ways to cheat, such as hiring someone to write their assignments. Instead of relying on this technical solution, schools should emphasize the value of original thinking and academic integrity. If generative AI becomes a common tool for students to produce their assignments, current methods of knowledge assessment may no longer be appropriate and will need to be rethought by educational organizations. Rather than stigmatizing or banning the technology, educators should focus on integrating generative AI into learning and encouraging responsible use.

In terms of fake news, even when a watermark is present, it offers no insight into the truthfulness of the content. Moreover, most people do not scrutinize every piece of information that crosses their path and often share information that confirms their biases or appeals to their emotions. More importantly, much false information originates from non-AI sources. Watermarking requirements for AI systems will never address the entire problem and risk unfairly stigmatizing legitimate AI-generated content. Combating misinformation requires broader strategies, such as enhancing media literacy, improving content moderation on social media, and deploying content-tracing solutions that work for all digital content.

Not only is text watermarking technically flawed, but it is also ineffective in curbing plagiarism and the spread of misinformation. It does not address the underlying reasons people share and believe fake news or students cheat on assignments. If policymakers want to address these issues, they should not rely on technically limited solutions like text watermarking but should pursue more comprehensive strategies.

Image Credit: Kelsea Petersen/NBC News
