The AI Act’s AI Watermarking Requirement Is a Misstep in the Quest for Transparency

July 9, 2024

The AI Act, formally adopted by the EU in March 2024, requires providers of AI systems to mark their output as AI-generated content. This labelling requirement is meant to let users recognise when they are interacting with AI-generated content, addressing concerns such as deepfakes and misinformation. Unfortunately, watermarking, one of the AI Act’s suggested methods for meeting this requirement, may not be feasible or effective for some types of media. As the EU’s AI Office begins to enforce the AI Act’s requirements, it should closely evaluate the practicalities of AI watermarking to avoid subjecting AI providers to unreasonable and unworkable obligations.

AI watermarking is the process of embedding a distinctive signal, known as a watermark, into the output of an AI model, such as text, audio, or images. This signal identifies the content as AI-generated. Sometimes these watermarks are inconspicuous: for example, a watermark may be created by making changes to an image that are imperceptible to the naked eye. Other times, the watermark is noticeable, such as a visible symbol overlaid on an image. Ideally, watermarks should be tamper-resistant, so that the watermark remains even if someone modifies the output.
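
To make the idea concrete, here is a minimal sketch of an imperceptible image watermark using a simple least-significant-bit (LSB) scheme. The payload string, function name, and bit position are all illustrative choices, not any provider’s actual method; real systems use far more sophisticated techniques.

```python
import numpy as np

def embed_lsb(image: np.ndarray, payload: bytes = b"AI-GENERATED") -> np.ndarray:
    """Write `payload` into the least-significant bits of the first pixels.

    Each pixel value changes by at most 1 out of 255, so the mark is
    invisible to the naked eye.
    """
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = image.flatten()  # flatten() copies, so the input is untouched
    if bits.size > flat.size:
        raise ValueError("image too small for payload")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(image.shape)

# Watermark a stand-in "generated" image and confirm the change is invisible.
img = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_lsb(img)
print(np.abs(marked.astype(int) - img.astype(int)).max())  # prints 0 or 1
```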

There are two main methods for watermarking AI-generated content. In the first method, developers train their AI models to embed watermarks in their output as part of the generation process. In the second, developers apply a watermark after an AI model generates output. In either case, specialised algorithms can then detect whether a particular piece of content contains a watermark.
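
For a post-hoc scheme like the hypothetical LSB sketch above, detection can be as simple as re-reading the low-order bits and comparing them against the known payload. This is only an illustration; production detectors typically rely on statistical tests rather than exact matches.

```python
import numpy as np

def detect_lsb(image: np.ndarray, payload: bytes = b"AI-GENERATED") -> bool:
    """Return True if the known payload appears in the image's low-order bits."""
    expected = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    found = image.flatten()[: expected.size] & 1
    return bool(np.array_equal(found, expected))

# With `marked` and `img` from the embedding sketch above:
#   detect_lsb(marked)  -> True
#   detect_lsb(img)     -> False (an unmarked image matches only by chance)
```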

Article 50(2) of the AI Act mandates that providers of general-purpose AI systems ensure that their output is “marked in a machine-readable format and detectable as artificially generated or manipulated.” In addition, they must ensure “their technical solutions are effective, interoperable, robust, and reliable as far as this is technically feasible.” However, achieving all these objectives simultaneously with AI watermarking is challenging because enhancing one property often compromises another. For instance, increasing a watermark’s robustness usually requires making more prominent changes to the output, which can degrade content quality. Interoperability and reliability can also pull in opposite directions: the lack of standardisation in AI watermarking technologies means a watermark created by one system may not be readable by another, yet developers continue to experiment with different proprietary schemes in search of one that is reliable.
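
The robustness-versus-quality tension is easy to demonstrate. The sketch below uses a textbook additive (“spread-spectrum”) approach, in which a pseudo-random pattern is added to an image with strength alpha: raising alpha makes the watermark easier to detect after tampering, but measurably lowers image fidelity (here, PSNR). The pattern, alpha values, and detection statistic are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
pattern = rng.choice([-1.0, 1.0], size=(64, 64))  # shared secret pattern

def embed(image: np.ndarray, alpha: float) -> np.ndarray:
    """Add the secret pattern with strength alpha."""
    return np.clip(image + alpha * pattern, 0, 255)

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB; higher means better fidelity."""
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(255.0**2 / mse)

img = rng.integers(0, 256, size=(64, 64)).astype(float)
for alpha in (1, 4, 16):
    marked = embed(img, alpha)
    # Correlation with the secret pattern is the detection statistic:
    # it grows with alpha, while PSNR (image fidelity) falls.
    corr = np.mean((marked - img) * pattern)
    print(f"alpha={alpha:2d}  detection={corr:5.1f}  quality={psnr(img, marked):.1f} dB")
```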

Some policymakers have touted AI watermarking as a universal solution across various media types for labelling content as AI-generated. For example, EU Commissioner for Internal Market Thierry Breton said in a speech, “…the European Parliament, the Council, and the Commission have a common understanding…on the need for transparency for generative artificial intelligence. To be clear, this involves identifying what is created by generative intelligence (images, videos, texts) by adding digital watermarking.” But these policymakers overestimate the capabilities of AI watermarking technologies. For example, one study has shown that it is easy to tamper with or remove watermarks in images, while reliably watermarking text may not even be possible. As even the European Parliamentary Research Service has found, “state-of-the-art AI watermarking techniques display strong technical limitations and drawbacks.”
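
The fragility is simple to reproduce with the hypothetical LSB scheme sketched earlier: adding random plus-or-minus-one noise, itself invisible, scrambles the low-order bits and erases the mark entirely. This toy attack is only an illustration of the general problem the cited studies document.

```python
import numpy as np

def detect_lsb(image: np.ndarray, payload: bytes = b"AI-GENERATED") -> bool:
    """Check for the payload in the image's low-order bits (as above)."""
    expected = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    return bool(np.array_equal(image.flatten()[: expected.size] & 1, expected))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Embed the payload in the low-order bits, as in the earlier sketch.
bits = np.unpackbits(np.frombuffer(b"AI-GENERATED", dtype=np.uint8))
flat = img.flatten()
flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
marked = flat.reshape(img.shape)

# "Attack": add +/-1 noise, itself imperceptible, which scrambles the LSBs.
noise = rng.integers(-1, 2, size=marked.shape)
tampered = np.clip(marked.astype(int) + noise, 0, 255).astype(np.uint8)

print(detect_lsb(marked))    # True: the watermark is present
print(detect_lsb(tampered))  # False: erased by an imperceptible change
```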

Unfortunately, in the rush to pass the AI Act, EU policymakers did not carefully consider the technical complexities and limitations of AI watermarking. As one unnamed European Commission official told a reporter, the watermarking obligations were passed on the expectation that “over time the technology will mature.” But the reality is that nobody knows for sure. Watermarking may get better in the future, or it may prove to be a technological dead end. Either way, the AI Office must still decide how it will implement this law now with today’s technology.

To avoid further missteps, the AI Office should not let policy outpace the technology: it should enforce the AI Act’s watermarking obligations for a given type of media only once the technology is provably secure and robust for that medium. Until then, mandating ineffective watermarks risks confusing consumers and detracting from other efforts to address misinformation and content provenance.
