Watermarking and the Future of Trust in Generative AI

Generative models have advanced to the point where their output is nearly indistinguishable from human-created content. As these models continue to evolve, detecting realistic fake content, also known as deepfakes, will only become more challenging, raising significant concerns about the authenticity and trustworthiness of digital media.
To mitigate these risks, recent regulations, such as the EU's AI Act, advocate for watermarking techniques that can reliably distinguish synthetic from authentic content. In response, both industry and academia are actively developing watermarking methods that embed a signal within the generation process itself. However, it remains unclear whether current watermarking techniques are fit for purpose and whether they meet key technical and societal requirements for real-world deployment.
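To illustrate the idea of embedding a watermark in the generation process, here is a minimal toy sketch in the spirit of "green-list" token watermarking schemes for language models. Everything here is an illustrative assumption, not the method discussed in the talk: a toy uniform "language model" biases its sampling toward a pseudo-random, key-derived subset of the vocabulary, and a detector who knows the scheme measures how often that subset was chosen.

```python
import hashlib
import random


def green_list(prev_token: str, vocab: list, fraction: float = 0.5) -> set:
    """Derive a reproducible 'green' subset of the vocabulary from the
    previous token. In a real scheme the hash would also involve a secret
    key; here it is unkeyed purely for illustration."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(vocab) * fraction)
    return set(rng.sample(vocab, k))


def generate(vocab: list, length: int = 200, bias: float = 0.9, seed: int = 0) -> list:
    """Toy 'language model': sample uniformly, but with probability `bias`
    restrict sampling to the green list of the previous token. Setting
    bias=0.0 yields unwatermarked text."""
    rng = random.Random(seed)
    text = [rng.choice(vocab)]
    for _ in range(length - 1):
        greens = sorted(green_list(text[-1], vocab))
        pool = greens if rng.random() < bias else vocab
        text.append(rng.choice(pool))
    return text


def green_fraction(text: list, vocab: list) -> float:
    """Detector: fraction of tokens that fall in the green list of their
    predecessor. Near 0.5 for unwatermarked text, much higher if watermarked."""
    hits = sum(1 for prev, tok in zip(text, text[1:])
               if tok in green_list(prev, vocab))
    return hits / (len(text) - 1)
```

Because the partition is recomputable from each preceding token, detection needs no access to the model, only to the hashing scheme; real deployments key the hash with a secret and use a statistical test (e.g. a z-score on the green fraction) rather than a raw threshold.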
At the University of Edinburgh's GAIL workshop, Ayesha Bhatti speaks about watermarking for generative AI and policy approaches to building trust online in the AI age.