Publications: Justyna Lisinska
October 18, 2024
Audio Watermarking Won’t Solve the Real Dangers of AI Voice Manipulation
Audio watermarking won’t mitigate the risks associated with AI-generated voice cloning. The challenge isn’t only technical but also social—how people consume and trust media.
September 25, 2024
Draghi’s Competitiveness Report Shows Why the EU Needs a Pro-Innovation Approach Towards AI
The EU should adopt a more flexible, innovation-driven approach to AI regulation to boost global competitiveness, according to the European Competitiveness Report by Mario Draghi.
September 18, 2024
Why Watermarking Text Fails to Stop Misinformation and Plagiarism
The rise of generative AI tools, such as ChatGPT, has transformed text creation but also raised concerns about misuse, including plagiarism and misinformation. While some propose watermarking to label AI-generated content, this approach is technically flawed and ineffective, and it fails to address the root causes of these problems. More comprehensive strategies are needed to combat misinformation and promote academic integrity.
August 15, 2024
Watermarking in Images Will Not Solve AI-Generated Content Abuse
Advances in generative AI make it easy to create realistic digital images, but they also raise concerns about misuse, such as spreading misinformation and copyright infringement. While policymakers are considering watermarking as a solution, this approach has significant limitations and will not fully address these broader issues. Efforts should instead focus on media literacy and better methods for tracing content origin.
August 8, 2024
Blaming Social Media for Political Violence in the UK Won’t Stop Future Riots
In response to the Southport stabbing and ensuing UK riots, many have blamed social media for spreading misinformation that incited violence. However, this focus on social media ignores deeper societal issues and deflects from the government's own shortcomings.
July 9, 2024
The AI Act’s AI Watermarking Requirement Is a Misstep in the Quest for Transparency
The AI Act requires providers of AI systems to mark their output as AI-generated content. This labelling requirement is meant to allow users to detect when they are interacting with content generated by AI systems to address concerns like deepfakes and misinformation. Unfortunately, implementing one of the AI Act’s suggested methods for meeting this requirement — watermarking — may not be feasible or effective for some types of media.
June 27, 2024
Irish DPA’s Request to Meta Is a Misguided Move
Last week, the Irish Data Protection Authority (DPA) requested that Meta pause its plans to train AI on public posts from its users. This request, instigated by complaints and pushback from the advocacy group NOYB ("none of your business"), is a shortsighted move that threatens to stifle innovation in developing AI systems.