Justyna Lisinska
Justyna Lisinska was a policy analyst at the Center for Data Innovation. Previously, she worked as a policy research fellow at King's College London, where she developed a policy programme for the UK's largest project on autonomous systems. She also has experience working in government and with government officials. Justyna holds a Ph.D. in Web Science from the University of Southampton.
Recent Publications
Why AI-Generated Content Labeling Mandates Fall Short
Mandatory labeling of AI-generated content, particularly through watermarking, is neither a reasonable nor an effective solution to the issues policymakers seek to address. Rather than singling out AI-generated content, policymakers should prioritize building trust within the digital ecosystem as a whole.
Digital Transformation Should Be at the Heart of the UK’s Economic Agenda
The UK stands at a critical moment: embracing digital transformation, AI, and data innovation is not just an opportunity but a necessity. By implementing forward-thinking policies, the UK can not only drive economic growth but also position itself as a global leader in emerging technologies.
Key Facts Missing in the Creative Community’s Statement on Unlicensed AI Training
Thousands of creators signed a statement opposing the unlicensed use of creative works in AI training, calling it a threat to their livelihoods. But this overlooks that AI training uses public data within established norms, that creators resist adapting to change, and that copyright already protects against unauthorised use.
Audio Watermarking Won’t Solve the Real Dangers of AI Voice Manipulation
Audio watermarking won’t mitigate the risks associated with AI-generated voice cloning. The challenge isn’t only technical but also social—how people consume and trust media.
Draghi’s Competitiveness Report Shows Why the EU Needs a Pro-Innovation Approach Towards AI
Mario Draghi's European Competitiveness Report underscores why the EU needs a more flexible, innovation-driven approach to AI regulation to remain globally competitive.
Why Watermarking Text Fails to Stop Misinformation and Plagiarism
The rise of generative AI tools such as ChatGPT has transformed text creation but also raises concerns about misuse, including plagiarism and misinformation. While some propose watermarking to label AI-generated content, this approach is technically flawed and fails to address the root causes of these issues. More comprehensive strategies are needed to combat misinformation and promote academic integrity.
Watermarking in Images Will Not Solve AI-Generated Content Abuse
Advances in generative AI make it easy to create realistic digital images, but they also raise concerns about misuse, such as the spread of misinformation and copyright infringement. While policymakers are considering watermarking as a solution, this approach has significant limitations and won't fully address these broader issues. Efforts should instead focus on media literacy and better methods for tracing content origin.
Blaming Social Media for Political Violence in the UK Won’t Stop Future Riots
In response to the Southport stabbings and the ensuing UK riots, many have blamed social media for spreading misinformation that incited violence. However, this focus on social media ignores deeper societal issues and deflects from the government's own shortcomings.
The AI Act’s AI Watermarking Requirement Is a Misstep in the Quest for Transparency
The AI Act requires providers of AI systems to mark their output as AI-generated content. This labelling requirement is meant to let users detect when they are interacting with AI-generated content, addressing concerns such as deepfakes and misinformation. Unfortunately, watermarking, one of the AI Act's suggested methods for meeting this requirement, may not be feasible or effective for some types of media.
Irish DPA’s Request to Meta Is a Misguided Move
Last week, the Irish Data Protection Authority (DPA) requested that Meta pause its plans to train AI on its users' public posts. This request, instigated by complaints and pushback from the advocacy group NOYB ("none of your business"), is a shortsighted move that threatens to stifle innovation in developing AI systems.