ITIF
Irish DPA’s Request to Meta Is a Misguided Move

June 27, 2024

Last week, the Irish Data Protection Authority (DPA) requested that Meta pause its plans to train AI models on public posts from its users. The request, instigated by complaints and pushback from the advocacy group NOYB (“none of your business”), is a shortsighted move that threatens to stifle innovation in the development of AI systems.

The Irish DPA’s request overlooks several critical factors. First, public data, by its very nature, is meant to be accessed and utilised broadly. Social media users who choose to make their posts public generally understand and accept that their content can be viewed and repurposed in various ways, similar to how individuals who give public speeches expect their words to be recorded and shared. Facebook’s privacy policy clearly states that anyone can see or use public content.

Meta informed its users in the UK and Europe about how their information would be used for AI training. It sent notifications or emails and gave users the option to opt out, thereby promoting transparency and giving users more control. Despite these measures, NOYB urged Meta to ask its users for explicit consent, a demand that is impractical and reflects a fundamental misunderstanding of how AI models operate. Large language models, in particular, require vast datasets to function effectively; collecting explicit consent from billions of users is not only infeasible but would also severely hamper technological progress. Moreover, using this public data for AI training poses few privacy risks given existing safeguards, such as the General Data Protection Regulation (GDPR) in the EU and other relevant ethical guidelines. AI training incorporates various methods to protect privacy, including anonymization (removing personally identifiable information), data minimization (collecting and using only the necessary data), and data cleaning (ensuring that data is accurate, complete, and relevant by removing incorrect or unnecessary information).
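To make these safeguards concrete, here is a minimal sketch of what anonymization and data minimization can look like in practice. This is an illustrative example only: the field names, regular expressions, and pipeline structure are assumptions for demonstration, not a description of Meta’s actual systems.

```python
import re

# Illustrative PII patterns (assumptions, not production-grade rules).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def anonymize(text):
    """Remove personally identifiable information such as emails and phone numbers."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

def minimize(record):
    """Data minimization: keep only the fields needed for language-model training."""
    return {"text": anonymize(record["text"]), "lang": record.get("lang", "und")}

# Hypothetical public post record; the user_id field is dropped entirely.
post = {
    "text": "Reach me at jane@example.com or +353 1 234 5678!",
    "user_id": 12345,
    "lang": "en",
}
print(minimize(post))
# {'text': 'Reach me at [EMAIL] or [PHONE]!', 'lang': 'en'}
```

Real pipelines use far more sophisticated PII detection (named-entity recognition, allow/deny lists, human review), but the principle is the same: identifying details are stripped out and extraneous fields are discarded before any text reaches a training corpus.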

In the case of Meta, having access to European data is crucial for creating AI models that work effectively across multiple countries. Predominantly English datasets limit AI systems’ ability to accurately understand and process other languages. Meta needs to develop AI models that cater to the diverse cultural and linguistic needs of European communities. Public posts on social media can provide a rich source of up-to-date language use, reflecting current trends and linguistic nuances. This broad linguistic and cultural data is key to building AI that serves all users effectively, ensuring that innovations in AI benefit everyone, not just English-speaking populations.

The GDPR already offers safeguards to ensure responsible data use. Article 6 of the GDPR allows for the processing of publicly available data under the principle of “legitimate interest,” without the need for explicit consent. This principle is crucial for the development and training of AI models, enabling companies to innovate while still protecting user rights. Recital 47 of the GDPR further supports this by stating that the legitimate interests of a data controller can provide a legal basis for processing, provided that these interests do not override the fundamental rights and freedoms of the data subjects.

The Irish DPA’s scrutiny of Meta appears to be a reactive measure to public concerns and calls from activist groups rather than a sound interpretation of existing regulations. The Irish DPA would have far more impact by proactively educating users on the implications of publicly sharing data on social media and on how to use the existing privacy controls on these platforms than by targeting Meta’s AI training program.

The Irish DPA should reconsider its position and support responsible data use that fosters AI innovation. Public posts are inherently meant for broad consumption and utilisation. Allowing companies to harness this data responsibly can drive technological advancements that benefit all users. Rather than stifling progress, regulatory bodies should work alongside tech companies to establish frameworks that promote responsible innovation while educating the public about their data rights. The Irish DPA, along with other regulators in the EU, needs to clarify this point quickly if it hopes to allow AI firms to operate in Europe.
