
The EU Should Resist Calls to Regulate AI Under the DMA
A growing number of European policymakers are calling for the EU to extend the Digital Markets Act (DMA) to new industries. Most notably, and despite the DMA’s already mixed results, France, Germany, and the Netherlands have issued a joint statement advocating for its expansion to AI and cloud services. However, broadening the DMA’s scope would impose unnecessary constraints on AI firms, deter investment, and weaken the West’s position in the global AI race against China. This push also contradicts the messaging from the recent Paris AI Summit, where leading European politicians made highly publicized statements warning that excessive regulation can stifle innovation. In this context, European policymakers should reconsider their rush to bring AI and cloud under the DMA’s heavy-handed framework.
Proposals seeking to classify AI-driven services as Core Platform Services under Article 2(2) of the DMA would subject AI providers to restrictive obligations designed for entrenched gatekeepers. In practice, this move would constrain how companies deploy their AI services. For instance, the DMA’s data misappropriation rules could limit AI model development by restricting firms from using proprietary data for training. Likewise, interoperability mandates might force certain AI developers to provide open access to their models, eroding competitive differentiation and discouraging innovation. Meanwhile, self-preferencing restrictions could hinder how tech platforms integrate AI-driven recommendations.
Before extending the DMA to AI, the EU needs reason to believe that doing so would do more good than harm. Yet, as ITIF has repeatedly highlighted, the DMA has not delivered resounding benefits to consumers. Instead, European internet users are already experiencing, for example, a less integrated Google experience and intrusive choice screens on Apple and Android devices. Additionally, the DMA is being used to grant third parties free access to costly interoperability services, threatening the security and simplicity of Apple’s ecosystem.
Moreover, AI is a nascent industry with no demonstrated market failure that would warrant DMA intervention. The AI industry is highly competitive, with numerous companies vying for market share worldwide. Among them, rapidly scaling European AI startups such as Aleph Alpha in Germany and Mistral AI in France exemplify the market’s dynamism. The DMA’s rigid, one-size-fits-all approach is also ill-suited for AI, an industry that thrives on adaptability and rapid iteration. Indeed, European leaders cannot proclaim their ambition to win the AI race, as they did at the Paris AI Summit, and in the same breath push for the world’s most restrictive AI regulations.
In addition, even where market failures exist, competition enforcement should be tried before new regulation is imposed. Indeed, unlike in industries such as search and smartphones, where competition law has been enforced for years, no competition cases have even been brought against AI or cloud services. Before introducing new ex ante regulations, the EU should at least attempt to apply existing competition rules. To be clear, ex post enforcement should not serve as a free pass to aggressively prosecute AI companies, especially with large fines. Rather, prematurely forgoing flexible ex post enforcement in favor of DMA regulation treats a developing sector as if it were a failed market beyond the reach of competition law enforcement.
Furthermore, the EU’s AI Act, which entered into force in August 2024, introduced its first prohibitions on AI use and deployment in early 2025, with full enforcement slated to phase in over the next two years. Notably, Article 29 of the AI Act already includes a provision preventing the use of AI for “unfair commercial practices.” Layering DMA restrictions on top of these existing rules would create unnecessary and excessive regulatory overlap, complicating compliance for companies already navigating Europe’s numerous digital regulation regimes. Even Henna Virkkunen, Europe’s Commissioner for Technology Sovereignty, Security, and Democracy, has acknowledged the issue, stating in reference to the AI industry, “We also have to look at our rules, that we have too much overlapping regulation.” In fact, the European Commission recently moved in the right direction by withdrawing the proposed AI Liability Directive, implicitly recognizing that piling on additional AI regulation would be counterproductive.
Rather than imposing more rules, Brussels should address competition concerns in AI with a light-touch approach. The first and most crucial step is to wait and observe how AI markets develop; in dynamic, disruptive industries, competition left to its own devices often solves many problems. If and when concerns about anticompetitive conduct do emerge, the EU should apply its robust competition law regime. After all, the European Commission applied its competition law tools in the digital sector for fifteen years before deeming them ineffective; it should not preemptively resort to regulating AI before those alternatives have been thoroughly attempted.
Europe has already fallen behind in digital innovation; it cannot afford to make the same mistake with AI. In the fiercely competitive, fast-moving AI race, heavy-handed regulations like the DMA could cripple the EU’s nascent AI sector and hand a decisive advantage to Chinese startups like DeepSeek and Manus. Meanwhile, the Trump administration has pledged to combat laws like the DMA, which it groups with undue “Digital Service Taxes.” Expanding the DMA to AI would only escalate tensions, especially given the administration’s commitment to strengthening U.S. leadership in AI. With U.S.-EU relations already strained, the European Commission must act prudently in this critical area to drive innovation, preserve transatlantic ties, and ensure the West’s leadership in AI, particularly in the face of China’s growing influence.