
Response to the Public Consultation for the European Commission’s White Paper on a European Approach to Artificial Intelligence

June 12, 2020

The white paper’s introduction mentions the fierce global competition for AI advantage, one that it wants to be based on European values, yet it fails to recognize that a new restrictive conformity assessment framework is likely to further undermine the EU’s position. Europe is already struggling in this race. As the Center for Data Innovation’s report Who Is Winning the AI Race: China, the EU or the United States? shows, the United States leads the global race for AI, with China in second place and the EU lagging behind. At the heart of this race is the ability of people and firms to engage in data-driven innovation. Yet, similar to the General Data Protection Regulation (GDPR), the proposed AI conformity assessment framework imposes constraints on the use of new AI-based technologies that will be developed in significant part by non-Europeans, rather than focusing on supporting the actual development of data-driven innovation. In contrast to Europe, China has created a vast, protected domestic market and extensive government support mechanisms, including a concerted effort to help its tech firms and their products and standards go global. China’s efforts to influence global standards build on its firms’ ability to develop these new technologies, not the other way around. The same is true for the United States.

The white paper’s central problem is twofold. First, the EC is rushing to apply the precautionary principle (the idea that innovations must be proven safe before they are deployed) based on the widespread but incorrect beliefs that there is something inherently suspect about the technology, that organizations will have strong incentives to use the technology in ways that harm individuals, and that existing laws are insufficient to effectively oversee its use. Indeed, fears that algorithms could exhibit and exacerbate human bias, including by facilitating discrimination and exploitation, have dominated discussions about how policymakers and regulators should treat algorithmic decision-making. But the likelihood of these risks coming to fruition is often overstated, as advocates incorrectly assume market forces would not prevent early errors or flawed systems from reaching widespread deployment.

Moreover, it is still early days: policymakers, academics, and experts from around the world are debating the best approach to the governance of AI, and many proposed solutions are a poor fit, inadequate, or ineffective. There may well be a role for some government-designed or government-approved process to test certain applications of AI in various sectors. But whether conformity assessments can work for AI when they rely on the same legal system and testing infrastructure the EU applies to the product safety testing of physical goods, like toys, raises significant questions of practicality, viability, and technical application. For all of these reasons, it is a mistake for the EC to rush ahead and enact a framework without much more research and extensive, proactive international cooperation.

This raises the second major problem with the white paper: The EC does not seem inclined to recognize that AI creates interdependencies with other countries. This should make cooperation with broadly like-minded partners a necessary prerequisite (not an afterthought or minor component) in developing a regulatory framework that addresses shared policy goals while supporting each country’s firms’ ability to innovate and trade as part of global production networks and value chains (both of which are increasingly services- and digital-intensive). The white paper states that the EU “will continue to cooperate with like-minded countries, but also with global players, on AI, based on an approach based on EU rules and values (e.g. supporting upward regulatory convergence, accessing key resources including data, and creating a level playing field).” Yet this commitment is hardly reflected in either the white paper or in recent policies.

The white paper states that the EC “will closely monitor the policies of third countries that limit data flows and will address undue restrictions in bilateral trade negotiations and through action in the context of the World Trade Organization (WTO).” Yet, even if well-intentioned, an ex-ante conformity assessment framework would impose just such a restriction.

The proposal, whose design is presumably founded on the EU’s New Legislative Framework and its approach to standardization (outlined in Regulation No. 1025/2012), reinforces the EU’s regional, rather than global, approach to standards and conformity assessment: it advantages the EU’s own intra-regional regulatory standards and a select, designated group of European standards bodies, leaving a secondary, more limited and onerous lane for firms and products that use a body or standard from outside Europe. In addition, for those AI products that require third-party testing, the EU legal framework limits this to designated bodies (“notified bodies”) located in the territory of an EU member state. This localization requirement for testing bodies (i.e., the non-recognition of testing reports from international conformity assessment bodies) is precisely the kind of localization barrier to trade that the EC advocates against in forums like the WTO. Applying it to new technology stands to exacerbate its negative impact on trade and interoperability.

Such Europe-specific conformity testing for data-driven applications represents a mechanism for localization and for discrimination between local and foreign firms and their digital products. For example, in the context of foreign AI developed by firms in authoritarian countries (presumably China and Russia), Commissioner for the Internal Market Thierry Breton said manufacturers could be forced to “retrain algorithms locally in Europe with European data,” adding that “We could be ready to do this if we believe it is appropriate for our needs and our security.” This is a slippery slope to rush down. The EC should also be aware that its own firms will likely be affected in the future as other countries copy and repurpose the EU’s rushed approach, enacting their own opaque and arbitrary conformity assessment frameworks for AI. Ultimately, given the central and growing role of AI, the spread of these frameworks will act as a barrier to the development of a more productive and innovative global digital economy.

The EC is obviously within its rights to determine what regulations it wants to enact in pursuit of its legitimate policy goals. However, as with all domestic regulation and trade issues, these regulations must be proportionate and nondiscriminatory so that they do not act as a barrier to trade. The conformity testing framework will almost certainly reduce trade at both the extensive margin (the decision by exporters to enter a market) and the intensive margin (the quantitative decision of how much to export). Trade policy research shows how even slight differences and incompatibilities between regulations across jurisdictions can impede trade in goods and services. The time and money firms invest in complying with differential testing processes can be significant, especially for small and medium-sized firms. Differential regulatory requirements have proven costly for traditional trade in physical goods. Expanding them to digital economic activity (where the distinction between goods, services, and even processes is unclear in the EU’s proposal) creates a whole new realm of potential trade disputes, given that it involves far more dynamic and complex technologies and assessments.

The proposed institutional arrangement for administering this framework is equally problematic: a new horizontal regulatory framework will be laid on top of the respective sectoral regulations and enforcement agencies at the EU level and in each member state. Creating or designating completely new agencies or offices, competencies, and coordination mechanisms is costly and complicated. It also presumes the competency and appropriateness of notified bodies (many of which are private sector entities that have been formally designated by competent member state authorities and the EC) to carry out the assessment of high-risk applications of AI, however this category is ultimately defined and applied. This is exactly the issue that arose in the context of the Medical Devices Regulation/In-vitro Diagnostics Regulation (MDR/IVDR) Roadmap (examined in a case study below), where there are not only insufficient standards but also insufficient EU-based testing capacity. In this way, the white paper fails to learn key lessons from the region’s recent experience in enacting similar new regulatory frameworks.

The EU’s Executive Vice-President Margrethe Vestager has stated that an assessment will be made in the future as to whether this approach is effective. The EC would be better served by fundamentally reconsidering its conformity testing-based approach to regulating AI and instead working with like-minded partners on the best approach to addressing shared concerns about AI in high-risk sectors. If it does proceed with a conformity assessment framework, the EC should at least consider the international impact from the start, along with details about how it will build mechanisms for regulatory cooperation and interoperability (whether government-to-government or through global, industry-driven, voluntary consensus standards).

Unfortunately, in this white paper the EU disregards careful policy development in rushing to seize what it thinks will be a first-mover regulatory advantage on digital issues, to the detriment of its own firms and economies, international trade, and the global economy. But Europe should not focus on being first with new digital rules; it should focus on creating and implementing rules that allow AI-driven businesses and innovations to flourish in Europe and in other like-minded nations that embrace the principles of rules-governed, enterprise-led, market-based trade. European policies should be designed to enable and promote healthy and robust competition in digital industries, as doing so will have a powerful effect on European productivity and economic growth. The rush to regulate and implement, without waiting for international discussions on AI and standards to evolve, indicates that the EU is willing to use AI regulation as a protectionist and expansionist strategy rather than building bridges between common approaches that each address shared public policy interests. Following on from previous regulations such as the GDPR, the EU is determined to set the standard for what “good” AI regulation is, but this strategy risks failing to achieve its actual objective while impeding innovation, competitiveness, and trade for the EU and its partners.

This submission analyzes a number of these issues in detail and then provides recommendations, as follows:

  1. It explains how the regulation of AI does not fit well with ex-ante conformity assessment frameworks, and how using existing conformity assessment frameworks (for cybersecurity and the marketing of products) as a model for AI is neither desirable nor fair. To substantiate this, it includes a case study of how the EU’s recent experience implementing the MDR/IVDR roadmap provides many relevant lessons as the EU contemplates a conformity assessment framework for AI.
  2. It looks at how ex-ante conformity tests for AI would become a new non-tariff barrier to digital trade.
  3. It analyzes how limited access to conformity certification is a barrier to market entry, one the EU and the United States have already had to deal with in other sectors.
  4. It looks at how conformity assessments raise the prospect of mandatory source code disclosure, which is another potential barrier to trade.
  5. It provides three main sets of recommendations: the first focuses on core issues for the EC to consider as the policy debate moves forward; the second on steps to build a truly cooperative and internationally accessible approach to AI regulation; and the third on the need for international cooperation with trading partners that share the EU’s values in developing standards for new and emerging technology.