The European Commission published a draft of its proposed Artificial Intelligence Act (AIA) in April 2021. If adopted, the AIA will be the world’s most restrictive regulation of artificial intelligence (AI) tools. It will not only limit AI development and use in Europe but also impose significant costs on EU businesses and consumers.
The AIA is a horizontal law that will apply to any product that contains AI. It sorts AI into three categories: prohibited, high-risk, and limited-risk. The AIA bans AI for subliminal psychological manipulation, and it prohibits public authorities from using real-time biometric surveillance and building AI-powered social scoring systems. The use of AI in ways that affect the fundamental rights or the safety of users is considered high-risk. This includes using AI in critical infrastructure, educational and vocational training, employment and worker management, essential public and private services (including access to financial services), law enforcement, border control and migration, and the administration of justice. AI in these many contexts generates a broad swath of legal obligations for developers and users. The AIA requires high-risk AI to be:
- trained on datasets that are complete, representative, and free of errors;
- implemented on traceable and auditable systems in a transparent manner;
- subject to human oversight at all times; and
- robust, accurate, and secure.
Operators of high-risk AI systems must comply with numerous technical and procedural requirements before and after they take their AI tools to market. They must:
- Build a quality management system.
- Maintain detailed technical documentation.
- Conduct an assessment to ensure the system conforms to the AIA.
- Register the system in an EU database.
- Monitor the system once it is on the market.
- Update the documentation and conformity assessment if substantial changes are made.
- Collaborate with market surveillance authorities.
The Commission has stated that it wants 75 percent of European businesses to use AI by the end of this decade. Estimates of current AI adoption vary, but this could amount to as much as a tenfold increase from current levels. Because the AIA’s list of high-risk AI uses is long and its fines for non-compliance are high, its lengthy set of requirements will come at a steep price. The point of the AIA is to ensure that high-risk AI in Europe is covered by a detailed and broad set of legal requirements, and those requirements impose significant costs on the developers and deployers of AI.
The European Commission has released an impact assessment of the AIA. Using this study and further analyses, the Center for Data Innovation estimates the financial cost that the AIA will impose on Europe’s economy.
This analysis excludes the additional unquantifiable costs the AIA imposes: deterring investment in European AI startups, slowing the digitization of the economy, and encouraging a brain drain of European entrepreneurs to countries where they can build AI companies with fewer bureaucratic hurdles than they face at home.