Ten Principles for Regulation That Does Not Harm AI Innovation
Concerns about artificial intelligence have prompted policymakers to propose a variety of laws and regulations to create “responsible AI.” Unfortunately, many proposals would likely harm AI innovation because few policymakers have considered what “responsible regulation of AI” entails.
Artificial intelligence (AI) has the potential to create many significant economic and social benefits. However, concerns about the technology have prompted policymakers to propose a variety of laws and regulations to create “responsible AI.” Unfortunately, many proposals would likely harm AI innovation because few policymakers have considered what “responsible regulation of AI” entails. This report offers ten principles to guide policymakers in crafting and evaluating regulatory proposals for AI that do not harm innovation.
As AI continues to improve, opportunities to use the technology to increase productivity and quality of life will flourish across many sectors of the economy, including health care, education, and transportation. In response, policymakers have proposed a variety of regulations to address concerns that this coming wave of AI systems may cause harm. Minimizing potential harm from AI systems is an important goal, but so too is maximizing their potential benefits. Implementing many of these proposals, especially in their current form, would likely have serious consequences, because poorly crafted regulations may delay or deny many of AI’s potential benefits, including opportunities to use the technology to save lives and to improve living standards. Policymakers want AI systems that do not cause harm, but they have not yet mastered the art of crafting regulations that do not harm AI innovation. If policymakers decide that regulation is necessary, then to avoid slowing AI innovation and adoption, they should follow these ten principles:
- Avoid pro-human biases.
- Regulate performance, not process.
- Regulate sectors, not technologies.
- Avoid AI myopia.
- Define AI precisely.
- Enforce existing rules.
- Ensure benefits outweigh costs.
- Optimize regulations.
- Treat firms equally.
- Seek expertise.