WASHINGTON—Policy debates around artificial intelligence (AI) are dividing into two camps: those who want to enable innovation, and those who want to slow or stop it. A new report, “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence,” released today by the Information Technology and Innovation Foundation (ITIF), the world’s leading think tank for science and technology policy, examines the differences between these approaches—the innovation principle and the precautionary principle—and offers policy recommendations to spur the development and adoption of AI, rather than unnecessarily hinder it.
“Some current and proposed policies are too focused on preventing hypothetical worst-case scenarios, and if enacted would slow AI innovation and adoption,” said ITIF Vice President Daniel Castro, co-author of the report. “AI stands to bring economic growth, social progress, and global competitiveness. We should not let speculative concerns hold back the concrete benefits AI can bring.”
The precautionary principle is the idea that if a technological innovation may carry a risk of harm, then those proposing the technology should bear the burden of proving it will not; if they cannot, governments should limit the use of the new technology until it is proven safe. In short, the precautionary principle urges governments to adhere to the cliché that it is “better to be safe than sorry.” In contrast, the innovation principle is the idea that most technological innovations benefit society and pose only modest risks, so government’s role should be to pave the way for widespread innovation while building guardrails, where necessary, to limit harms.
According to the report, policies guided by the precautionary principle treat AI in one of three ways: 1) as too dangerous to allow; 2) as too dangerous unless proven safe; or 3) as too dangerous without strict regulatory interventions.
The report outlines 10 ways that policies based on the precautionary principle undermine AI, including by:
- Making AI development slower and more expensive;
- Producing less innovation;
- Reducing the quality of AI;
- Discouraging AI adoption;
- Slowing economic growth;
- Providing fewer options for consumers;
- Raising prices;
- Resulting in inferior consumer experiences;
- Yielding fewer positive social impacts; and
- Reducing economic competitiveness and national security, particularly for nations competing against China for global influence.
To embrace the innovation principle, the report recommends that governments adopt algorithmic accountability, encourage pilot programs, aid transitioning workers, hold AI to the same performance standards as humans, and address concerns sector by sector.
“America’s embrace of innovation paved the way for its early growth and longstanding leadership in the digital economy,” said ITIF Research Assistant Michael McLaughlin, co-author of the report. “If policymakers want their nations to achieve the full benefits of AI, they shouldn’t let fear of new technology limit, delay, and constrain its progress.”