
Picking the Right Policy Solutions for AI Concerns

May 20, 2024

Some concerns are legitimate, but others are not. Some require immediate regulatory responses, but many do not. And a few require regulations addressing AI specifically, but most do not.

Introduction

Policymakers find themselves amid a chorus of calls demanding that they act swiftly to address risks from artificial intelligence (AI). Concerns span a spectrum of social and economic issues, from AI displacing workers and fueling misinformation to threatening privacy, fundamental rights, and even human civilization. Some concerns are legitimate, but others are not. Some require immediate regulatory responses, but many do not. And a few require regulations addressing AI specifically, but most do not. Discerning which concerns merit responses and what types of policy action they warrant is necessary to craft targeted, impactful, and effective policies to address the real challenges AI poses while avoiding unnecessary regulatory burdens that will stifle innovation.

This report examines 28 of the prevailing concerns about AI, grouped into eight sections: privacy, workforce, society, consumers, markets, catastrophic scenarios, intellectual property, and safety and security. For each concern, it describes the nature of the issue, whether and how it is unique to AI, and what kind of policy response, if any, is appropriate. To be sure, additional concerns could have been included, and others will be raised in the future, but based on a review of the literature on AI and the growing corpus of AI regulatory actions, these are the major concerns policymakers must contend with. Each could warrant a report of its own, but the goal here is to distill the essence of each concern and offer a pragmatic, clear-eyed response.

For each issue, we categorize the appropriate policy response as follows:

Pursue Regulation That Is…

AI-specific: Some concerns about AI are best addressed by enacting or updating regulation that specifically targets AI systems. These regulations may prohibit certain types of AI systems, create or expand regulatory oversight of AI systems, or impose obligations on the developers and operators of AI systems, such as requiring audits, information disclosures, or impact assessments.

General: Some concerns about AI are best addressed by enacting or updating regulation that does not specifically target AI but instead creates broad legal frameworks that apply across various industries and sectors. Examples of these regulations include data privacy laws, political advertising laws, and revenge porn laws.

Pursue Nonregulatory Policies That Are…

AI-specific: Some concerns about AI are best addressed by implementing nonregulatory policies that target AI. Examples of these policies include funding AI research and development or supporting the development and use of AI-specific industry standards.

General: Some concerns about AI are best addressed by implementing nonregulatory policies that do not target AI but instead focus on the broader technological and societal context in which AI systems operate. Examples of these policies include job dislocation policies to mitigate the risks of a more turbulent labor market or policies to improve federal data quality.

No Policy Needed

Some concerns are best addressed by existing policies or by allowing society and markets to adapt over time. Policymakers do not need to implement new regulatory or nonregulatory policies at this time.

Contents

Privacy

  • 1.1. AI may expose PII in a data breach.
  • 1.2. AI may reveal PII included in training data.
  • 1.3. AI may enable government surveillance.
  • 1.4. AI may enable workplace surveillance.
  • 1.5. AI may infer sensitive information.
  • 1.6. AI may help bad actors harass and publicly shame individuals.

Workforce

  • 2.1. AI may cause mass unemployment.
  • 2.2. AI may dislocate blue collar workers.
  • 2.3. AI may dislocate white collar workers.

Society

  • 3.1. AI may have political biases.
  • 3.2. AI may fuel deepfakes in elections.
  • 3.3. AI may manipulate voters.
  • 3.4. AI may fuel unhealthy personal attachments.
  • 3.5. AI may perpetuate discrimination.
  • 3.6. AI may make harmful decisions.

Consumers

  • 4.1. AI may exacerbate surveillance capitalism.

Markets

  • 5.1. AI may enable firms with key inputs to control the market.
  • 5.2. AI may reinforce tech monopolies.

Catastrophic scenarios

  • 6.1. AI may make it easier to build bioweapons.
  • 6.2. AI may create novel biothreats.
  • 6.3. AI may become God-like and “superintelligent.”
  • 6.4. AI may cause energy use to spiral out of control.

Intellectual property

  • 7.1. AI may unlawfully train on copyrighted content.
  • 7.2. AI may create infringing content.
  • 7.3. AI may infringe on publicity rights.

Safety and Security

  • 8.1. AI may enable fraud and identity theft.
  • 8.2. AI may enable cyberattacks.
  • 8.3. AI may create safety risks.

Overview of Policy Needs for AI Concerns

Concerns that warrant AI-specific regulations:

  • 1.3. AI may enable government surveillance.
  • 3.6. AI may make harmful decisions.
  • 8.1. AI may enable fraud and identity theft.
  • 8.3. AI may create safety risks.

Concerns that warrant general regulations:

  • 1.1. AI may expose PII in a data breach.
  • 1.5. AI may infer sensitive information.
  • 1.6. AI may help bad actors harass and publicly shame individuals.
  • 3.2. AI may fuel deepfakes in elections.
  • 6.1. AI may make it easier to build bioweapons.
  • 7.3. AI may infringe on publicity rights.

Concerns that warrant AI-specific nonregulatory policies:

  • 1.4. AI may enable workplace surveillance.
  • 3.3. AI may manipulate voters.
  • 3.5. AI may perpetuate discrimination.
  • 6.2. AI may create novel biothreats.
  • 6.3. AI may become God-like and “superintelligent.”
  • 6.4. AI may cause energy use to spiral out of control.
  • 7.1. AI may unlawfully train on copyrighted content.
  • 8.2. AI may enable cyberattacks.

Concerns that warrant general nonregulatory policies:

  • 1.2. AI may reveal PII included in training data.
  • 2.2. AI may dislocate blue collar workers.
  • 2.3. AI may dislocate white collar workers.
  • 3.1. AI may have political biases.
  • 7.2. AI may create infringing content.

Concerns that do not warrant new policies:

  • 2.1. AI may cause mass unemployment.
  • 3.4. AI may fuel unhealthy personal attachments.
  • 4.1. AI may exacerbate surveillance capitalism.
  • 5.1. AI may enable firms with key inputs to control the market.
  • 5.2. AI may reinforce tech monopolies.

