Comments to the Senate Subcommittee on Consumer Protection, Product Safety and Data Security on “The Need for Transparency in AI”
The Center for Data Innovation has submitted written testimony to the Senate Subcommittee on Consumer Protection, Product Safety and Data Security on crafting policies to increase transparency in artificial intelligence (AI) technologies for consumers. In this statement, we offer four considerations policymakers should keep in mind to ensure consumers are protected from harm:
- While policymakers should encourage companies to adopt the NIST risk management framework, they should recognize that it is not a silver bullet for trustworthy AI. There are a variety of technical and procedural controls companies can employ to mitigate harm, and policymakers should encourage companies to explore the full gamut of mechanisms to find those most relevant to their context.
- Because increasing AI transparency can make some systems less accurate and effective, policymakers should fund research to better understand this tradeoff and evaluate policies for transparency against the impact on system accuracy.
- Policymakers should hold AI systems to the same standard as human decisions, which are not always transparent.
- Policymakers should direct NIST to support work on content provenance mechanisms, which are techniques that help users establish the origin and source of content (both AI-generated and human-generated), rather than create policies that simply require systems to disclose when output is AI-generated.
AI offers significant societal and economic benefits in a wide variety of sectors. The biggest risk to consumers is that the myriad opportunities AI offers will not be translated into all the areas where they can make a positive difference in people’s lives. However, there are several other areas of risk to consumers from businesses using AI. One is the creation of unsafe AI products and services, such as a company putting on the market an AI chatbot that advises users to do dangerous things. Another is the use of AI to deceive or manipulate unsuspecting consumers, such as a company using AI to create and spread fake reviews of its goods or services, a problem ITIF’s Center for Data Innovation explores in its 2022 report “How Policymakers Can Thwart the Rise of Fake Reviews.” A third is the use of AI to commit crimes that harm consumers, such as using AI to support cyberattacks that steal their sensitive information. While there are other applications of AI that interact with consumers, such as the use of AI to make lending or credit decisions or AI used in employment decisions, these fall outside the scope of the subcommittee, and we therefore keep our comments focused on those that are within it.