Testimony Before the DC Committee on Government Operations and Facilities on the “Stop Discrimination by Algorithms Act of 2021”

The Council of the District of Columbia’s Committee on Government Operations and Facilities held a public hearing on September 22, 2022, on the proposed “Stop Discrimination by Algorithms Act of 2021” (B24-0558), which was introduced in December 2021. ITIF Vice President Daniel Castro, director of the Center for Data Innovation, testified:

Chairman White and members of the Committee, thank you for the opportunity to share feedback on the Stop Discrimination by Algorithms Act of 2021. My name is Daniel Castro, and I am director of the Center for Data Innovation, a non-profit think tank studying the intersection of data, technology, and public policy.

While I wholeheartedly agree that policymakers should take steps to reduce discrimination in society, they should do so directly by enforcing and strengthening existing civil rights laws. The proposed law attempts to reduce discrimination indirectly through a broad set of restrictions on the use of algorithmic decision-making. These restrictions risk stifling the use of innovative technologies, particularly the development and adoption of artificial intelligence (AI), hurting both businesses and consumers.

Let me discuss a few of the Act’s key provisions and the challenges they present.

First, the Act would prohibit organizations from using algorithms to discriminate against individuals in certain situations. While well-intentioned, this provision is unnecessary: policymakers do not need to enact AI-specific anti-discrimination laws because existing laws already prohibit discrimination. AI is not a “get-out-of-jail-free” card; using AI does not exempt organizations from adhering to these laws.

Rather than pursue duplicative laws, policymakers should review and clarify how existing anti-discrimination laws apply to AI to ensure organizations comply with both the letter and the spirit of these laws. At the local level, policymakers could clarify how the DC Human Rights Act applies; at the federal level, this could include laws like Title VII of the Civil Rights Act and the Americans with Disabilities Act.

Moreover, if the purpose of the legislation is to prevent discrimination, it should remain narrowly focused on discriminatory actions with adverse effects on individuals rather than broadly regulating the use of AI for advertising and marketing. Regulating so broadly would likely have unintended consequences, such as restricting targeted advertising for coding boot camps for women or for faith-based colleges.

Second, the Act would require organizations to disclose how they use personal information in algorithmic decisions. While transparency can help consumers make more informed decisions, consumers should receive the same level of transparency for automated decisions as for non-automated decisions. If policymakers believe that organizations are making decisions about individuals without sufficient notice, then they should apply disclosure requirements to all organizations regardless of whether they are using a computer algorithm or a human process to make decisions.

In addition, the proposed law’s notification requirement for adverse actions is not limited to decisions based on protected traits, which means a wide array of automated decisions could fall under this requirement. For example, a credit card issuer denying a charge that appears fraudulent or an employer rejecting an applicant who does not hold a required credential could each trigger this notification obligation.

Third, the Act would create a requirement for organizations to audit their algorithms for discriminatory impacts and report this information to the attorney general. This provision places an extraordinarily burdensome auditing responsibility not only on organizations using algorithms for decision-making but also on service providers who may offer such functionality to others. It would be inappropriate to require service providers to report much of this information because they will not necessarily have details about how a particular customer uses their service. Moreover, many businesses and service providers are struggling to comply with the algorithm auditing requirements in New York City, which apply only to AI systems used in hiring. The audit requirements in the proposed Act would cover a much broader set of activities and present even more challenges.

Fourth, the Act would authorize both the attorney general and individuals to bring civil actions against anyone in violation of the law. Creating a private right of action is particularly problematic because it would likely open a floodgate of frivolous lawsuits. Other jurisdictions that have created similar private rights of action, such as Illinois with its Biometric Information Privacy Act, have imposed substantial costs on organizations, costs that are eventually passed on to consumers.

Overall, the legislation is well-intentioned but ultimately misguided. By imposing a different anti-discrimination standard on organizations that use AI, creating additional compliance burdens, and exposing these organizations to more liability, this law would effectively discourage organizations from using AI. Given that AI provides many opportunities for organizations to reduce costs through automation, discouraging its use will keep consumer prices higher than they would otherwise be. And given that AI can help organizations improve the accuracy of their decisions and reduce human bias in decision-making, this law will likely result in more consumers being denied access to the very “important life opportunities” policymakers are trying to protect.

In today’s digital economy, organizations increasingly use algorithms to automate certain decisions, such as whether to extend credit to a loan applicant or which job applicants appear most qualified for a position. It is understandable that policymakers want to prevent discrimination in the digital economy, but the best way to achieve that goal is to strengthen enforcement of anti-discrimination laws, not to create a regulatory environment that discriminates against the use of algorithms. Moreover, AI offers many opportunities to detect and eliminate human biases, and policymakers should look for more opportunities to use these tools rather than unfairly stigmatizing their use.

Thank you again for the opportunity to share feedback on this legislation. I am happy to answer any questions.
