Virginia’s AI Bill Is a Misfire

March 24, 2025

Virginia Governor Glenn Youngkin will decide on Monday, March 24, 2025, whether to sign or veto the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094). The bill is meant to mitigate AI bias, but its rules are inconsistent, its enforcement is unworkable, and it mistakes equal treatment for fairness. Signing it would be a mistake and add to a growing patchwork of state laws that will create costly confusion without improving outcomes.

HB 2094 would regulate “high-risk” AI systems, defined as those that make or significantly influence consequential decisions about people’s lives, such as in housing, employment, education, healthcare, lending, parole, and legal services. It places requirements on both developers, who must document the system’s intended use, risks, and performance, and deployers, who must create risk management policies, conduct detailed impact assessments, and notify individuals when AI is used in decisions about them. If a decision enabled by AI harms someone, deployers must offer an explanation and a chance to appeal. The Attorney General is responsible for enforcing these rules, with civil penalties for violations.

The fundamental problem is the misconception the bill is built on. The bill aims to ensure organizations treat everyone the same, but it never asks whether that treatment is any good. Take a landlord using AI to screen tenants. Under the proposed law, the AI system just needs to avoid discriminating between, say, Black and white applicants; it doesn't have to make accurate or high-quality decisions for either group. In other words, if the system unfairly denies rental opportunities, it meets the law's standard as long as it does so in the same flawed way for everyone. That might technically count as equality, but it isn't fairness for anyone.
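To make the point concrete, here is a minimal, hypothetical sketch in Python. The group labels and every figure are invented for illustration; nothing here comes from the bill. It shows how a screening model can pass an equal-treatment test while failing every applicant:

```python
# Hypothetical illustration: a screening model with identical denial and
# error rates across two groups passes an "equal treatment" check even
# though it is badly wrong for everyone. All figures are invented.

def rates(decisions, truth):
    """Return (denial_rate, error_rate) for one group."""
    denial_rate = decisions.count("deny") / len(decisions)
    error_rate = sum(d != t for d, t in zip(decisions, truth)) / len(decisions)
    return denial_rate, error_rate

# Suppose every applicant in both groups is in fact qualified.
truth_a = ["approve"] * 10
truth_b = ["approve"] * 10

# The model wrongly denies 6 of 10 applicants in each group.
model_a = ["deny"] * 6 + ["approve"] * 4
model_b = ["deny"] * 6 + ["approve"] * 4

for name, decisions, truth in (("Group A", model_a, truth_a),
                               ("Group B", model_b, truth_b)):
    denial, error = rates(decisions, truth)
    print(f"{name}: denial rate {denial:.0%}, error rate {error:.0%}")

# Both groups: 60% denial rate, 60% error rate. There is no disparity
# between groups, so an equal-treatment standard is satisfied, yet the
# system denies housing to qualified applicants across the board.
```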

The bill also draws seemingly arbitrary lines around which organizations have to comply. It exempts banks, insurers, and some healthcare providers if they're already covered by sector-specific nondiscrimination regulations, but it ropes in landlords, schools, and employers, even though they're governed by their own civil rights and consumer protection laws. To see how incoherent this is, consider that a bank using AI to decide who gets a home loan would likely not be subject to the bill's requirements, but a housing association using AI to screen tenants would be. Both decisions affect access to housing, and both sectors are already subject to anti-discrimination oversight. Drawing a bright line between them makes no sense.

If Virginia wants to meaningfully advance fairness, it should focus on the areas where state-level action can genuinely help. Instead of settling for systems that merely distribute errors evenly, Virginia should require that any high-risk AI system used by state agencies meet robust performance standards, such as minimum accuracy and maximum error rates broken down by age, race, and gender. Clear performance standards would drive better outcomes across the board and ensure that taxpayer dollars aren't wasted on tools that simply propagate flawed decision-making.
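A minimal sketch of what such a check could look like for a procured system, assuming a hypothetical per-group accuracy floor and invented evaluation records; none of these specifics appear in HB 2094:

```python
# Hypothetical sketch of a disaggregated performance check for a
# state-procured system. The 90% floor, the group labels, and the
# evaluation records are all invented for illustration.

from collections import defaultdict

MIN_ACCURACY = 0.90  # hypothetical per-group procurement floor

# Invented audit records: (group, model_decision, correct_decision)
records = [
    ("18-34", "approve", "approve"),
    ("18-34", "deny",    "approve"),
    ("35-64", "approve", "approve"),
    ("35-64", "deny",    "deny"),
    ("65+",   "deny",    "approve"),
    ("65+",   "approve", "approve"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, decision, truth in records:
    total[group] += 1
    correct[group] += (decision == truth)

for group in sorted(total):
    accuracy = correct[group] / total[group]
    verdict = "meets floor" if accuracy >= MIN_ACCURACY else "BELOW FLOOR"
    print(f"{group}: accuracy {accuracy:.0%} ({verdict})")
```

In practice the breakdowns would come from a real evaluation set, but the structure of the requirement, a minimum bar every group must clear rather than mere parity between groups, is what distinguishes this approach from the bill's.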

The state government should not, however, be setting pre-deployment performance standards for the AI systems that non-government organizations use. The National Institute of Standards and Technology (NIST) has already been working on the complex task of determining which types of AI systems need to be evaluated and advising sector-specific federal regulators on how best to evaluate them, work that requires deep technical expertise and coordination across the entire private AI sector. This isn't just complex work; it demands a level of scale, alignment, and national consistency that only the federal government can provide.

Granted, states might argue that since NIST's AI work is being hollowed out, they have no choice but to step in to ensure AI works for people. But they should recognize that, however laudable their intentions, in practice they would only make things worse. The Virginia bill (like the Texas bill) wrongly assumes that transparency alone will lead to meaningful accountability. Transparency can be a useful tool, but the way the bill deploys it only entrenches flawed systems by creating the illusion of oversight without delivering any.

The bill requires deployers to conduct detailed impact assessments before launching any high-risk AI system, and again after any significant update. These reports must spell out the system’s purpose, benefits, inputs and outputs, limitations, risks, mitigation steps, monitoring plans, and accuracy metrics. All of it is handed to the Attorney General’s office, which is then expected to make sense of it, oversee compliance, and take action if needed. But that office simply isn’t equipped for the job—neither in resources nor expertise. The result will be a box-ticking exercise where companies flood the system with paperwork no one truly interrogates.

Virginia should get AI right. HB 2094 is a confused and unworkable bill that risks entrenching flawed systems under the banner of fairness. Governor Youngkin should veto it, not to halt progress, but to insist on a version that actually delivers it.
