OMB Should Help Create Standard Contractual Terms to Streamline the U.S. Government Procuring AI

June 18, 2024

Federal agencies in the United States spent almost $3.3 billion on AI contracts with the private sector in 2022, a 2.5-fold increase over spending in 2017. Unfortunately, procuring IT systems from the private sector is time-consuming and largely limited to established vendors. As the Biden administration seeks to increase the responsible use of AI in the federal government, it will need to find ways to streamline contracting for AI services, especially for startups unfamiliar with the red tape associated with government procurement. One step the Office of Management and Budget (OMB) should take is developing voluntary standard terms for AI contracts to make procurement more efficient and open federal contracting to a larger, more diverse pool of vendors, ensuring agencies can access the best systems.

First, creating standard clauses would make it easier for contracting parties to reach agreements. Currently, federal agencies have trouble negotiating contracts for AI systems with vendors because there are no common definitions for key terms in licensing agreements, as a 2022 Global Partnership on AI report notes. The report rightly recommends establishing standard definitions for terms commonly used in these contracts, such as input data, processed data, untrained models, and trained models, to facilitate clearer agreements.

Second, having a standardized set of procurement clauses covering key considerations like data quality, testing, and documentation requirements would help agencies create contracts that align with federal requirements more efficiently. For example, federal agencies must conduct and document regular impact assessments throughout an AI system’s life cycle, test AI systems in domain-specific contexts, and have an independent oversight body evaluate whether the AI systems work as intended, among other things. OMB should create standard clauses that align with U.S. requirements, just as the EU is doing for the requirements of the AI Act. The European Commission has partnered with legal experts to create standard AI procurement clauses in line with the EU’s AI Act that public organizations may use to contract with AI vendors.

Third, creating standard contract terms would avoid defaulting to European ones. The clauses the EU is creating reflect EU priorities, not American ones. For instance, one clause requires that the datasets used in developing AI systems be relevant, representative, error-free, and complete because that is a requirement of the EU AI Act. But the United States does not have these requirements, and in many cases, providing error-free or complete data is not feasible, practical, or necessary. Creating model contracts for AI systems used by the federal government will avoid inadvertently importing EU rules.

Fourth, developing standard clauses for AI contracts would make the contracting process more accessible for vendors, especially small and medium-sized businesses. Most federal contracts for AI services are awarded to companies concentrated on the East Coast, close to where federal agencies are located, despite many top U.S.-based AI firms being based on the West Coast. According to a 2021 report from the General Services Administration (GSA), approximately 87 percent of the federal contracts awarded for robotic process automation went to companies in Virginia and New York. Companies located close to federal agencies are more likely to be those already familiar with the intricacies of contracting with the federal government. That does not mean they necessarily have the best technology, just that they better understand how to navigate the notoriously complex procurement process. The larger and more diverse the pool of vendors that federal agencies can contract with, the better access they will have to the best systems, including those that mitigate risks to privacy, civil rights, and civil liberties.

Developing effective standard terms requires a collaborative, multi-stakeholder process to ensure clauses are practical and technically feasible for all parties, increasing the likelihood of adoption and effective implementation. The European Commission is supporting peer review of its draft clauses by external experts through public workshops and requests for comments; OMB should follow suit for any standard clauses it helps create for the U.S. federal government.

Finally, it is important to note that federal agencies will likely engage vendors in a wide range of arrangements and use cases for AI. The goal should not be to create one-size-fits-all contract language, but rather a menu of provisions that federal agencies can draw from as appropriate.