State Department Risks Overlooking Potential of AI for Human Rights

May 29, 2024

President Biden’s 2023 executive order on artificial intelligence (AI) directed the State Department to work with other agencies and stakeholders to develop guidance for identifying and managing human rights risks associated with AI. As the State Department prepares this guidance, it should emphasize that in many cases, the risk of inaction—the missed opportunities to use AI to improve human rights—presents the most significant threat, and it should prioritize deploying AI to support and enhance human rights.

Critics have long argued that digital technology generally, and AI specifically, threatens human rights. For example, Amnesty International warned earlier this year that AI could be used for “societal control, mass surveillance, and discrimination.” And the Freedom Online Coalition, a group of 32 countries working together to support Internet freedom, has issued a joint statement arguing that “states should consider how domestic legislation, regulation, and policies can identify, prevent, and mitigate risks to human rights posed by the design, development, and use of AI systems, and take action where appropriate.” In other words, these critics want policymakers to weigh in on the design and use of AI to prevent governments from using the technology for repressive and authoritarian purposes.

What might this look like? The State Department issued guidance in 2020 to U.S. businesses on how “to prevent their products or services with surveillance capabilities from being misused by government end-users to commit human rights abuses.” These guidelines were voluntary but applied to a broad set of tech companies with products involving data analytics, recording devices, sensors, or biometrics. While some of the recommendations reflected commonsense wisdom, such as not selling products to a foreign government with a history of human rights violations, others called for more intrusive technical changes to products, such as implementing kill switches that allow a vendor to “terminate access if necessary.” More recently, a similar idea, establishing kill switches to remotely deactivate AI chips used inappropriately by foreign entities, has gained some currency in policy circles. Most customers are likely to bristle at such technical measures, which put them at the mercy of their suppliers, so these types of controls would undermine the competitiveness of U.S. businesses.

Ultimately, trying to keep AI out of the hands of repressive governments will be a fool’s errand as the technology becomes more widely available. Encryption provides a useful parallel. Encryption allows repressive governments to coordinate and hide human rights violations under a veil of secrecy, yet the same technology is widely used by dissidents, journalists, and ordinary citizens to communicate privately, protecting free speech. Indeed, the State Department has itself funded projects to develop technology that increases access to encrypted communications. While there may be some steps businesses can take to deter bad actors from misusing their AI tools, holding countries accountable for human rights violations, whether or not they involve AI, will be the primary deterrent to abuse.

The State Department should instead prioritize opportunities to use AI as a technology that enhances human rights. Some policymakers already pay lip service to the idea that AI can improve people’s wellbeing, but they do not recognize its potential to support or enhance specific human rights. For example, AI analysis of satellite imagery can help identify instances of forced labor, supporting efforts to ensure no one is held in slavery. AI can enable affordable, inclusive, and personalized learning, enhancing people’s right to education. AI can reduce the number of dangerous, dull, and dirty jobs, supporting people’s right to rest and leisure. And AI can replace biased human decisions with less biased automated ones, protecting individuals from discrimination. In each of these cases, AI can strengthen human rights.

The National Institute of Standards and Technology (NIST) has produced an AI Risk Management Framework (RMF) that organizations can use to identify and respond to potential risks from AI. Notably, the AI RMF repeatedly emphasizes that organizations should consider both potential benefits and potential harms in their analysis. It is critical that the State Department incorporate this thinking in addressing human rights, particularly because of the risk of inaction. For example, decisions that delay the deployment of AI could stall the development of beneficial new medicines or lower-cost health care interventions. Without this kind of analysis, these costs, which are very real and can be substantial, will remain invisible.

Widespread adoption and use of AI is not a given, especially for applications that support and enhance human rights. With its mandate from the executive order, the State Department has an opportunity to chart a new path on AI for human rights. Its goal should be not only to prevent misuse of the technology in ways that hurt human rights, but also to identify and accelerate opportunities to use AI to foster human rights globally.