No, AI Is Not a Surveillance Technology
At the TechCrunch Disrupt conference this year, Meredith Whittaker, president of the encrypted messaging app Signal and long-time critic of large tech companies, made headlines when she declared that “AI is a surveillance technology.” Her message was not exactly original—many others have made the same dubious claim—but it shows how privacy activists have set their sights on AI and begun to falsely portray the technology as an invasion of privacy.
To understand why privacy activists are calling AI a “surveillance technology,” it is necessary to understand the history of this phrase. It is an invented term, coined by privacy advocates as a way of denigrating certain electronic products and services, especially those used by law enforcement and the intelligence community to monitor individuals. But the definition is so broad that it is almost meaningless. For example, the American Civil Liberties Union (ACLU) defines surveillance technology as:
any electronic surveillance device, hardware, or software that is capable of collecting, capturing, recording, retaining, processing, intercepting, analyzing, monitoring, or sharing audio, visual, digital, location, thermal, biometric, or similar information or communications specifically associated with, or capable of being associated with, any specific individual or group; or any system, device, or vehicle that is equipped with an electronic surveillance device, hardware, or software.
By this definition, almost any modern digital device—including digital cameras, smartphones, laptops, routers, and televisions—is a surveillance technology simply because it processes data. Indeed, the ACLU implicitly acknowledges this problem by offering a list of technologies that meet the above definition but that it does not recommend policymakers include in legislation, such as printers, email systems, and audio recorders. Thus the ACLU’s definition becomes purely subjective and effectively amounts to a list of technologies that privacy activists oppose, such as automatic license plate readers, facial recognition systems, RFID scanners, body-worn cameras, and gunshot detection systems.
Naturally, there are certain technologies used routinely as part of surveillance, from binoculars and cameras to GPS trackers and hidden microphones, but many of these technologies have both surveillance and non-surveillance uses. For example, sports fans might use binoculars to watch a game while outdoors enthusiasts might use them to watch wildlife, so labeling binoculars a “surveillance technology” is misleading at best. However, there are clearly some technologies, such as wiretaps, whose primary purpose is surveillance.
The primary purpose of AI, however, is not surveillance. AI is fundamentally about creating computer systems that can perform tasks that would typically require human intelligence, such as making predictions, interpreting data, and interacting with people and other machines. Consider some of the leading use cases for AI. In health care, AI can analyze medical images to detect tumors or predict a new drug’s efficacy and toxicity based on its chemical structure. In agriculture, AI can optimize crop yields based on weather forecasts and detect issues such as pests and disease. And in manufacturing, AI can turbo-charge assembly lines using robotics and reduce downtime with predictive maintenance.
The widespread potential benefits of AI are well-documented, so why are privacy activists making these disingenuous claims about AI being fundamentally a surveillance technology? There are likely a few reasons. First, many of their objections have less to do with AI and more to do with long-standing opposition to various applications, such as facial recognition, surveillance cameras, and predictive policing. They have generally lost past debates over these technologies, but by labeling them AI while policymakers around the world are considering new rules for AI, they have another chance to ban or curtail them.
Second, they are attempting to link AI with “surveillance capitalism,” a term used by privacy activists to describe the alleged threats to individuals and society from businesses monetizing the collection and use of personal data. Indeed, Whittaker made this point explicitly at the TechCrunch conference when she said, “AI is a way to entrench and expand the surveillance business model.” But again, this represents a superficial view of AI’s potential, given that many important applications—even from the latest round of generative AI models, such as generating computer code or digital images—have nothing to do with collecting personal data.
Finally, privacy activists are using fear-mongering terminology as a preemptive strike against a technology that will likely diminish support for their anti-tech agenda. After all, their longstanding claim that consumers have gotten a bad deal as big tech companies gobbled up data—a clear myth given that consumers value the free services they receive much more than the data they share—has become even more dubious as the latest round of AI tools has achieved widespread public adoption. In addition, big tech companies have created enormous public value from their investments in AI. Indeed, Meta, one of the most frequent targets of privacy activists, has even made its Llama AI models freely available for public use.
Privacy activists are likely to continue unleashing a torrent of criticism against AI, not because the technology presents substantial new privacy risks but because it is the only way that these groups can stay relevant in a fast-moving policy environment. Indeed, ITIF’s past work documenting the tech panic cycle predicts exactly this behavior. While policymakers should remain attentive to addressing potential harms associated with emerging technologies, they should continue to treat AI as a general-purpose technology and focus on maximizing its many beneficial applications.