The Techlash Has Made It So Big Tech Can Do No Right
Last week saw a flurry of criticism leveled against the most prominent tech companies… which made it just like every other week. The reflexive tendency to critique tech companies has become so ingrained, and the need to one-up past denouncements so absurd, that the entire ritual has become almost farcical. And yet the ongoing “techlash”—the undercurrent of animus and suspicion that pervades discussions involving large tech companies and their technology—has created an environment in which policymakers, the media, and the public take these complaints seriously, often without scrutinizing their validity.
Consider the claims that served as plot lines in last week’s three biggest dramas.
Claim #1: Apple Is Engaging in Mass Surveillance
Apple announced that it is launching a new method to detect child sexual abuse material (CSAM) in images users upload to iCloud, its cloud-based storage service, in the next version of its operating system. Before users upload images from their devices, Apple will check whether they match a database of known CSAM image hashes provided by the National Center for Missing and Exploited Children. Apple designed the system to tolerate a few false positives, but if the amount of flagged content exceeds a certain threshold, Apple will review the report, and if it confirms a match, disable the user’s account and report it to the appropriate authorities.
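The flow Apple describes—hash each upload, match it against the known database, and escalate only past a threshold—can be sketched roughly as follows. This is an illustrative simplification, not Apple's implementation: the names `KNOWN_HASHES` and `MATCH_THRESHOLD` are hypothetical, and the real system uses perceptual NeuralHash digests combined with cryptographic blinding and threshold secret sharing, rather than plain hashes compared in the clear.

```python
import hashlib

# Hypothetical stand-in for the NCMEC-provided database of known-image
# hashes. Apple's actual system matches perceptual (NeuralHash) digests
# under cryptographic blinding; plain SHA-256 is used here purely to
# illustrate the match-and-threshold flow.
KNOWN_HASHES: set[str] = set()
MATCH_THRESHOLD = 3  # hypothetical; human review only fires above this count

def hash_image(data: bytes) -> str:
    """Digest an image's bytes (stand-in for a perceptual hash)."""
    return hashlib.sha256(data).hexdigest()

def count_matches(uploads: list[bytes]) -> int:
    """Count how many uploads match the known-hash database."""
    return sum(1 for img in uploads if hash_image(img) in KNOWN_HASHES)

def should_flag_for_review(uploads: list[bytes]) -> bool:
    """Escalate to human review only past the threshold, so a few
    false positives never expose an account."""
    return count_matches(uploads) > MATCH_THRESHOLD
```

The threshold is the privacy-preserving piece of the design: an isolated match reveals nothing and triggers nothing, so ordinary users' libraries are never reviewed.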
Targeting individuals who upload CSAM images from their Apple devices while preserving privacy for everyone else should not be controversial, but critics quickly slammed the company. Ross Anderson at the University of Cambridge called it “an absolutely appalling idea, because it is going to lead to distributed bulk surveillance.” Alec Muffett, a privacy activist, labeled it a “huge and regressive step for individual privacy.” And Matthew Green at Johns Hopkins University derided Apple as “basically capitulating to the worst possible demands of every government.”
These are bold and outrageous statements. Virtually all major consumer-facing cloud services scan images users upload for CSAM images, so the real announcement here is that Apple has built a way to scan for these images while better protecting user privacy. That’s a good thing. Calling this mass surveillance is not only completely misleading, it also suggests that any technical control a company might implement to limit users from sharing illegal materials amounts to an inappropriate violation of user trust and privacy.
Claim #2: Facebook Is Censoring Academics
Facebook announced that it disabled access to its platform for three New York University (NYU) researchers who refused to comply with its terms of service. The NYU researchers had built a browser extension to scrape Facebook data to monitor ads, and Facebook had repeatedly warned the researchers since last summer that their software violated its policies. After the NYU team refused to modify its approach or adapt its research to use one of Facebook’s authorized data sources, the company finally booted them.
Critics immediately slammed the company. The NYU researchers claimed Facebook was “taking measures to silence us,” while Sen. Mark Warner (D-VA) labeled the actions “deeply concerning” and Sen. Ron Wyden (D-OR) lambasted the company for what he called efforts to “crack down on researchers.”
Yet this outrage is entirely misplaced. While there is nothing wrong with researchers wanting access to more data, wanting it does not entitle them to break the rules to get it. After all, it was only a few years ago that Facebook faced global scorn for failing to realize the adverse implications of an academic harvesting user data in the Cambridge Analytica incident. Having learned a painful lesson about blindly trusting academics, it is completely reasonable for Facebook to cut off another group of academics who defiantly refused to adhere to its terms of service. To ignore such a violation would expose the company not only to scorn and condemnation but also to potential liability.
Claim #3: Big Tech Is Bullying the FTC
A group of Democratic members of Congress—Sens. Elizabeth Warren (D-MA), Richard Blumenthal (D-CT), and Cory Booker (D-NJ), and Rep. Pramila Jayapal (D-WA)—sent a letter to the CEOs of Amazon and Facebook demanding that they withdraw their requests that FTC Chair Lina Khan recuse herself from antitrust matters involving their businesses. According to these lawmakers, their requests, which are currently in the hands of the FTC, are examples of big tech trying to “bully your regulators” and “discredit Chair Khan on ethics grounds.”
The rationale for asking for Chair Khan’s recusal is straightforward: Prior to her appointment to the FTC, she repeatedly stated publicly that she believes these companies have violated antitrust laws, and she therefore appears to have prejudged the outcome. Amazon, for example, notes in its petition that “a reasonable observer would conclude that she no longer can consider the company’s antitrust defenses with an open mind.”
Whether or not you believe it is necessary for Chair Khan to recuse herself from these proceedings to ensure due process, it is certainly within these companies’ rights to make that request. Anyone interested in seeing a fair legal process should welcome that review, especially if they believe the facts are on their side. Furthermore, it is laughable to suggest that the companies hold all the power in this situation. They have little immediate control over whether she chooses to recuse herself, they do not know what steps the FTC might take in the near future—and, if anything, they further antagonize Chair Khan at their peril.
How Critics Become a Mob
Skimming last week’s headlines, most readers would come away thinking tech companies are systematically surveilling the planet, undermining crucial research, and abusing government officials. But once one moves past the heated rhetoric, a different picture emerges, and whether Big Tech’s critics want to admit it or not, it is not nearly so damning. A more objective account would say that Apple took steps to stop child sexual abuse while protecting consumer privacy, Facebook enforced its rules to ensure academics do not misuse user data, and Amazon and Facebook called on a federal agency to act ethically.
Those headlines would be less exciting, and since bad news sells, nobody should expect media coverage to change anytime soon. But it is worth remembering that when something is routinely vilified, it is often hard to see it as anything but the villain. No company is perfect, and criticism is sometimes warranted, but when detractors become so blinded by their hostility that they cannot even acknowledge when these companies do something right, their critiques cease to be part of a productive debate and instead take on the character of an angry mob.