
A Policymaker’s Guide to the “Techlash”—What It Is and Why It’s a Threat to Growth and Progress

Growing animus toward “Big Tech” companies and generalized opposition to technological innovation engenders support for policies that are expressly designed to inhibit it. That is deeply problematic for future progress, prosperity, and competitiveness.

KEY TAKEAWAYS

The “techlash” phenomenon refers to a growing animus toward large technology companies (a.k.a., “Big Tech”) and to a more generalized opposition to modern technology itself, particularly innovations driven by information technology.
As the techlash has gained momentum, there has been rising support for policies expressly designed to slow the pace of innovation, including bans, taxes, and stringent regulations on certain technologies.
The concerns being raised about technology today are not all frivolous or without merit. But overall, succumbing to techlash is likely to reduce individual and societal welfare.
Policymakers should resist techlash and embrace pragmatic “tech realism”—recognizing technology is a fundamental force for human progress that also can pose real challenges, which deserve smart, thoroughly considered, and effective responses.


Contents

Key Takeaways
Introduction
What Is the Techlash and Where Did It Come From?
Why Techlash Matters
22 Techlash Issues
Getting to a New Acceptance: Not Tech as Savior, Not Tech as Enemy, but Tech as a Valuable Tool
Endnotes

Introduction

Does information technology (IT) solve problems and make our lives easier, allowing us to do more with less? Or does it introduce additional complexity to our lives, isolate us from each other, threaten privacy, destroy jobs, and generate an array of other harms? As Microsoft President Brad Smith has asked in his new book, is technology a tool or a weapon?[1] Until quite recently, the answer for most people would have been the former—that it is a valuable tool that makes our lives and society better. But in the last several years, views have shifted, particularly among opinion-leading elites who now finger “Big Tech” as the culprit responsible for a vast array of economic and social harms. Termed the “techlash,” this phenomenon refers to a general animus and fear, not just of large technology companies, but of innovations grounded in IT.

While the evidence suggests the public is more comfortable with modern technology than many pundits, activists, and politicians are—consumers still line up to buy the latest iPhones, and they use social media at record levels—the techlash is still, we believe, an important issue. Techlash manifests not just as antipathy toward continued technological innovation, but also as active support for policies that are expressly designed to inhibit it. This trend, which appears to be gaining momentum in Europe and some U.S. cities and states, risks seriously undermining economic growth, competitiveness, and societal progress. These policies are not rational, but the techlash has created a mob mentality, and the mob is coming for innovation.

What Is the Techlash and Where Did It Come From?

Until recently, IT and the companies that produce IT-driven products and services were largely seen in a positive light. Indeed, media coverage of technology in the 1980s and 1990s was extremely favorable, with a preponderance devoted to the advantages afforded by technological advances.[2] Even as late as 2010, when the Arab Spring uprisings began and protestors used social media to organize, unify, and get their messages out, the Internet was seen as a liberating force. The media referred to these and similar events as Iran’s “Twitter Revolution,” Egypt’s “Facebook Revolution,” and Syria’s “YouTube uprising.”[3] In 2010, Time featured Mark Zuckerberg as its “Person of the Year” for connecting people, mapping social relations, creating a new system of exchanging information, and changing how we all live our lives.[4] Netflix was “killing piracy.”[5] And Spotify was a growing and popular start-up that let users download and stream songs for free.[6] Google had “amazing people,” and its founders were among the world’s top “tech geniuses.”[7] In 2011, the world mourned the loss of Apple visionary Steve Jobs, who had launched the “magical” smartphone.[8] Amazon was seen as providing more choice and liberating convenience to tens of millions of consumers.[9] Massive open online courses were democratizing education.[10] In short, technologies and Big Tech were catalysts for positive and needed change.[11]

But that optimistic tone has now turned markedly dark, with significantly more attention focused on the purported ill effects of technology: its displacement of face-to-face interactions, role in environmental degradation, threat to employment, and overall failure to live up to some of the more grandiose predictions about its impact.[12] All of this culminated in “techlash” being a runner-up for Oxford Dictionaries’ 2018 word of the year. Oxford defines the term as the “strong and widespread negative reaction to the growing power and influence that large technology companies hold.” But this is too narrow. Techlash, in fact, represents something broader: a negative reaction not just to a few large technology companies, but to technology itself, particularly IT. Indeed, the backlash against technologies such as facial recognition, e-scooters, and sidewalk delivery robots is not so much about the size and nature of the companies making them, but a reflection of souring views toward the technologies themselves.

Techlash did not emerge spontaneously. Even during the period when IT was largely seen as a positive, liberating force, there were strong undercurrents of techlash emanating largely from pundits promoting jeremiads—and themselves. Katherine Albrecht warned that radio frequency identification (RFID) devices were, as the title of her book indicates, Spychips. Jeremy Rifkin provided the titular warning of the coming End of Work. Evgeny Morozov wrote about The Net Delusion. Nick Carr asked whether Google is “making us stupid” and argued that “IT Doesn’t Matter,” and Andrew Keen penned The Internet Is Not the Answer. Jaron Lanier asked in his book, Who Owns the Future? (Big Tech does). Susan Crawford warned in her book, Captive Audience, that we are all just that to rapacious “Big Broadband.” Scott Galloway even argued that Big Tech companies are responsible for virtually every economic and social ill facing America, calling them “Silicon Valley’s Tax-Avoiding, Job-Killing, Soul-Sucking Machine.”[13] These and other pundits worked tirelessly to lay the intellectual groundwork for the techlash. They were answering often overly utopian claims about IT with distinctly dystopian counterclaims.

But the fuel for the techlash fire came at least in part from actual events, including, among others, the revelations that Russia used social media platforms to interfere in the 2016 U.S. elections, that Cambridge Analytica misused Facebook data for political purposes, and that Google was investigated for antitrust violations. Panic spread on a parallel track as new technologies such as deep learning, certain forms of artificial intelligence (AI), and autonomous vehicles came to be seen as both transformative and imminent. Even technology entrepreneurs joined the fray. Elon Musk made news around the world by claiming AI was a “demon” that posed an existential threat to the human race.[14] Bill Gates warned automation was proceeding so quickly that governments should tax robots in order to slow its progress.[15] And in a widely cited but fundamentally flawed study, Oxford scholars Michael Osborne and Carl Benedikt Frey predicted technology would eliminate almost half of American jobs within 20 years.[16]

On top of that, initial excitement about the marvels of IT was wearing off. Most people began taking for granted they could use a search engine to access foreign-language news sites for free and have their web browsers automatically translate them into English. Or that they could order products with a couple of taps on their phone and have them show up on their doorstep the next day. Ho-hum.

All of this created a perfect storm for a full-fledged techlash. IT is now widely criticized, at least by elite influencers, for contributing to a host of harms. There is a broad audience ready and willing to believe the “tech is bad” narrative. And a wide range of activists rely on and stoke these conditions to help advance long-held policy agendas (e.g., strong privacy legislation, public ownership of broadband services, an economy dominated by small businesses instead of large corporations, etc.). Indeed, technology and the big companies producing it are many activists’ favorite political hobby horse.

But how broad and deep is this techlash? To listen to those who are most heavily invested in it, the techlash reflects the sentiments of a large majority of Americans who have turned against tech and tech companies. But certainly, at the most basic level, this is not true. As Rob Walker wrote recently in The New York Times, if there were a real techlash, one would expect to see Americans reducing their use of technologies.[17] But in fact, the opposite is occurring, with use of social networks and devices increasing, not decreasing. For example, according to the Pew Research Center, 72 percent of Americans use some form of social media, a percentage that has risen steadily for years and shows no sign of flagging.[18]

A wide range of activists rely on and stoke these conditions to help advance long-held policy agendas.

Could it be that people still use and like these technologies but don’t like the companies making them, and therefore want government to act? Not according to public opinion polling. In fact, public opinion is still by and large favorable toward IT and tech companies. As of summer 2019, 50 percent of Americans believe technology companies have a positive impact on the country, according to Pew survey data, versus 33 percent who believe they are detrimental.[19] To be sure, that is down from 2010, when 68 percent believed tech companies had a positive effect, versus just 18 percent believing them to have a negative effect.[20] But according to Edelman’s Trust Barometer, a significant majority of the public still maintains a basic faith that tech companies do the right thing most of the time.[21] Their April 2019 survey found that, globally, the technology sector is the most trusted of all industry sectors, with 78 percent of respondents expressing faith in it. And even in the wake of the techlash, this was up 4 percentage points from 2015. The number is even higher for what Edelman calls the “informed” public: 84 percent. American support is somewhat lower, at 73 percent, but is still higher than it is for any other industry.[22] Indeed, 60 percent of people agree that the tech sector is conscious of societal impact and contributes to the greater good. However, Edelman does report that trust in search engine companies and social media platforms declined significantly, from 53 percent in 2017 to 42 percent in 2018. Moreover, 47 percent believed technological innovation was happening too quickly. When it comes to specific technologies, only 56 percent of people trusted blockchain, 55 percent trusted self-driving vehicle technology, and 62 percent trusted AI.[23] We see similar findings in an August 2019 Gallup poll, wherein, of 25 industries, Americans view the computer industry second-most favorably (with a net positive score of 50 percentage points). The Internet and telephone industries rank lower, but still have strong net positives (13 and 16 percentage points, respectively).[24]

So, while public views of tech and the tech industry are less favorable than they once were, they are still by and large quite positive. Meanwhile, there is somewhat mixed evidence that Americans want elected officials to crack down on technology companies. In an Axios-SurveyMonkey online poll conducted in November 2017, 40 percent of respondents said they worried the government wouldn’t do enough to regulate tech, while 57 percent said the government would do too much.[25] In a February 2018 poll, those figures had shifted to 55 and 39 percent, respectively.[26] However, a September 2019 Morning Consult/Advertising Week poll found that the tech industry ranked 15th out of 19 industries that U.S. adults said presidential candidates should be more critical of.[27]

The techlash, in fact, appears to be driven by opinion-leading elites, advocates, and pundits. Indeed, across the political spectrum, these critics are sounding shrill alarms of gloom and doom.[28] Liberal icon Robert Reich has said Big Tech has become “way too powerful.”[29] Robert VerBruggen, writing for the conservative National Review, called Google, Facebook, and Amazon “Our Digital Overlords.”[30] And the bipartisan pairing of Bill Galston, a center-left thinker who helped shape President Clinton’s domestic agenda, and Bill Kristol, the center-right thinker and veteran of the first Bush administration, has formed a new group with a reform platform that includes “Challenging the Tech Titans.”[31] Virtually no claim about the malevolence of tech companies or the injury being caused by tech is now too outlandish to generate considerable attention—from killer AI that will enslave the human race to maps on smartphones leading to early onset of Alzheimer’s.[32] Virtually any and all negative claims are now routinely asserted and then widely circulated as truth, and repeated at TED talks, online, and elsewhere, much like other urban myths have spread.

While public views of tech and the tech industry are less favorable than they once were, they are still by and large quite positive.

Joining the fray is an array of economic interests that are more than happy to pile on the techlash, including brick-and-mortar retailers, newspapers and other media, and other industries that have been hurt by technological innovation and competition.

Against this backdrop, many elected officials appear to believe that voters are demanding action, so they have responded with proposals to control technology and Big Tech.

To be sure, fear of and opposition to technology are certainly not new. People have long opposed new technologies, fearing they would be unsafe, destroy morals, hurt jobs, harm children, and lead to a range of other purported ills. As the podcast Pessimists Archive has documented, these technologies include tunnels, the telegraph, recorded music, electricity, the elevator, and even the Walkman.[33] Indeed, many of today’s complaints mirror those of yesteryear. We have seen this before. A case in point is the turn-of-the-20th-century techlash against the automobile, wherein some places passed red-flag laws that required a person to walk in front of “horseless carriages” waving a red flag.[34] (See figure 1.) However, the scope and vociferousness of today’s techlash suggests it might be more serious than in past episodes, and as such deserves a more serious response.

Figure 1: Early techlash: red-flag laws for cars


Why Techlash Matters

If one believes IT is largely harmful—“more weapon than tool,” to use Brad Smith’s analogy without the benefit of his nuanced analysis—then techlash is a positive development, akin to the antinuclear movement of the last half of the 20th century, which raised badly needed awareness. But if one believes, as ITIF does, that tech and tech companies big and small are not only largely beneficial, both economically and socially, but vital to future progress, prosperity, and competitiveness—and that any challenges are manageable—then the techlash is deeply problematic.

Some argue that techlash is focused almost solely on big Internet and platform companies, and therefore the cause for concern is somewhat lessened. But while it is true there is less public support for the Internet industry, and that some of the policy measures (e.g., the Stop Enabling Sex Traffickers Act, laws governing the classification of independent contractors, etc.) focus more on them, techlash also leads to technology bans, taxes, and regulations that can negatively impact technological innovation more broadly.

Indeed, the risk from techlash is that it could lead to one or both of the following outcomes: a neo-Luddite “smash the machine” response, including government technology bans that would slow productivity and wage growth; or a modern-day Gulliver response wherein regulatory frameworks are so stringent and restrictive, innovation and even consumer welfare can’t flourish. (See figure 2 and figure 3.)

Figure 2: Possible response from techlash: neo-Luddite technology bans


Figure 3: Possible response from techlash: regulatory overreach to lash down tech innovation


We have already seen Europe become more skeptical of technology while applying the precautionary principle of regulating potential harms. This has had tangible consequences. According to a report by the McKinsey Global Institute, regulation and the fragmentation of Europe’s digital landscape are barriers to the advantages of digitization, especially scale.[35] And 74 percent of respondents to a survey by Bitkom, Germany’s digital trade association, said data protection requirements are the main obstacle to the development of new technologies—compared with 63 percent in 2018, and 45 percent in 2017.[36]

If the general public comes to see new technologies such as AI as harmful, their development and uptake will slow, not only among consumers, but also among businesses and governmental organizations. To thrive and be competitive in the next phase of the digital economy, countries must resist techlash and promote acceptance of technology. This starts with developing a stronger dialogue among regulators, experts, developers, and citizens, so that policy responses come only when needed and, where they are needed, are targeted and focused while enabling continued innovation. But ultimately it must include a stronger response from champions of innovation and progress, who must call out techlash for what it is: a deeply regressive force.

22 Techlash Issues

This section examines a range of technology issues that have been raised as part of techlash. To be sure, there are probably others that could have been included, and others that will surely be raised in the future. But we have focused on 22 of the most prevalent, divided into two sections: societal and economic.

Societal Issues

There are a range of claims made against tech and tech companies that relate to societal impacts, in areas such as privacy and human well-being.

Claim #1: Tech Companies Are Destroying Consumer Privacy

Perhaps the most pervasive criticism of large Internet and tech companies is that they have given rise to so-called “surveillance capitalism”: the idea that pervasive data collection, including, but not limited to, tracking on websites, is eroding all privacy online.[37] Shoshana Zuboff, who wrote a book with the provocative title The Age of Surveillance Capitalism, has stated:

Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data. Although some of these data are applied to product or service improvement, the rest are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as ‘machine intelligence,’ and fabricated into prediction products that anticipate what you will do now, soon, and later.[38]

Many pundits have echoed this type of criticism, lamenting that companies are providing free services to billions of people in exchange for access to their personal data. The common refrain is, “If you’re not paying, you’re not the customer—you’re the product.”[39] Policymakers have responded to these concerns by pushing for stronger rules on privacy, from the General Data Protection Regulation in Europe to California’s new privacy law, which is set to go into effect in 2020.[40]

But these concerns are often amplified by stories exaggerating the extent of such tracking.[41] For example, a reporter for The New York Times claimed to have been followed around by hundreds of digital “trackers” after visiting 47 websites.[42] But, as it turned out, many of the digital “trackers” the investigation discovered were not actually tracking users at all, but were scripts, images, and cookies designed to help each website function.[43]

Moreover, these criticisms are wrong in a number of ways. First, “surveillance” implies consumers are unsuspecting subjects being spied on, when in fact, most companies are clear about their growing use and reliance on data. Moreover, most consumers happily accept free services knowing full well they are providing data.[44] Second, Zuboff believes this relationship is “unilateral.” However, the exchange of data for services is not unilateral as long as individuals have the option of using alternative offerings and technologies that do not collect data. There are many of these alternatives available, from companies that offer subscription services without tracking to privacy-enhanced free options that do not use targeted advertising. For example, consumers can use the search company DuckDuckGo as an alternative to Google.[45]

Third, Zuboff’s underlying claim that surveillance capitalism is the result of companies collecting vast amounts of data to control consumers mischaracterizes the valuable work many companies are doing to use data to make better decisions, increase productivity, and deliver customized services. Indeed, consumers are not mindless sheep evil companies can control with access to their usernames and basic demographic information—they are active participants benefiting from sharing their personal data. Not only do users benefit from access to free and low-cost services, but they are increasingly empowered by the services’ data through device feedback loops (e.g., using data from personal fitness trackers to improve their health).

Rather than sell personal data to advertisers, most platforms keep control of this information and only sell advertisers access to users.

Moreover, there is widespread misunderstanding about how companies use data, with many of the companies accused of “selling” consumer data when they in fact are not. Rather than sell personal data to advertisers, most platforms keep control of this information and only sell advertisers access to users. Advertisers pay to reach an audience based on several factors, such as geographic location, particular interests or characteristics, and behaviors—including the use of other online services. For example, an advertiser could pay to reach politically inclined males in the District of Columbia metro area. However, advertisers can only access aggregate details about the users who see their advertising campaigns. So, an advertiser might see that its ad reached 500 males in the D.C. area, but would be unable to see personally identifiable information, such as who those individuals are and if, for instance, any of them are members of Congress.
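To make the distinction concrete, the sketch below is a simplified, hypothetical illustration (not any platform’s actual system) of how an ad platform can report campaign reach to an advertiser only in aggregate, while the user-level records never leave the platform.

```python
# Hypothetical illustration: the platform holds user-level records internally,
# but the advertiser only ever receives aggregate campaign statistics.

user_records = [  # held by the platform, never shared with advertisers
    {"user_id": 101, "region": "DC metro", "interest": "politics", "gender": "male"},
    {"user_id": 102, "region": "DC metro", "interest": "politics", "gender": "male"},
    {"user_id": 103, "region": "Richmond", "interest": "sports",   "gender": "female"},
]

def report_campaign_reach(records, region, interest, gender):
    """Return only an aggregate count for the targeted audience, with no identities."""
    matches = [r for r in records
               if r["region"] == region and r["interest"] == interest and r["gender"] == gender]
    return {"audience": f"{gender}, {interest}, {region}", "reach": len(matches)}

# The advertiser sees something like {'audience': 'male, politics, DC metro', 'reach': 2},
# not who those individuals are.
print(report_campaign_reach(user_records, "DC metro", "politics", "male"))
```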

This is not to say there haven’t been cases wherein companies, particularly data brokers, have gone too far in collecting and selling data without consumer notice and choice, such as selling mailing lists of rape victims and people with genetic diseases.[46] But these extreme outliers do not represent the typical data collection and sharing practices of the average retailer or website, nor should they be the basis for sweeping data privacy laws and regulations. Rather, Congress should pass national privacy legislation that is focused, and includes provisions to ensure adequate enforcement against companies that violate privacy law.[47] Targeted, substantial harms-based regulations and enforcement can help policymakers pinpoint data misuse and make the aggrieved whole.[48]

Claim #2: Online Platforms Are Exploiting Consumers

While some say companies collecting enormous volumes of personal data is a violation of consumer privacy, others argue this data collection is exploitative not because it undermines consumer privacy, but because consumers are not receiving adequate compensation. They believe consumers in the digital economy are getting a raw deal because their data is worth more than the goods and services they get in exchange for it.[49]

Policymakers alarmed by this assumed information asymmetry, such as Sens. Josh Hawley (R-MO) and Mark Warner (D-VA), have pushed for laws that would force companies to tell consumers the estimated value of their data.[50] Others argue the government should give consumers property rights over their personal data.[51] And some pundits claim this imbalance is so great that Internet companies should actually pay users for their data. New York Times journalist Eduardo Porter has written, “Getting companies to pay transparently for the information will not just provide a better deal for the users whose data is scooped up as they go about their online lives. It will also improve the quality of the data on which the information economy is being built.”[52] Tech critic Jaron Lanier even went so far as to say that because tech would destroy most jobs, people should earn a living by having companies pay them for using the Internet.[53] These calls surely motivated California Governor Gavin Newsom to propose a “data dividend” that would require companies to pay users for their data.[54]

However, the exchange of data is a fundamentally different exchange of value than other transactions. Unlike most goods, data is non-rivalrous: Many different companies can collect, share, and use the same data simultaneously. Similarly, when consumers “pay with data” to access a website, they retain the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services. This exchange of data for services is not a zero-sum game; businesses and consumers mutually benefit from sharing data.

Many activists are calling for the proverbial “free lunch.” They want users to get access to free services without providing their personal information in return—the equivalent of wanting to watch television without ads or cable subscriptions.

Moreover, requiring companies to pay for user data would likely force them to stop placing targeted ads, which would lower their revenues and lead them to cut their services or shut down entirely. Others would switch to subscription models that do not rely on monetizing data—something that would only exacerbate the digital divide.[55] This is particularly true at a global level. Officials in some developing nations wrongly use the analogy that data is the new oil, and worry tech platforms are exporting their citizens’ data and exploiting them in return. The reality is that tech platforms based in developed nations are likely subsidizing most residents of lower-income nations: Because those residents spend little, their data is worth little, yet they receive the same digital goods and services as rich consumers in Beverly Hills, California, or Great Falls, Virginia.

Finally, the “dividends” produced would be vanishingly small. For example, if Google and Facebook had doled out half of their 2017 profits to their users around the world, the checks would have been worth just $3 each.[56]
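A rough back-of-the-envelope calculation illustrates why the checks would be so small. The profit and user figures below are approximate assumptions chosen only to show the order of magnitude, not audited values.

```python
# Illustrative back-of-the-envelope estimate of a per-user "data dividend."
# The inputs are rough approximations for 2017, used only to show the order of magnitude.

approx_combined_profit = 29e9  # assumed combined Google + Facebook 2017 net income, ~$29 billion
dividend_pool = approx_combined_profit / 2  # half of profits paid out, per the example above
approx_users = 4e9             # assumed global users across the two companies' services

per_user_dividend = dividend_pool / approx_users
print(f"Per-user dividend: ${per_user_dividend:.2f}")  # on the order of $3-$4 per user per year
```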

The reality is that many activists are calling for the proverbial “free lunch.” They want users to get access to free services without providing their personal information in return—the equivalent of wanting to watch television without ads or cable subscriptions. But companies cannot provide goods or services without earning income, which can occur through either direct payments from customers or indirect payments from advertisers and sponsors. Rather than upset the digital ecosystem or undermine entire business models, moves that would penalize the have-nots, it is better for policymakers to create targeted data-privacy protections that promote transparency and prevent misuse of consumer data.

Claim #3: Online Companies Manipulate Consumers Through Dark Patterns

Online platforms have enabled new business models and new methods of engaging with users and potential customers. However, some argue that these business models may give companies incentives to convince users to take actions that undermine their best interests, such as spending excessive amounts of time engaging with social media, inadvertently sharing their data, or unwittingly spending money. For example, academic Zeynep Tufekci has argued that “ad-based businesses distort our online interactions… ad-based financing means that the companies have an interest in manipulating our attention on behalf of advertisers, instead of letting us connect as we wish.”[57] While businesses have always competed for potential customers’ attention, some believe online platforms create opportunities to do so that are particularly exploitative.

Today’s concerns about consumer manipulation by technology firms echo social critic Vance Packard’s warnings about supposedly manipulative television commercials in his 1957 book, The Hidden Persuaders, which focused on the ways advertisers leverage psychological techniques to be more persuasive. In particular, some allege that “dark patterns”—digital design features that rely on behavioral psychology to trick users into taking actions they would not otherwise take, such as clicking an ad or spending more time on a webpage—are central to how some firms do business.[58] For example, former Facebook president Sean Parker (and creator of the digital-piracy application Napster) said the driving question behind Facebook’s product development was, “How do we consume as much of your time and conscious attention as possible?” and that they exploited psychological vulnerabilities. “God only knows what it’s doing to our children’s brains,” Parker has lamented.[59] What exactly constitutes a dark pattern can vary from the obviously exploitative to the just slightly frustrating, and “dark pattern” has become something of a catchall term for design choices that simply might not benefit the user. Some examples are indeed nefarious and anti-consumer, such as quietly tacking on additional charges or items to e-commerce users’ shopping carts, making it difficult to cancel a subscription, and steering users away from privacy controls that limit the amount of data they share. Others are innocuous and likely consumer friendly, such as autoplaying videos and infinite scrolling. Concerns about dark patterns prompted Sens. Deb Fischer (R-NE) and Mark Warner (D-VA) to introduce the Deceptive Experiences To Online Users Reduction (DETOUR) Act in April 2019, which would prohibit large online platforms from relying on dark patterns to intentionally impair user decision-making.[60]

While many design features referred to as dark patterns are prevalent throughout the Internet, and are likely effective to some degree, it is unclear what their actual impact is on consumer behavior.[61] Moreover, prohibiting the use of dark patterns may prove difficult, as the line between a dark pattern and an effective design choice is often not clear. For example, the DETOUR Act prohibits the use of user interfaces for the purpose of “obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.”[62] Tactics such as adding things to users’ shopping carts could fit under that category and be easily prohibited, but other cases are not clear-cut. Determining the difference between a well-designed interface that makes a user want to choose a particular product or service and one that subverts a user’s autonomy into doing so may be impossible. Additionally, some features commonly described as “dark patterns,” such as infinite scrolling (which enables a webpage to continuously load more content as a user scrolls), are features many users value, even if they make users spend more time on a website than they intended. And in some cases, things described as “dark patterns” are simply age-old advertising techniques, such as e-commerce sites including a message saying “45 people are also viewing this product” to prompt users to act quickly so they don’t miss out on a deal, even though that number is often randomly generated.[63] While not particularly consumer-friendly, such tactics are typically not considered harmful. To best address truly anti-consumer dark patterns, the Federal Trade Commission (FTC) should step up its enforcement under its Section 5 Unfair and Deceptive Practices authority.
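As an illustration of how trivial such tactics can be to implement, the toy snippet below mocks up the kind of randomly generated “urgency counter” described above. It is not drawn from any real e-commerce site, and its persuasive effect does not depend on the number reflecting actual demand.

```python
import random

def urgency_banner(product_name: str) -> str:
    """Toy example of a manufactured-scarcity message: the 'viewer count' is
    simply a random number, not a measurement of actual demand."""
    fake_viewers = random.randint(20, 60)
    return f"{fake_viewers} people are also viewing {product_name} right now!"

print(urgency_banner("this product"))
```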

To best address truly anti-consumer dark patterns, the FTC should step up its enforcement under its Section 5 Unfair and Deceptive Practices authority.

Claim #4: Social Media Reduces Societal Well-Being for Children

New cultural products have often been blamed for causing social problems. As Jason Feifer noted in his Pessimists Archive podcast, “When novels first appeared in America, they were accused of corrupting the youth, of planting dangerous ideas into the heads of housewives, and of distracting everyone from more serious, important books.”[64] In 1961, Federal Communications Commission (FCC) Chair Newton Minow called television a “vast wasteland” that provided virtually no societal value.[65]

Today, we are seeing a similar backlash, but against social media for reducing social well-being. One 2007 NPR headline read, “Study Sees Rise in Narcissism Among Students,” describing research that shows a steady increase in scores for a test designed to identify narcissistic personality traits, which the researchers said was fueled by visits to websites such as MySpace and YouTube.[66] Though the study found that scores had increased since the test’s introduction in 1982, well before the advent of social media, the framing of the article reflects an increasingly common concern that social media use has adverse impacts on mental health and causes antisocial behavior.

These concerns are aptly laid out in a 2017 Atlantic article titled “Have Smartphones Destroyed a Generation?” written by Jean Twenge, a psychology professor and author of the books Generation Me and iGen.[67] Around 2012, the share of Americans who owned a smartphone surpassed 50 percent for the first time, which Twenge argues marked an inflection point in the well-being of American teenagers: Rates of teen depression and suicide have increased significantly since 2011; 12th-graders in 2015 went out less frequently than 8th-graders did in 2009; 56 percent of high school seniors go on dates, while 85 percent of baby boomers and Gen Xers did as high school seniors; the number of teens who met with friends almost daily decreased by 40 percent from 2000 to 2015; teens’ level of reported happiness is inversely proportional to the number of hours they spend on social media; boys’ depressive symptoms increased by 21 percent from 2012 to 2015, while girls’ increased by 50 percent—the list goes on.[68] Twenge also noted that some of this decline in well-being, particularly for girls, could stem from an increased likelihood of experiencing cyberbullying—according to Pew survey data from 2018, 59 percent of U.S. teens report having been bullied or harassed online.[69]

The trends Twenge identified are real and disturbing, but it would be overly simplistic to lay the blame solely at the feet of social media companies. First and foremost, a correlation between social media use and reduced well-being does not prove causation. For example, in response to a 2019 study from the American Psychological Association finding a link between social media use and a rise in mental health disorders in teens, Aaron Fobian, a clinical psychologist and assistant professor in the department of psychiatry at the University of Alabama at Birmingham, warned, “We can’t say for certain that the rise we’re seeing is the direct result of social media use. For example, teens could have depressive or anxious symptoms and therefore spend more time on social media outlets to look for a way to connect.”[70] While this, of course, does not mean causation does not exist, it highlights the need for increased research on the links between social media use and well-being.

Second, social media platforms can be used responsibly, so many of these problems arise from systemic failures to address other aspects of these issues effectively. Parents bear much of the responsibility for their children’s behavior, and can and should monitor and set rules on their access to technology, including restrictions on screen time and monitoring of use. Technology companies can do more to help, and they are. For example, Apple recently upgraded its iPhone software to include parental controls, including screen-time controls; and in 2018, Facebook developed screen-time monitoring and control tools for its website and Instagram.[71]

Laws addressing bullying should include proportionate and appropriate punishments for offenders, and cover bullying and cyberbullying that occurs outside of school property.

With regard to cyberbullying, a 2019 Pew survey found that many teens experience cyberbullying, and that majorities of Americans feel the relevant groups do only a fair or poor job of addressing the issue: 66 percent say so of social media sites, 55 percent of law enforcement, 58 percent of teachers, 64 percent of bystanders, and 79 percent of elected officials.[72] Many social media platforms are making efforts to combat cyberbullying. Instagram has a history of developing digital tools to moderate content ranging from spam to offensive comments. For example, in 2018 it launched an AI-based tool to detect bullying in comments on photos, and it announced in 2019 that it would deploy AI to detect bullying in the photos themselves.[73] However, governments should also enact legislation that makes bullying, including cyberbullying, a crime. Though all 50 states have laws that address bullying, these laws vary widely: Very few identify bullying as a criminal offense, some do not define cyberbullying as bullying, some do not require schools to take action to prevent bullying, and some only cover bullying that occurs on school campuses.[74] Laws addressing bullying should include proportionate and appropriate punishments for offenders, and cover bullying and cyberbullying that occurs outside of school property.

Even with increased parental involvement in controlling children’s online activities, and laws that productively address cyberbullying, it is clear more needs to be done to address youth mental health, regardless of any connection to social media usage—for example, increased access to mental health services and interventions to help children adapt to and use technology in ways that are not deleterious are sorely needed.

Claim #5: The Internet Creates Filter Bubbles

Techlash critics argue the Internet, especially through social media and search engines, polarizes societies through filter bubbles—echo chambers in which Internet users only consume information from like-minded sources.[75] Activist Eli Pariser coined the term “filter bubble” in 2011, but as early as 2009, American legal scholar Cass Sunstein predicted individuals would increasingly inhabit echo chambers. Sunstein has since argued that personalized social media news feeds have contributed to the political divide in the United States by creating informational cocoons that breed extremism.[76] And to many, the surprise decision of U.K. voters to leave the European Union, along with the election of U.S. President Donald Trump, confirmed the existence of filter bubbles.[77] Indeed, Wired published an article titled “Your Filter Bubble Is Destroying Democracy” two days after the 2016 U.S. presidential election.[78] If filter bubbles exist, the ramifications are potentially severe, as many argue democracies require voters who understand a variety of views.[79]

While U.S. society has become more polarized, Internet technologies are not the cause. Indeed, a 2017 study found that between 1996 and 2012, the group that became the most polarized was individuals 75 and older—the group least likely to use the Internet. During the same period, the polarization of individuals 18 to 39, 80 percent of whom use social media, barely increased.[80] Moreover, a 2015 study of social media users in Germany, Spain, and the United States found that most users inhabit ideologically diverse networks, and social media use actually reduces political polarization.[81] Indeed, the research found that over 75 percent of users are in networks in which they disagree ideologically with more than 25 percent of other individuals.[82]

Research shows the alleged effects of the filter bubble phenomenon are significantly overstated or even do not exist.

Even the studies that find some evidence of filter bubbles on the Internet provide only tepid support for the hypothesis that such bubbles are significantly increasing polarization. For example, a 2016 study found that the link between the Internet and polarization is inconsistent and may only affect individuals who frequently consume news.[83] Another study, which analyzed the web-browsing activities of 50,000 U.S. Internet users, found a link between social media and search engine use and increased polarization between individuals. However, that study also found that social media and search engine use are associated with greater exposure to material from outside individuals’ political spectrum. In addition, the researchers found that the majority of online news consumption stems from individuals visiting the home pages of their favorite news outlets, which are usually mainstream sources.[84]

As such, research shows the alleged effects of the filter bubble phenomenon are significantly overstated or even do not exist. And in some places where it does exist, it may actually be a positive feature, as it allows virtual communities to develop among tight-knit groups.[85] This does not mean digital and media literacy is not important. One of the reasons people believe filter bubbles exist is that they overestimate the degree to which individual factors impact personalized search results. Factors such as location and language, not past browsing history, are the major determinants of different search results.[86] More support for policies and programs that increase digital and media literacy, including in public schools, can help users become better consumers of news and information. In particular, this type of training can help individuals learn to differentiate between real news and fake news, as well as make use of new tools, such as browser extensions that automatically show users articles from other perspectives they might not otherwise see.[87]

Claim #6: The Internet Is Enabling Extremism and Hate Speech

Many worry that social media platforms are becoming hotspots for the proliferation of hate speech and extremism online. And indeed, there is cause for concern, as social media and other web applications offer the potential for radical groups to more easily recruit followers. One scholar found, not surprisingly, that extremism in online hate groups correlates with more online participation in the groups.[88]

However, there is no proven connection between consumption of violent extremist online content and the actual adoption of extremist ideologies or violent extremist actions.[89] A RAND Corporation study concluded that while the Internet can facilitate the process of terrorist radicalization, it neither accelerates it, nor allows radicalization to occur without physical contact, nor supports self-radicalization without contact with others.[90] Other scholars are skeptical that the Internet plays a significant role in violent radicalization.[91] Nevertheless, there are notable examples of websites, such as 8chan, that glorify radicalization, hate speech, and violence.[92] Major platforms have acknowledged that they can do more to moderate content, as with YouTube’s announcement that it will update its policies to better remove hateful and supremacist content.[93]

The proliferation of online hate is a troubling and growing issue, but policymakers need to ensure responses to it do not curtail beneficial speech.

On some websites, posts that users or the company’s automated tools flag as violating the website’s terms of service are sent to moderators for review prior to removal. Others have moderators actively remove content that violates those companies’ terms of service. In time, this process will get better as platforms develop better tools to automatically identify and remove prohibited content. The problem is that automatically identifying the correct information to take down is not easy. Satire, for example, often mirrors and mocks negative posts, and can be hard to detect. Legitimate news coverage of violence, including war crimes, may also be flagged and removed because it shares the properties of violent content.[94] However, over time and with lots of trial and error, platforms will be able to more effectively use algorithms to take down prohibited content and prioritize how items are displayed in news feeds. For example, Facebook is already very effective at automatically flagging and removing terrorist content.[95]
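A minimal sketch of the flag-then-review workflow described above might look like the following. The thresholds and scoring are hypothetical; real platforms combine far more sophisticated classifiers with large human-review operations.

```python
# Hypothetical sketch of a flag-and-review moderation queue: automated tools and
# user reports flag posts, and flagged posts go to human moderators before removal.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str
    user_flags: int = 0      # how many users reported the post
    auto_score: float = 0.0  # score from an (assumed) automated classifier, 0-1
    status: str = "published"

def needs_review(post: Post, flag_threshold: int = 3, score_threshold: float = 0.8) -> bool:
    """Send a post to moderators if enough users flag it or the classifier is confident."""
    return post.user_flags >= flag_threshold or post.auto_score >= score_threshold

def moderate(posts: list[Post]) -> list[Post]:
    """Return the queue of posts a human moderator should review before any removal."""
    return [p for p in posts if needs_review(p)]

queue = moderate([
    Post(1, "ordinary post", user_flags=0, auto_score=0.1),
    Post(2, "post reported by many users", user_flags=5),
    Post(3, "post the classifier thinks violates the terms of service", auto_score=0.95),
])
print([p.post_id for p in queue])  # -> [2, 3]
```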

The proliferation of online hate is a troubling and growing issue, but policymakers need to ensure responses to it do not curtail beneficial speech. The law at the heart of this debate in the United States is Section 230 of the Communications Decency Act, which ensures online companies are not liable for the content posted by their users.[96] This law states that Internet intermediaries are not publishers when facilitating the speech of others, such as user reviews or postings on social media. Unfortunately, some proposals would overcorrect and risk curtailing this beneficial speech. For example, David Ibsen, the executive director of the Counter Extremism Project, has called on Congress to “remove companies’ blanket protections from liability for content posted by third-parties on their platforms when that content is incontrovertibly known to be extremist in nature or otherwise harmful.”[97] Several lawmakers have suggested doing just this, such as Sen. Kamala Harris (D-CA).[98] Other senators, including Mark Warner (D-VA), Amy Klobuchar (D-MN), Ted Cruz (R-TX), and Josh Hawley (R-MO), have made similar proposals or suggestions to have government agencies police online speech on platforms in order to address various perceived problems, such as fake content and extremism.[99]

Unfortunately, the threat of liability and fines would stymie companies’ attempts to improve automated takedowns. Without Section 230, companies would be liable for errors made by automated takedowns, so they would likely overcorrect and take down legitimate content. Moreover, they would face difficulty improving these tools, because doing so requires learning what does not work, and that knowledge could itself trigger liability.[100] Rather than rush to create a new framework for regulating speech online, and risk accidentally harming legitimate speech or reducing the effectiveness of automated takedown mechanisms, policymakers should work with the private sector to improve automated takedown mechanisms, while ensuring platforms have moderation policies that protect free speech.

Claim #7: Social Media Facilitates Disinformation and Deepfakes

Disinformation—defined as “false content spread with the intent to deceive, mislead, or manipulate”—was a problem long before the dot-com era, as virtually every mass media technology, including print, radio, and television, has been subject to manipulation, propaganda, and censorship in order to shape public opinion.[101] This manipulation may be for political expediency, such as to deceive voters, or for financial gain, such as when unscrupulous traders manipulate financial markets in order to defraud investors. For example, railing against Big Telegraph in 1872, The London Times wrote, “It is precisely the extension of the electric telegraph across the Atlantic which has facilitated the instant publication of all such words and criticisms, generally without their context and not infrequently with malicious editions in every city of the United States. The mischief that is done can hardly be overstated.”[102]

The problem has grown more acute in recent years. Disinformation from foreign actors, most notably from the Russian government and other actors under its control, was directed at shaping the outcome of the 2016 U.S. presidential election, the Brexit vote, the 2017 French presidential election, and countless other elections in Europe and elsewhere.[103] And these bad actors have used new digital tools to spread fake news more easily.[104] The principal medium for these disinformation campaigns is social media, although the effects can spill over into other media channels as well.

One method of spreading fake news is via ads on social networks. After the 2016 U.S. presidential election, Facebook discovered that the Kremlin-backed Internet Research Agency had secretly run around 3,000 ads on Facebook and Instagram that were seen by 10 million people in the United States.[105] These ads attacked Hillary Clinton, boosted Donald Trump, and fostered divisiveness in American society on hot-button topics such as race, gun rights, immigration, and LGBT issues.

In response, Facebook has announced a series of changes to prevent these types of deceptive ads in the future, including by making advertising more transparent, improving enforcement against improper ads, tightening restrictions on ad content, and increasing requirements to confirm the identity of advertisers.[106] However, there are limits to the effectiveness of these techniques. For example, social networks must also balance free-speech rights and recognize that additional restrictions on advertising can have a negative impact on beneficial activity. Facebook has noted that, while the ads run by the Internet Research Agency violated its policies because they hid the true identity of the advertiser, they did not violate its ad-content policies.[107] To help address this problem, Congress should pass legislation such as the Honest Ads Act, which would require social media companies to increase transparency of paid political advertising on their platforms and make reasonable efforts to ensure foreign entities do not purchase political ads. This type of requirement would create parity between the transparency requirements for online and offline political ads and reduce the risk of foreign interference in U.S. elections.

Another tool is bots: automated programs that often masquerade as human users on social media. Bots play an active role on social media sites. A 2017 Pew study found that bots generate two-thirds of the links to popular websites shared on Twitter.[108] And Chengcheng Shao et al. found that accounts that actively spread misinformation are significantly more likely to be bots.[109] Perhaps even more significantly, bots not only amplify fake news, but they often strategically target messages at influential users, duping these individuals into sharing fake news with their followers, thereby creating viral content.[110] Bots have also been involved in a number of financial scams. For example, researchers discovered that fraudulent activity by two bots had generated a spike in the price of Bitcoin from $150 to over $1,000 in two months.[111]

Platforms are getting better at identifying which accounts are run by bots, and then only allowing accounts that disclose this fact and engage in legitimate activities. But those using bots continue to develop techniques to evade these types of controls, creating a cat-and-mouse game between social media sites and these bad actors.
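As a toy illustration only, and not any platform’s actual detection method, a bot-screening heuristic might combine simple behavioral signals such as posting rate, account age, and content repetition; the crudeness of such signals is one reason the cat-and-mouse dynamic persists.

```python
# Toy illustration of bot screening using simple behavioral signals.
# Real platforms rely on much richer signals; this only conveys the general idea.

def looks_automated(posts_per_day: float, account_age_days: int, repeated_post_share: float) -> bool:
    """Flag an account for closer inspection based on crude heuristics."""
    score = 0
    if posts_per_day > 100:        # posting far faster than a typical human
        score += 1
    if account_age_days < 7:       # very new account
        score += 1
    if repeated_post_share > 0.8:  # mostly identical or duplicated content
        score += 1
    return score >= 2              # flag only when multiple signals co-occur

print(looks_automated(posts_per_day=250, account_age_days=3, repeated_post_share=0.9))    # True
print(looks_automated(posts_per_day=5, account_age_days=400, repeated_post_share=0.1))    # False
```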

Platforms are getting better at identifying which accounts are run by bots, and then only allowing accounts that disclose this fact and engage in legitimate activities.

Researchers have found that the bigger problem is that, even without bots, users favor falsehoods over truths on social media—with fake news reaching more people, getting reshared by more users, and spreading faster than true stories.[112] Part of the reason is likely the content of fake news itself, which is novel and elicits strong emotional reactions. And part of it likely has to do with the ease with which users on social media platforms can share content, the lack of incentives for users to vet content, and the lack of penalties for sharing false information.

A third source of disinformation is deepfakes—realistic-looking video clips altered, typically by AI, to portray someone doing or saying something that never actually happened. Deepfakes, a portmanteau of “deep learning” and “fake,” have been around since the end of 2017, created mostly by people editing the faces of celebrities into pornography. In April 2018, comedian and filmmaker Jordan Peele worked with BuzzFeed to create a deepfake of President Obama, kicking off a wave of fears about the potential for deepfakes to turbocharge fake news.[113] The concern is understandable, as deepfakes can be very realistic, are easy to make with access to enough training data and one of the many deepfake-making programs, and are easily shareable online.[114]

Deepfakes present a unique challenge, as they can fool both humans and computers, which makes it difficult for platforms to moderate this content. The private sector appears to be taking this concern seriously, as companies such as Facebook have announced significant partnerships with academic researchers in order to find solutions.[115] However, even as companies and researchers develop new tools to automatically identify deepfakes, it is likely these tools will later be used to simply create better deepfakes. Policymakers should not expect the private sector to be able to address this issue on its own, and should work with businesses, academia, and news outlets to develop additional tools and techniques to respond to this problem. However, they should not seek to limit the underlying technology that makes deepfakes possible, because this same technology has many legitimate applications in professional video editing and filmmaking.

Claim #8: Video Games Are Causing Gun Violence

In the wake of two 2019 mass shootings in El Paso, Texas, and Dayton, Ohio, President Donald Trump and House Minority Leader Kevin McCarthy (R-CA) blamed violent video games for the atrocities.[116] President Trump specifically called out “the gruesome video games that are now commonplace” for creating “a culture that celebrates violence.”[117] These claims prompted ESPN to delay airing a video game tournament, and pushed Walmart to temporarily remove all video game displays from its stores.[118]

These claims echo past moral panics. In the 1940s and 1950s, comic books were decried for causing violence, leading to at least one congressional hearing.[119] Similar concerns over video games also occurred throughout the 1970s, 1980s, and 1990s.

Starting in 1976, with a game called “Death Race” that rewarded points for driving over pedestrians dubbed “gremlins,” critics claimed violent game mechanics would prompt violent behavior in players.[120] In the 1990s, this moral panic drove activists to call on Congress to shut down arcades that featured violent games.[121] And after the release of the fighting game Mortal Kombat, Congress held a hearing in 1993 about whether it incited violence.[122]

There is no causal link between violent video games and actual violence. In fact, several studies show the opposite to be true.

The problem with this moral panic, both then and now, is that there is no causal link between violent video games and actual violence. In fact, several studies show the opposite to be true. One study from 2011 compared the volume of sales of violent video games from 2005 through 2008 with violent crime incidents, finding that when a very popular violent video game came out, violent crime actually tended to go down.[123] A previous study from 2009 also found that video games led to a decrease in violence.[124] In 2017, the American Psychological Association proclaimed, “Scant evidence has emerged that makes any causal or correlational connection between playing violent video games and actually committing violent activities.”[125] Moreover, while these violent video games are released all over the world, including in Europe and Japan, the United States has a much higher murder rate than other developed countries.[126]

Video games have been a scapegoat on the issue of gun violence in the United States for many decades. If policymakers want to address this major challenge, they should tackle it directly, through steps such as much tougher gun control laws and increased expenditures on mental health, especially in schools and for at-risk families.

Claim #9: Big Tech Is Destroying the News Industry

Journalism has been in decline in the United States over the last decade. Jobs in U.S. newsrooms dropped by 25 percent between 2008 and 2018, with the greatest decline being in newspapers.[127] These declines have continued, with the U.S. news business losing roughly 3,000 jobs in the first 5 months of 2019.[128] Much of this decline has been in local news. One article found that the circulation of metro, midsize, and small newspapers dropped around 40 percent between 2012 and 2018.[129]

Many critics, such as the nonprofit Save Journalism Project, claim tech companies such as Google and Facebook bear significant responsibility for the current state of affairs because they are taking away profits with their dominance in the digital advertising market.[130] They point to claims by the News Media Alliance, a trade association of the newspaper industry, which estimated that in 2018 Google made $4.7 billion off its Google News product—a claim many serious journalists rejected as being “absurd” and the product of “sloppy work.”[131] Others, such as Sen. Bernie Sanders (I-VT), have used these ideas to call for the FTC to use its antitrust power to break up major digital advertising platforms.[132]

While journalism has changed in recent years, most of that change stems from how digital disruption itself has affected the news industry. With the Internet came rapidly changing business models, and many newspapers lost revenue as readers began accessing free articles online and cancelling their subscriptions. Revenue from classified ads also dried up as websites—such as Craigslist, Monster.com, and LinkedIn—became more popular alternatives.[133] Classified ads were long a moneymaker for newspapers, enabling them to support the journalism side of their businesses. In response, some media companies have put up subscriber paywalls and augmented their revenues with digital advertising.[134] A number of media companies have adapted to the new digital environment, but many others have not.

Some media companies want news aggregators, such as Google News, to pay to link to their sites. But attempts at doing this have backfired. In 2014, Spain passed legislation requiring news aggregators to pay news publishers for posting links, headlines, or snippets of articles on their websites.[135] As a result, Google News shut down service in Spain, and Spanish publishers found they were worse off without these free referrals.[136] One study found the shutdown of Google News reduced overall news consumption by 20 percent, and page views on Spanish media websites by 10 percent.[137] Another study by a trade association of Spanish publishers found comparable results.[138] Similar disputes in France and Belgium have resulted in agreements between Google and local news publishers—rather than Google delinking search results.[139]

Policymakers need to be cautious with large efforts that disrupt how individuals access news.

The decline in funding for journalism, especially local journalism, is a major challenge for media companies. At the same time, the declining cost of access to news has benefited consumers and is not inherently a bad thing, as it has increased access and knowledge for many. And missteps, as Spain has shown, could actually hurt news companies more than help them. Policymakers need to be cautious with large efforts that disrupt how individuals access news, and should instead look to efforts by organizations such as the Knight Foundation, which is investing in scalable organizations that are building new business models, strengthening investigative reporting, promoting news literacy, and engaging with audiences in new ways.[140]

Claim #10: Technology Is Leading to Pervasive Surveillance

In addition to concerns about online consumer privacy, some critics are concerned that the emerging era of a fully connected world—wherein sensors, cameras, and microphones are embedded in a vast array of networked devices—will inevitably lead to pervasive surveillance. Accusations fly that with the introduction of new technologies, such as police-worn body cameras and facial-recognition systems, a Big Brother security state is tracking citizens’ every movement. These fears have led to significant public resistance to governments introducing new projects, such as ID systems and smart-city initiatives.[141]

Some are concerned about data collection by the government, such as compilations of biometric data by law enforcement agencies. For example, the Center on Privacy and Technology has accused the U.S. government of essentially creating a “virtual, perpetual line-up,” with law enforcement having access to such databases as driver’s license photos, passport photos, and mug shots.[142] Others are concerned about “Little Brother”: data collected by the private sector that government can then access to monitor its citizens.[143] Critics say their homes are no longer private, as video-equipped doorbells and smart speakers allow companies to spy on families and their neighbors.

Those who worry about government surveillance have a legitimate basis for their concerns. Governments in some countries have disturbing histories of intruding into the private lives of their citizens—and many fear they may revert to this type of activity in the future. And other countries, such as China, significantly limit the personal freedoms of their citizens, and use surveillance to threaten human rights. Concerns about surveillance reached new heights following the leak of classified documents by Edward Snowden, which showed that, at a minimum, there was a significant disconnect between the amount of surveillance conducted by the intelligence community and what many believed was lawful.[144]

Concerns about private-sector surveillance are less justifiable. While there have been some notable infractions—such as rogue employees tracking the location of ride-share customers, and engineers reviewing video and audio recordings from smart home devices without consumers’ knowledge—these incidents are uncommon. Moreover, companies have a strong market incentive not to engage in unauthorized surveillance, because they face the risk of substantial customer backlash and often government fines.

While many technologies can be used for surveillance, these types of uses are less inevitable or likely in democratic, rule-of-law nations. Governments can adopt new technologies without becoming a surveillance state by putting in place reasonable controls to ensure their uses have appropriate oversight and do not intrude on citizens’ rights. Critics often complain that adopting new technologies risks going down a slippery slope, when in practice, the slope does not appear to be too slippery. For example, in 2012, the U.S. Supreme Court ruled in United States v. Jones that police cannot use the Global Positioning System (GPS) to track individuals without a warrant; and in 2018, the Supreme Court similarly held that accessing historical cell phone location records is unlawful without a search warrant.[145] Moreover, there is a long history of members of the public expressing similar privacy concerns about new technologies such as automatic license plate readers, RFID-equipped passports, drones, and red-light cameras—and yet their sky-is-falling rhetoric has proven unfounded.[146]

Many new technologies could potentially be used to track and monitor individuals. However, the risk of these technologies being used for that purpose will remain low so long as Congress continues to provide strong oversight of law enforcement and the intelligence community, and strengthens Fourth Amendment protections where necessary to ensure government does not gain access to citizens’ location data without a search warrant. In short, the risks of mass surveillance are best addressed through the right rules, not by banning technology that is virtually always societally beneficial.

Claim #11: Internet Service Providers Want to Block and Degrade Internet Traffic, and That Would Have Dire Consequences

One of the earliest examples of techlash relates to net neutrality—the notion that all data traffic on the Internet must be treated exactly the same—which has been the subject of widespread public debate for over a decade. Net neutrality advocates have long claimed that “Big Broadband” (i.e., cable and telco broadband providers) are plotting to design and operate a network that gives them gatekeeper power to dictate what people can do and see on the Internet. If these advocates are to be believed, the stakes are high. The Electronic Frontier Foundation has claimed that “an attack on net neutrality is an attack on free speech,” and that without a neutral Internet, speech and commerce on the web could grind to a halt.[147] Activist group Free Press wrote, “Without Net Neutrality, [Internet service providers] could block speech and prevent dissident voices from speaking freely online.”[148] “Without Net Neutrality,” the group claimed, “people of color are losing a vital platform.”[149] One commentator writing in Wired claimed that net neutrality regulations are needed, lest “[I]nternet services would begin to resemble cable-TV packages, where subscriptions could be limited to a few dozen sites and services.”[150]

Note the repetition of the word “could” in each of those claims. Net neutrality fearmongers rely on dystopian speculation about the worst possible actions by broadband providers, and offer little analysis as to why these companies would actively choose to diminish the value of their services and undermine the potential uses to which broadband could be put. It is increasingly inconceivable that a broadband provider would attempt to block even services that compete directly with its own offerings, such as video streaming or telephony. Imagine an Internet service provider (ISP) actually blocking Netflix: its customers would howl. Blocking political speech is even more difficult to imagine—Congress, advocates, and the media would howl—but this doesn’t stop net neutrality activists from dreaming up such implausible nightmares.

It is increasingly inconceivable that a broadband provider would attempt to block even services that compete directly with its own offerings, such as video streaming or telephony.

Free Press is perhaps the worst offender in this regard, claiming, for example, that broadband providers can now “block political opinions they disagree with.”[151] The group has the temerity to claim, “When activists are able to turn out thousands of people in the streets at a moment’s notice, it’s because ISPs aren’t allowed to block their messages or websites.”[152] This argument is likely simply poor sentence construction on the part of Free Press’s authors (one would hope an activist’s ability to turn out thousands “in the streets” has causes other than a non-blocked Facebook event page—maybe a compelling framing of a problem, a unique description of a future a community should work toward, or the fundamental justness of their cause, for starters). But the thrust of the argument remains: but for the strongest possible net neutrality rules, the local cable or telco company will scour the Internet for political speech it does not like and prevent customers from seeing it. This accusation is completely baseless.

Perhaps the prospect of a heavily curated, walled-garden Internet was a legitimate concern 30 years ago, when the Internet was still a nascent technology, bandwidth was scarce, and we were unsure how competitive dynamics would play out (particularly in video delivery). The only real net neutrality violation occurred over 14 years ago, when Madison River, a small, local telephone company, attempted to block Voice over Internet Protocol (VoIP) applications, such as Skype, that competed with its phone business.[153] (The company backed down almost immediately after justified outrage.) Even the famous Comcast/BitTorrent case (Comcast v. FCC) is often misunderstood. Comcast did functionally block BitTorrent uploads for a short period of time, but not for malicious or arbitrary reasons.[154] In reality, the company was trying to resolve severe problems with latency-sensitive applications for customers whose network neighbors were using BitTorrent.[155] Granted, the company’s unilateral and nonpublic attempt to fix the problem backfired, but the BitTorrent CEO acknowledged the problem was caused by the BitTorrent protocol.[156] The problem was ultimately resolved through changes to the BitTorrent protocol.[157]

Today, it is clear ISPs have no interest in actively blocking or degrading traffic. For over a year, no net neutrality regulations have been in place, and still there have been no legitimate net neutrality violations.[158] We all still seem to be able to communicate over the Internet, whether to scroll through recipe GIFs, watch others playing video games, or turn out thousands into the streets—strongly indicating that Free Press’s fears were wildly overblown. Claims by activists that odious net neutrality violations are just around the corner (that broadband providers simply don’t want to inflame the issue during an election year, for example) grow increasingly desperate and absurd the longer we go with no rules in place.

There is real opportunity to craft balanced legislation that gives end users and businesses the confidence to explore the web and scale new Internet offerings without fear of interference.

This isn’t to say, however, that the best net neutrality regime is one with no up-front rules at all. There is real opportunity to craft balanced legislation that gives end users and businesses the confidence to explore the web and scale new Internet offerings without fear of interference—but that doesn’t come with the innovation-chilling and investment-restricting effects of common-carrier classification.[159] The overly strict rules put in place under the Obama administration relied on expansive laws designed for the old telephone network, an explicit monopoly. Legislators should instead craft new rules that recognize the increasingly competitive nature of broadband, rather than treating it as a static utility service.

While the blocking or degrading of speech or any legal content by ISPs is not a legitimate concern, legislation should still bar such practices. Beyond the banning of blocking and throttling, legislation could allow some room for data differentiation that improves performance of real-time, next-generation applications such as augmented reality and robotics control.[160] In any event, the important goal should be a balanced regime that allows for permissionless growth of both broadband networks and the services and communications that run on top of them.

Claim #12: Big Tech Is Biased Against Conservatives

Over the last few years, the view that big Internet companies are biased against conservative voices has grown. In August 2018, President Trump tweeted that Google was suppressing news stories from right-leaning publications in its search results.[161] At a September 2018 congressional hearing, some members of the House Energy and Commerce Committee blasted Twitter CEO Jack Dorsey over allegations that Twitter had an anti-conservative bias.[162] Most recently, it was reported in August 2019 that the White House was developing an executive order to combat alleged anti-conservative bias at social media companies. These policymakers have espoused the increasingly prevalent belief among certain conservatives that online platforms suppress conservative viewpoints by unfairly blocking or “shadow banning” (allowing users to post, but significantly limiting their posts’ visibility to others) conservative users. One reason for the popularity of this claim may be that many of the leaders and workers at these platforms are in fact decidedly liberal in their political orientation. For example, employees of Alphabet’s subsidiaries (e.g., Google) overwhelmingly donate more to Democrats than to Republicans.[163]

Though unfounded, these claims do engender support for policies to regulate online platforms in ways that would harm consumers, businesses, and democratic values alike.[164] Complaints that Facebook, Google, and Twitter have an anti-conservative bias are wrong—or at minimum are significantly overblown. The few attempts to provide evidence of this claim have been shown to lack sufficient data or to rely on flawed analysis.[165] For example, in August 2019, President Trump tweeted that Google’s search algorithm manipulated between 2.6 million and 16 million Americans into voting for Hillary Clinton in the 2016 presidential election.[166] Trump was referencing the July 2019 Senate testimony of psychologist Dr. Robert Epstein, who claims his research demonstrated clear anti-conservative bias in Google’s search engine. Senator Ted Cruz cited this testimony as proof of pervasive anti-conservative bias by Big Tech, appealing to the credibility of Dr. Epstein, who, he claimed, is a Democrat and voted for Hillary Clinton (although Dr. Epstein regularly publishes articles and makes appearances in conservative media such as Breitbart).[167] However, Dr. Epstein’s actual research was based on a study of the search results of just 21 undecided voters in 2017, and all but one of the report’s citations supporting his analysis were of papers and articles written by Dr. Epstein himself.[168] Dr. Epstein has been making such claims for years, arguing in 2015 that Google could be manipulating search results to swing elections—a claim also supported by equally flimsy research.[169] Perhaps unsurprisingly, Dr. Epstein began painting Google as a politically biased bad actor in 2012, after the company started warning users searching for his website that it contained malicious code as a result of a hack (Dr. Epstein threatened to sue Google for not removing the warning, despite not adequately addressing his site’s security, and shortly thereafter began writing articles about the need for Google to be regulated).[170]

While many conservatives genuinely believe online platforms exhibit anti-conservative bias, there is good reason to believe many of those amplifying these arguments are doing so in bad faith. For example, after a musician named Joyce Bartholomew reported that her antiabortion song had been removed from YouTube for a terms-of-service violation, right-leaning websites and Bartholomew herself seized on the episode as an example of anti-conservative bias from the platform, implying the removal was due to the subject matter.[171] However, the actual reason for removal was Bartholomew’s use of bots to artificially inflate the video’s view count, which violates YouTube’s terms of service.[172] And after Facebook made a technical moderation error that reduced the visibility of the Facebook page of conservative commentators Diamond and Silk, right-wing media outlets amplified their claims of persecution on the platform—and the duo even testified before Congress, alleging they had tried to communicate with Facebook for months to resolve the issue, but that Facebook never contacted them.[173] However, messages obtained by conservative commentator Erick Erickson show this to be demonstrably false: Although Facebook did indeed make an enforcement error, it reached out to the pair repeatedly via phone, Facebook Messenger, and multiple email addresses.[174]

Many such examples of alleged anti-conservative bias on social media have perfectly plausible explanations. Yet critics continue pointing to debunked claims as proof that bias is present. This is perhaps why 65 percent of self-described conservatives believe social media companies are censoring conservatives and their ideas, according to a poll from the conservative Media Research Center.[175] Indeed, James Pethokoukis at the conservative-leaning American Enterprise Institute has said the issue of bias on social media platforms has become emotional and political, noting that some right-leaning policymakers are turning it into an “emotional wedge issue” rather than actually making sound arguments for regulation.[176]

If enough lawmakers believe there is bias, they will likely enact regulations that force private companies to alter their platforms to appease those who are convinced the platforms are discriminatory. Federal Communications Commission Chairman Ajit Pai argued as much in September 2018, lamenting that the way digital platforms make decisions about how they present and moderate content is opaque, which requires policymakers to “seriously think about whether the time has come for these companies to abide by new transparency obligations.”[177] And following the September 2018 congressional hearing, the Department of Justice (DOJ) issued a statement that it was convening state attorneys general to discuss concerns these companies are “intentionally stifling the free exchange of ideas on their platforms.”[178] It is wildly inappropriate for the federal government to use the threat of law enforcement to prevent private businesses from exercising their right to determine what types of legal speech they permit on their platforms. Moreover, competitive pressure strongly incentivizes these platforms to provide services that do not exhibit political bias, so DOJ’s concerns are irrelevant.

There is a risk such pressures could lead platforms to consider political leaning when presenting content or returning search results in order to provide a more even balance, rather than the factors their users actually value and that make their services useful, such as timeliness, relevance, and accuracy. That said, the reality is that major Internet platforms are important channels for public communication, and the public needs to trust these platforms are not using the power that comes with that role for political purposes. Many companies are already taking steps to assure more transparency. In August 2019, Facebook published interim results from an audit of the issue it commissioned from former Senator Jon Kyl (R-AZ) and the law firm Covington & Burling.[179] The report did not present any evidence of anti-conservative bias, but did highlight changes Facebook had made to its content policies to cater to those who believed Facebook was suppressing their conservative views. For example, Facebook loosened restrictions on posting shocking or sensational content to allow for antiabortion ads showing infants born prematurely.[180] More recently, Facebook announced rules for an independent oversight board of people with diverse backgrounds, designed as a check on the company’s decision-making about controversial content.[181]

Claim #13: AI Is Inherently Biased

Bias in big data. Automated discrimination. Algorithms that erode civil liberties. These are some of the fears many have expressed about a world that allows AI to make decisions.[182] These concerns have been the subject of endless punditry, multiple congressional hearings, and, most recently, the focus of the Algorithmic Accountability Act of 2019.[183] High-profile stories about biased algorithms unfairly discriminating against women and people of color regularly make the news, keeping this issue in the spotlight. For example, in February 2018, researchers from MIT and Stanford University found that popular commercial facial-analysis systems used to detect, among other things, whether a person in a photo is male or female, had significantly higher error rates for dark-skinned women than light-skinned men.[184] And in October 2018, Amazon stopped developing an experimental hiring system that used AI to vet job applicants, after discovering it was more likely to recommend men.[185]

To be sure, there are several ways AI can make biased or unfair decisions, most notably because of how the data used in AI systems is collected, and there is considerable reason for society to focus on ensuring organizations and individuals make unbiased decisions to the maximum extent possible. Bias can enter machine-learning systems in several ways. Data can be unrepresentative of reality, in which case an AI system trained on it will likely not perform consistently well in the real world; or data can reflect existing real-world biases, causing an AI system to learn and perpetuate those biases in its decision-making.[186] Additionally, bias can be introduced in the preparation of data for AI, such as when an AI user, or “operator,” selects which attributes they want their algorithm to consider. For an AI system that determines creditworthiness, for example, an operator may select attributes such as customer age and income.[187]
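One straightforward procedural check that follows from this discussion is to compare a system’s error rates across demographic groups before relying on its outputs. The minimal sketch below uses made-up predictions and illustrative group labels (“A” and “B”); it is meant only to show the kind of audit an operator can run, not any particular company’s practice.

from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return the share of incorrect predictions for each group."""
    wrong, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        wrong[group] += int(truth != pred)
    return {group: wrong[group] / total[group] for group in total}

# Illustrative audit of a hypothetical screening model's decisions for two applicant groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth outcomes
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]   # model predictions
groups = ["A", "A", "B", "B", "A", "B", "B", "A"]
print(error_rates_by_group(y_true, y_pred, groups))  # {'A': 0.25, 'B': 0.75}

A large gap between groups, as in this toy example, does not by itself prove unlawful discrimination, but it is exactly the kind of signal that should prompt an operator to examine its training data and attribute choices before deployment.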

However, this does not mean the techlash critique of AI bias is fully valid. Indeed, many of the claims about biased AI do not hold up to scrutiny. For example, the American Civil Liberties Union (ACLU) has repeatedly published claims alleging potentially dangerous levels of inaccuracy in commercial facial-recognition technology, but it misleadingly uses a confidence threshold well below the developer’s recommendation and refuses to publish its data, thereby disingenuously portraying the technology as inaccurate and unreliable.[188] Additionally, though the story of Amazon discovering its AI hiring tool was biased against women was touted as evidence of scandalous AI-enabled discrimination by Big Tech, it was actually an example of a technology firm acting responsibly.[189] Amazon trained its system using 10 years’ worth of resumes submitted to the company. Because the patterns in that data indicated men were more likely to be hired, the system learned to associate phrases indicating attendance at an all-women’s college or participation in women’s groups with less competitive applications.[190] But because Amazon had responsible controls in place, its recruiters did not rely solely on the system when making decisions, and the company was able to identify that the system was not performing as desired and ultimately terminated the project. This is an example of good governance, not of dangerous AI running amok and discriminating.

To be sure, there is no question that AI can be biased or unfair. But the fervor surrounding this issue has caused many critics to not think critically about the actual likelihood of widespread biased AI, or how to address these challenges effectively. Rather, they have engendered support for policies that would do little to reduce bias or unfairness. For example, many have expressed support for mandating algorithmic transparency (forcing operators to expose their algorithms and information about their data to some degree of public scrutiny) or algorithmic explainability (making algorithms interpretable to end users, such as by having operators describe how their algorithms work, or by using algorithms capable of articulating the rationales for their decisions), or the creation of a new regulatory body devoted to the oversight of AI.[191] There are many reasons such proposals are flawed, but most fundamentally, they fail to recognize that antidiscrimination laws apply to decisions made by AI—just like they apply to those made by humans. Furthermore, many of these proposals place the responsibility for eliminating bias and unfairness on the developers of AI systems, rather than on their operators, who have much greater control over how these systems are used.[192]

Many of the claims about biased AI do not hold up to scrutiny.

In short, pundits frequently lament that companies will recklessly deploy AI, and appeal to the perceived neutrality of the algorithm to maximize profits at the expense of societal good.[193] However, no matter how loudly commentators argue this point, algorithms do not operate in a vacuum, and are intrinsically and inescapably linked to their operators. If a company values nondiscrimination, it will take steps to ensure it does not rely on AI systems in a way that could cause discrimination. If a company does not care about discrimination, it will simply not take steps to prevent it, regardless of whether it uses AI. Thus, blaming Big Tech for developing AI that could enable or exacerbate bias and unfairness is akin to blaming a farmer for causing food poisoning when a restaurant violates health codes. That said, there are, of course, things government can do to reduce the potential for algorithmic bias to cause harm. First, regulators should encourage AI operators to use a variety of different technical and procedural mechanisms, such as confidence intervals, procedural regularity, and impact assessments, to ensure their algorithms are operating as intended and not causing harm.[194] And second, the federal government should prioritize the development of publicly available authoritative training datasets for high-stakes AI applications, such as facial recognition. Historically, the training data available to developers overwhelmingly consists of white, male faces, causing many facial-analysis systems to underperform for minorities and women.[195] While many U.S. companies developing this technology invest heavily in it for proprietary use, overcoming this challenge should not just be the responsibility of the private sector. By creating publicly available, representative datasets for this purpose, the federal government could accelerate the development of this technology while reducing the potential for it to be biased.[196]

Claim #14: IT Is Making Us Stupid

There has been a long tradition of assuming new technology reduces human mental capabilities. In commenting on the invention of writing, Plato, without irony, wrote that individuals who read “will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.”[197] Today is no different, as pundits claim that because IT lets us do things more easily—speed-dial phone numbers, get directions from our phones, etc.—technology is making us less intelligent. Nicholas Carr, author of The Glass Cage: How Our Computers Are Changing Us, summed up the sentiment, “Google is making us stupid.” Some pundits go so far as to claim use of mapping programs (e.g., Google Maps) boosts Alzheimer’s rates, even as another study shows these programs are a great help to patients with the disease.[198]

But this issue of technologies making life easier for humans, and hence making us mentally or physically lazier, doesn’t just apply to IT. Carr, for example, has complained that the development of the automatic transmission means few people now learn how to use a manual transmission—and that this somehow diminishes them. It is also true that the introduction of the power lawn mower has meant that few people today are skilled at using a scythe. But new technologies not only greatly improve our lives (it is easier to drive a car with an automatic transmission or push an electric lawn mower); they also enable humans to perform new functions and learn new skills, such as using a computer and navigating the Internet.

Some point to computer programs defeating chess masters as an argument that this technology reduces chess skills. But as one analyst wrote, “When Deep Blue beat Garry Kasparov, the world chess champion, in 1997, did human chess players give up trying to compare with machines? Quite the contrary: Humans have used chess programs to improve their game, and as a consequence the level of play in the world has improved.”[199] Likewise, technologies such as AI are likely to make humans smarter by allowing us to focus on the things of most importance and interest. Donald Michie, the dean of British AI research, said AI is a remedy for complexity pollution because it “is about making machines more fathomable and more under control of human beings, not less.”[200] Indeed, by helping to free humans from mundane tasks, while at the same time providing a cornucopia of information resources at a person’s fingertips, IT is providing the opportunity for significant increases in mental acuity and knowledge.

By helping to free humans from mundane tasks, while at the same time providing a cornucopia of information resources at a person’s fingertips, IT is providing the opportunity for significant increases in mental acuity and knowledge.

Tech naysayers also claim humans will let machines make decisions for them, and the result will be dumb outcomes.[201] But we have a host of technologies—such as traffic lights that have replaced traffic police officers waving their arms—that everyone agrees have resulted in better decisions. Going forward, the real question is whether there are fewer errors with computer-aided systems—or in some cases, slightly more errors, but at a significantly lower cost. If there are not, the systems will not be used. Moreover, for many decisions, especially those involving routine processing of information, machines are usually better than humans (who can be tired, biased, or otherwise faulty). This is why behavioral economist Richard Thaler wrote:

Any routine decision-making task—detecting fraud, assessing the severity of a tumor, hiring employees—is done better by a simple statistical model than by a leading expert in the field. So pardon me if I don’t lose sleep worrying about computers taking over the world. Let’s take it one step at a time, and see if people are willing to trust them to make the easy decisions at which they’re already better than humans.[202]

Claim #15: The Tech Industry Does Not Employ Enough Women or Underrepresented Minorities

Criticism of big technology firms’ lack of diversity is widespread. Indeed, one can easily find op-ed headlines such as “There’s a Diversity Problem in the Tech Industry,” and claims that diversity is “Silicon Valley’s Achilles’ Heel.”[203] In March 2019, former Facebook manager Mark Luckie even testified before Congress that a lack of diversity in large technology firms has led to discrimination being “built into the products” of companies.[204] And there have been many accusations that the culture of many tech firms—both large and small—is too often unfriendly toward women and minorities.

Big technology firms can do more to hire and retain diverse candidates, including by ensuring a supportive work environment. However, there is a limit to what they can do on their own. One industry-wide challenge is the lack of diversity among individuals earning degrees in computer science. This issue makes it more difficult for firms, especially small and mid-sized tech companies, to hire women and minorities.

In 2015, women accounted for just 18 percent of students earning a bachelor's degree in computer science in the United States.[205] And they earned only 30 percent and 23 percent of master’s and doctoral degrees in computer science, respectively.[206] Except for a slight increase in the percentage of master’s degree holders who are women, these rates have stayed constant since 2008. In addition, African American and Hispanic students are also underrepresented, with each group accounting for roughly 10 percent of students earning bachelor’s degrees in computer science.[207] Partially as a result, women, African Americans, and Hispanics make up only 19 percent, 4 percent, and 5 percent of software developers, respectively.[208] In comparison, these groups make up 47 percent, 12 percent, and 17 percent of the U.S. workforce, respectively.[209]

Both Hispanics and African Americans are underrepresented when compared with their graduation rates as computer scientists.

For the most part, big technology companies employ more women in technical roles than college graduation rates for computer scientists would suggest. For example, women comprise 23 percent of the individuals working in technology roles at Apple, Facebook, and Google.[210] Moreover, most major technology companies have active programs to hire women and minorities, which has helped increase their diversity. For example, Apple’s percentage of new hires who are underrepresented minorities increased from 21 percent in 2014 to 31 percent in 2018.[211] Similarly, Facebook has increased the percentage of its senior leadership that is female from 23 percent in 2014 to 33 percent in 2019.[212] Since 2014, Apple, Facebook, and Google have also increased the percentage of their technology workforces that are Hispanic or African American. However, both Hispanics and African Americans remain underrepresented when compared with their graduation rates as computer scientists.[213] This reality, as well as complaints from women and minorities about the industry’s culture, suggests firms can still do more to improve their diversity.

Policymakers should also take action. Indeed, much can and should be done to increase the opportunities for women and minorities in technology occupations, beginning with reforms in education. Policymakers should revise secondary-school curricula to ensure the availability of technology classes that focus on core concepts of computer science, while increasing the availability of Advanced Placement (AP) computer science courses.[214] As of 2017, only 22 percent of high schools with AP courses offered AP computer science—and access to these courses has traditionally been concentrated in affluent school districts.[215] Female, African American, and Hispanic students account for 28, 5, and 15 percent of students taking AP computer science exams, respectively.[216] Policymakers should also incentivize local universities to take actions that improve representation. For example, the University of California, Berkeley increased female enrollment in an introductory computer science course by changing its title from “Introduction to Symbolic Programming” to “Beauty and the Joy of Computing.”[217] In addition, Carnegie Mellon University increased the proportion of its computer science students who are female from 7 to 42 percent between 1995 and 2000 by redesigning its admissions criteria, reducing the emphasis placed on prior experience in computing and increasing the emphasis on other factors, such as leadership potential.[218] Finally, policymakers should provide funding for apprenticeship programs that train more women and minorities for computer science roles. For example, data science company Catalyte provides individuals five months of training, after which graduates of its program enter a two-year apprenticeship.[219]

Claim #16: IT Consumes Too Much Energy and Accelerates Climate Change

Increasingly, IT is being implicated in climate change, with claims that it uses massive amounts of energy. Indeed, recent articles have asserted that the tech industry will consume an increasing share of electricity, and thereby accelerate climate change. One researcher from the Chinese tech company Huawei warned in 2017 that a “tsunami of data” could drive Internet-connected devices to consume up to 20 percent of the world’s electricity and emit up to 5.5 percent of global carbon emissions by 2025.[220] In 2018, an article published in Nature Climate Change had the provocative title “Bitcoin emissions alone could push global warming above 2°C.”[221] Research out of the University of Massachusetts Amherst compared the emissions from developing and training natural-language processing software—the software that helps Amazon’s Alexa understand what you’re saying, and enables machine translation between languages—with the emissions of 315 roundtrip flights between New York and San Francisco.[222] And recently, an op-ed in The Guardian went so far as to claim the problem is so bad that “[t]o decarbonize, we need to decomputerize.”[223]

Such hyperbolic claims harken back to the late 1990s and early fears about the Internet. In 1999, Peter Huber and Mark Mills made a widely cited prediction, based on faulty assumptions about energy consumption in computers and servers, that “half of the electric grid will be powering the digital-Internet economy within the next decade.”[224] More recently, researchers from the Lawrence Berkeley National Laboratory found that, as of 2014, U.S. data centers accounted for only 1.8 percent of U.S. electricity consumption—a figure that has remained essentially flat since 2008 despite strong growth in data center services.[225] In fact, the International Energy Agency lists data centers and networks as one of only seven key sectors—out of a total of 45 critical energy technologies and sectors—that is “on track” to meet its greenhouse gas (GHG) emissions goals and limit global warming to 2 degrees Celsius.[226]

Doomsday predictions focus on rapid growth in IT use while not anticipating commensurate improvements in energy efficiency. But like Moore’s Law for computing power, computing efficiency—the number of computations that can be performed per kilowatt-hour of electricity—has doubled every 1.5 years.[227] Rapid efficiency improvements, combined with short lifespans and quick turnover in devices and equipment, have prevented significant growth in IT-based energy consumption. In some cases, energy-efficiency advances have proceeded even more quickly. The purpose-built integrated circuits used to mine Bitcoin today are around one million times more energy efficient than the central processing units used in 2009.[228] Other companies, such as venture-backed Syntiant, have shown that specialized AI processors can be 100 times more energy efficient than conventional processors.[229]
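To see what that doubling rate implies, a simple back-of-the-envelope calculation (an illustration, not a figure from the studies cited above) gives the cumulative gain over $t$ years:

$$\text{efficiency gain} = 2^{t/1.5}, \qquad \text{so over a decade: } 2^{10/1.5} \approx 100\times \text{ more computations per kilowatt-hour.}$$

This helps explain why data center electricity use has stayed roughly flat even as the volume of computing has grown sharply.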

Like Moore’s Law for computing power, computing efficiency—the number of computations that can be performed per kilowatt-hour of electricity—has doubled every 1.5 years.

Moreover, virtually all claims made about IT energy consumption omit consideration of the energy-intensive physical activities IT enables us to forego. For example, the aforementioned study makes an apples-to-oranges comparison of natural language processing (NLP) software to transcontinental flights, and fails to account for the time, energy, and other resources saved by using NLP software. A more useful approach would be to compare the resources (time, energy, staff, and other inputs) required to perform a task (e.g., translate text from English to another language) using NLP versus hiring a translator.

In fact, IT is at the heart of many solutions that reduce fossil-fuel consumption, such as telework and teleconferencing. The 2017 State of Telecommuting in the U.S. Employee Workforce report found that 3.9 million U.S. employees, or 2.9 percent of the total U.S. workforce, work from home at least half the time, resulting in 7.8 billion vehicle miles not traveled, 19.6 million barrels of oil not consumed, and 3 million metric tons of GHG emissions avoided.[230] Similarly, video streaming provides a less-energy-intensive alternative to the manufacturing and shipping of DVDs. A recent lifecycle assessment found that shifting all DVD viewing to video streaming would reduce primary energy usage by 30 petajoules (equivalent to 8.3 billion kilowatt-hours, or enough electricity to power 200,000 U.S. households for a year) and would reduce GHG emissions by 1.9 million metric tons of carbon dioxide.[231] Other studies show e-commerce reduces lifecycle GHG emissions by replacing consumer trips to the store with optimized parcel delivery.[232]
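For readers who want to check the units in the streaming example, the petajoule-to-kilowatt-hour conversion works out as follows (using 1 kWh = 3.6 megajoules):

$$30\ \text{PJ} = \frac{30 \times 10^{15}\ \text{J}}{3.6 \times 10^{6}\ \text{J/kWh}} \approx 8.3 \times 10^{9}\ \text{kWh},$$

which matches the 8.3 billion kilowatt-hours cited in the study.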

Additionally, tech companies are often at the forefront of commitments to purchase clean energy. In 2017, Microsoft worked out a deal with its local electricity utility and Washington state regulators to withdraw from the utility’s service territory so that it could purchase cleaner electricity directly from open power markets.[233] Greenpeace has identified 20 Internet companies—including Facebook, Apple, and Google—that have made 100-percent-renewable-energy commitments. In fact, Google announced in 2017 that it had already met its goal of purchasing enough renewable energy to meet 100 percent of its global annual electricity use.[234] And more than 50 of the world’s largest tech companies are members of Green Grid, an industry consortium that works to improve IT and data-center energy efficiency by developing energy-use metrics and setting efficiency standards.[235]

Past performance is no guarantee of future improvements, but future trends look promising. For example, 4G networks, which are around 50 times more energy efficient than 2G, are going to be replaced by 5G networks—which are expected to be around 10 times more energy efficient than 4G.[236] And the data center market, which is growing rapidly in the Asia-Pacific region, could reduce its energy demand by around 15 percent by 2020 by adopting improved management practices modeled after those used in the United States.[237] None of this is to say policy cannot or should not play a supportive role. Policymakers around the world should introduce carbon pricing.[238] And programs such as ENERGY STAR for data centers help the IT industry track and improve its energy performance.[239]

Economic Issues

Claim #17: Tech Companies Don’t Pay Their Fair Share of Taxes

One core component of techlash complaints is that Big Tech means “small tax.” In other words, big technology firms, particularly the largest Internet companies, are undertaxed. European Commissioner for Competition Margrethe Vestager has made the point that “we insist that all firms—also the digital giants—pay their fair share of taxes.”[240] Similarly, France’s Finance Minister Bruno Le Maire justified his country’s recent implementation of a digital services tax aimed primarily at American Internet giants by insisting “[t]hese giants use your personal data and make significant profit from it, without paying their fair share of tax.”[241] Some in the United States also make this claim. During a recent Democratic Party presidential debate, Senator Bernie Sanders (I-VT) claimed that Amazon did not pay any federal taxes.[242]

The European Commission attempted to give credence to this belief by alleging that domestic digital companies pay an effective tax rate of only 8.5 percent, compared with 23.2 percent paid under the “traditional international business model.”[243] However, the Commission’s own Regulatory Scrutiny Board found the analysis had “significant shortcomings,” concluding that the Commission’s argument did “not show the urgency for the EU to act, before global progress is achieved at the OECD/G20 level.”[244]

Moreover, at least two studies have shown that, even before the Organization for Economic Cooperation and Development’s (OECD) recent reforms, large digital companies paid higher effective tax rates (the ratio of total global taxes to their profits) than their peers in more traditional industries. One study shows foreign digital companies often pay far more in taxes than many large and well-known traditional companies based in the EU.[245] Another pointed out that digital companies often benefit from tax provisions meant to encourage research and development expenditures, which benefit society as much as, if not more than, the companies that conduct the research.[246] Tech companies also tend to rely on equity funding, which raises their effective tax rates because, unlike interest on debt, returns to equity are not tax deductible.

A major reason for these complaints is that tech companies have large intangible assets and often sell services over the Internet. It is much easier to transfer the location of intangible assets to low-tax jurisdictions, and much harder to objectively value their worth. And the sale of services allows a company to reach customers in another country without needing a permanent establishment there, thus avoiding local corporate taxes. However, in response to perceived problems with how companies arrange their international revenues to minimize taxes, the OECD recently agreed to a number of major reforms to reduce what is known as “base erosion” and profit shifting, with the goal of making sure tax transactions reflect economic reality. As a result, many companies have restructured their operations, abandoning some low-tax jurisdictions.[247] Within Europe, this dynamic is particularly pronounced, as a number of countries, including Ireland and Luxembourg, have lowered corporate taxes in an attempt to increase domestic investment within their borders, and companies located there have access to the entire European Union. Rather than respond to this tax competition, other countries, including France and Germany, are attempting to force companies to declare a larger portion of profits in their jurisdictions.[248]

More recently, some nations, including many in Europe, are threatening digital services taxes on the largest tech companies. The idea behind them is that users located in these countries create a large portion of the value behind the companies’ assets, and therefore a proportional amount of the profits derived from those assets should be subject to corporate tax there. However, this rationale is faulty.[249] The current international tax system generally assigns tax liability to where value is created, not to where the customers happen to be. In the case of services sold over the Internet, very little of the value creation needs to occur in the countries where the product is sold. In such cases, the value is taxed in the home country, often the United States, and not in the customers’ countries. Any reform of international rules that directs more taxable profits to countries where the consumer resides should be negotiated at the OECD. Unilateral changes that violate international trade agreements should be opposed.

Claim #18: IT Is Destroying Jobs

Despite both U.S. labor productivity growth and the unemployment rate hovering near all-time lows, the techlash blames technology for eliminating jobs. A much-ballyhooed 2013 study by Oxford University researchers Carl Benedikt Frey and Michael Osborne set the tone when it trumpeted the jarring conclusion that 47 percent of U.S. employment was at risk of job loss from new technology.[250] Silicon Valley gadfly Vivek Wadhwa has predicted that 80 to 90 percent of jobs will be eliminated by the end of the next decade. Even coverage of a World Economic Forum study that predicted net job gains framed the issue in terms of destruction: “Emerging Tech Will Create More Jobs Than It Kills by 2022, World Economic Forum Predicts.”[251] And a January 2019 Houston Chronicle op-ed warns that “Automation could hollow out the American workforce.”[252] In short, tech is being implicated in causing massive unemployment that will breed Dickensian conditions requiring virtually everyone to be on the government dole (a.k.a., “universal basic income”).

In response, a host of commentators have called for slowing the pace of technological change, or even putting on the brakes. British Labour Party Leader Jeremy Corbyn, Microsoft founder Bill Gates, and San Francisco City Supervisor Jane Kim have all called for a tax on robots.[253] New York Mayor and presidential candidate Bill de Blasio has even called for a Federal Automation and Worker Protection Agency, from which companies would be required to get a permit in order to automate.[254]

To be sure, the economic evidence is clear that IT plays an important role in driving productivity growth.[255] But that is a good thing, as productivity is what enables societies to boost per capita income.[256] Moreover, the idea that technology will lead to fewer jobs is simply not borne out by the evidence.[257] While for hundreds of years technology has eliminated jobs (e.g., buggy-whip makers), it has also created new jobs (e.g., automobile mechanics) and boosted living standards, which have resulted in more demand for workers doing existing tasks (building houses, educating people, selling goods, etc.). If technology had not eliminated certain jobs (e.g., in farming), our living standards would be no higher than those of 19th-century Americans.

In addition, America’s most productive years have been followed by the years of lowest unemployment. The McKinsey Global Institute looked at annual employment and productivity change from 1929 to 2009 and found that increases in productivity were correlated with increases—not declines—in subsequent employment growth.[258]

The techlash framing suffers from what economists call the “lump of labor fallacy”: the idea that there is a limited amount of work to be done, and if a job is eliminated, it’s gone for good. But this is a false reading of the process of technological change because it fails to include second-order effects whereby the savings from increased productivity are recycled into the economy in the form of higher wages, higher profits, and reduced prices to create new demand that in turn creates other jobs. This is why most scholarly studies find no negative effect on employment—and some have even found a positive relationship, with increases in productivity leading to more jobs. An OECD study sums it up, “Historically, the income-generating effects of new technologies have proved more powerful than the labor-displacing effects: technological progress has been accompanied not only by higher output and productivity, but also by higher overall employment.”[259]

While technology is the key driver of increased incomes, it does not mean we don’t have enough jobs. We can rest easy that in 25 years the unemployment and labor force participation rates will be similar to today’s.

Moreover, these apocalyptic estimates of job loss have been shown to be significantly overstated. As McKinsey concluded, “Very few occupations will be automated in their entirety in the near or medium term. Rather, certain activities are more likely to be automated, requiring entire business processes to be transformed, and jobs performed by people to be redefined.”[260] In other words, technology will lead more to job redefinitions and opportunities to add more value than to outright job destruction. Moreover, in many cases, technology creates jobs. Many studies show firms that adopt robots end up creating more jobs, in part because they gain market share.[261]

So while technology is the key driver of increased incomes, it does not mean we don’t have enough jobs. We can rest easy that in 25 years the unemployment and labor force participation rates will be similar to today’s.

Claim #19: Tech Is Reducing Labor’s Share of Income[262]

Tech is also accused of immiserating workers, in particular by reducing workers’ share of national income. For example, in an article for Fortune, Geoff Colvin argued that automation has caused the decline in labor’s share of national income.[263] Carl Frey (the coauthor of the famous 47-percent-job-loss study) agreed, saying technology has enabled corporate profits to rise at the expense of labor.[264] Allegedly, this is because, over the past decade, most automation has taken over tasks workers used to perform, whereas relatively little automation has created new tasks that open up opportunities for humans to do new, more productive work.

There are two problems with this argument. First, a large body of economic evidence shows capital investment, including automation, raises national income over the medium term. Although automation can displace workers, this effect has always resulted in higher incomes, lower prices, and more choices for the majority of society.

Second, Colvin, Frey, and many others simply assume that because the labor share of income has gone down, automation must be the culprit. But when we look at net income, labor’s share has remained roughly constant at around 70 percent since the federal government started collecting this data in the 1930s. Moreover, the relatively small recent decline is almost entirely explained by the rise in housing costs. From 2006 to 2017, labor’s share of income fell by 0.25 percentage points, while the share going to profits was unchanged. Rental income (which includes both actual rents and imputed rents to homeowners), however, rose by 2.4 percentage points. The rise in rental income, in turn, was due mainly to local restrictions on building more housing, not to automation.

In summary, automation has always delivered vast benefits to society. Even in the midst of recessions, no one proposes doing away with existing automation in order to create jobs. Just as we would not ban backhoes and mandate workers use only shovels, we should embrace, not resist, the continued march of other forms of automation.

Claim #20: Tech Increases Income Inequality

Some skeptics also argue that technology is increasing income inequality and, in the future, could lead to the immiseration of millions of workers, absent a universal basic income.

But there is little evidence or logic to believe increased automation—from robots, AI, or any other new tool—will lead to an increase in inequality. As the Economic Policy Institute found, inequality did not increase as a result of jobs in middle-wage occupations being eliminated by productivity gains.[265] Rather, virtually all of the increase in inequality occurred within occupations, with some individuals making winner-take-all incomes at the expense of other workers in the same occupation. In short, inequality is not caused by robots; it is caused by a small group at the very top, the 0.1 percent, gaining an increasing share of national income.

Some believe that, going forward, technology will boost inequality, on the assumption that the lion’s share of the savings from automation technology will be captured by “capital,” with little going to labor in the form of either higher wages or lower prices. This is not only illogical; history suggests it is wrong. The only way capitalists can capture the majority of the gains from automation is if there is limited competition in the market, allowing them to keep most or all of the savings as profits. If that were true, then why, over the last 40 years, as labor productivity has more than doubled, has the share of income going to corporate profits remained essentially the same? The answer is that competitive markets limit the ability of companies to capture most of the gains from productivity as profits, especially over the medium to long term. Moreover, no one has made a convincing case that there is anything about the next production system that would lead to massive monopolization of the global economy in virtually all sectors. Competition, especially backed up by national antitrust authorities, is not likely to die.

Others claim inequality will increase because unemployment will rise. But productivity gains lead to lower prices, which increase demand and thereby restore demand for labor. This is why, as the U.S. Bureau of Labor Statistics has found, when firms reduce costs through automation, those savings raise wages, lower prices, or both.[266] Likewise, Graetz and Michaels, in a review of the economic impact of industrial robots across 17 countries, found that robots increase wages while having no significant effect on total hours worked.[267]

Still others look at wealthy tech CEOs and argue the industry is boosting inequality. To be sure, compensation of CEOs, tech or otherwise, is too high, and a higher marginal income tax rate would be a welcome tool to help reduce after-tax inequality. However, it’s important to note that when it comes to the one-percenters, tech is under-represented. As Gallup economist Jonathan Rothwell noted, “There are five times as many top 1 percent workers in dental services as in software services.”[268] Likewise, Steven N. Kaplan and Joshua Rauh found, “In 2004, the 25 highest paid hedge fund managers combined earned more than all five hundred S&P 500 CEOs combined.”[269]

Claim #21: Tech Is Creating Monopolies[270]

Perhaps the most commonly cited techlash complaint is that tech companies are monopolies and that this is hurting the economy. Tim Wu, author of The Curse of Bigness, wrote that Facebook is the poster child for the curse of bigness, Google destroys all competitors, and Amazon will be the only company selling online.[271] Barry Lynn, executive director of the Open Markets Institute, stated, “The world is going to be better off after we break up these [tech] companies.”[272] Robert VerBruggen, in the title of his article for the conservative National Review, called Google, Facebook, and Amazon “Our Digital Overlords.”[273]

These kinds of claims have led some elected officials to want to take action. Democratic FTC Commissioner Rohit Chopra stated, “We actually have to take a hard look at whether these behemoths are killing off innovation and competition.”[274] Sen. Elizabeth Warren (D-MA) has campaigned for the Democratic presidential nomination on a pledge to “break up” Big Tech.[275]

To be sure, many tech firms are large, and have earned significant market shares. But big firms, tech or otherwise, create enormous benefits that are too often overlooked, including higher-wage jobs with better benefits than small companies, more exports, and more innovation.[276] Moreover, when it comes to tech, big firms are big precisely because scale holds the key to maximizing consumer welfare. As the Obama administration’s Council of Economic Advisers noted, “Some newer technology markets are also characterized by network effects, with large positive spillovers from having many consumers use the same product. Markets in which network effects are important, such as social media sites, may come to be dominated by one firm.”[277]

Plus, if advocates are going to make charges of monopoly, they should at least correctly define the relevant market. For free, ad-supported services, that market is advertising—and here digital leaders have comparatively little power. Consider that Google and Facebook together hold just 25 percent of the ad market. No self-respecting antitrust economist would call such a market anything but competitive, especially as eyeballs could wander (and profits erode) when the next shiny new thing appears. Meanwhile, for many tech companies that make money by selling products and services, such as Amazon, prices are low and convenience is high—a big reason their market shares have grown.
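
One rough way to see this is the Herfindahl-Hirschman Index (HHI) antitrust agencies use to gauge market concentration. The Python sketch below takes the 25 percent combined share cited above as given; the 15/10 split between the two firms and the assumption that the remaining 75 percent of the market is fragmented among many small players are hypothetical, for illustration only.

# HHI = sum of squared market shares (expressed in percent).
leaders = [15.0, 10.0]   # hypothetical split of the combined 25 percent share
fringe = [1.0] * 75      # remaining 75 percent assumed spread across many small firms

hhi = sum(share ** 2 for share in leaders + fringe)
print(round(hhi))        # ~400, far below the 1,500 level U.S. antitrust agencies
                         # treat as even "moderately concentrated"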

Moreover, Big Tech is vulnerable to competition, whether from adjacent markets, new entrants, or foreign competitors. As antitrust experts Carl Shapiro and Hal Varian put it, “The information economy is populated by temporary, or fragile, monopolies. Hardware and software firms vie for dominance, knowing that today’s leading technology or architecture will, more likely than not, be toppled in short order by an upstart with superior technology.”[278]

It is all too easy to forget that erstwhile tech giants such as IBM, Dell, and Microsoft were once seen as near invincible—and today’s giants are even more vulnerable. Why? Because compared with past technology leaders, there is much less to keep customers from switching when a more compelling innovation emerges, and because they face formidable foreign competitors, many of which, at least in the case of China, are backed by their home governments.

Rather than worry about hypothetical harms, governments should let consumers reap the windfall of the gains Big Tech companies are creating today. Most won’t be dominant long enough for any downsides to materialize anyway.

Claim #22: Big Tech Is Hurting Start-Ups

We constantly hear the refrain that tech companies are hurting start-ups. New York Times economic columnist Eduardo Porter wrote that the decline in start-ups “is all about the decline of competition.”[279] This echoes antitrust crusaders Barry Lynn and Lina Khan, who’ve argued, “The single biggest factor driving down entrepreneurship is precisely the radical concentration of power we have seen not only in the banking industry but throughout the U.S. economy over the last 30 years.”[280] For many, tech concentration is at the heart of the problem.

But in fact, there is little to no relationship between the growth of industry concentration and the rate of change in start-ups. For example, in the catchall industry sector the Census calls “other services” (which covers everything from equipment and machine repair to personal care), start-ups fell by 24 percent from 2003 to 2011, with the biggest 8 firms in the industry actually losing market share over the same period. Meanwhile, in “wholesale trade” and “arts, entertainment and recreation,” start-ups declined 16 percent and 14 percent, respectively—but there were no changes in the market shares held by the sectors’ biggest 8 firms.[281]

To be sure, in some sectors where technology has meant larger firms can more efficiently serve the market, start-ups have fallen. We see this particularly in retail, where start-ups fell 16 percent. But this was not because large firms abused their market power. Rather, technologies such as software-enabled logistics systems and web-based e-commerce enabled the average retail firm to get larger, meaning there was less market space for start-ups that lacked something truly unique to offer. Why open a local hardware store when stores such as Home Depot and Lowe’s are so ubiquitous and offer much lower prices and vastly more choice?

Some, such as Ian Hathaway, argue Big Tech is so dominant that “most VCs won’t touch start-ups operating anywhere near these companies’ orbits, a phenomenon that is apparently so common it’s been given a nickname: kill-zones.”[282] However, a study by Oliver Wyman, funded by Facebook, found that the presence of Facebook, Google, and Amazon has no negative impact on the venture capital (VC) market in tech sectors.[283] In fact, it found that VC investment in the tech sector is growing faster than in most other sectors. Hathaway rightly noted that a more accurate assessment would look at VC investments only in the narrower tech sectors wherein Facebook, Google, and Amazon operate: Internet software, social/platform software, and Internet retail. Measured that way, he concluded, the tech giants have had a negative impact on VC investment in all three areas “in recent years” (i.e., the last three).[284] But “recent years” is key, because over the last eight years, two of the three sectors (social/platform software and Internet retail) saw VC investment grow at or nearly at the rate of overall VC investment.

But more importantly, this critique fetishizes VC investment and start-ups. Few complained after the Great Depression that, compared with the 1910s and 1920s, automobile-sector start-ups had declined precipitously. By the 1930s, it made little sense to invest in new automobile companies when it was clear the technology system (the internal combustion engine) and major players (American Motors, Chrysler, Ford, and GM) had already been established. VC funding in this industry would have represented a waste of societal resources. Today is no different. The technology and business models for search, social networks, and Internet retailing are relatively mature; society is better off if entrepreneurs and venture capitalists focus on other areas. Indeed, to the extent investors are focusing their capital outside a few areas wherein large firms have established positions in somewhat mature technologies, that is arguably a good thing, because it means there is more capital for other promising areas. Hathaway, in fact, acknowledged this possibility, noting that “venture capital investment may have increased in non-tech sectors too, so that the tech giants have simply diverted the flow of capital to other areas.” If so, this is a plus, not a negative.

In short, the point of venture capital and entrepreneurship is to find new opportunities to support high-growth innovation-based start-ups—and when we look there, things are healthy. When MIT professors Jorge Guzman and Scott Stern looked at trends in tech-based, high-growth entrepreneurship for 15 large states from 1988 to 2014, they found that even after controlling for the size of the U.S. economy, the second-highest rate of high-growth entrepreneurship occurred in 2014.[285] And when ITIF examined data on more than 5 million technology-based start-ups in the United States, it found the number had grown 47 percent over the last decade. It also found that from 2007 to 2015, software start-ups increased 20 percent. There were more software firms in 2016 than in 2007. And the 5-year survival rate in 2011 was 17 percentage points higher than in 1999.[286] In short, there is no evidence Big Tech has hurt tech innovation in start-ups.

Getting to a New Acceptance: Not Tech as Savior, Not Tech as Enemy, but Tech as a Valuable Tool

We should not go back to the naïve utopian era of IT as savior. We should instead critically examine the impact of new technology to help maximize its value and limit harms. As IT has matured and new innovations emerge, new issues have arisen—as they have historically with all technologies. When the automobile was first developed, the general feeling was one of excitement: Finally, humans had much better transportation options. But issues of safety, pollution, and congestion arose. The answer was not, at least for most Americans, to demonize the auto and the “big three” automakers; it was to call for the appropriate policy responses to address the problems (pollution control, safer cars and roads, etc.) while still enabling auto industry competitiveness, innovation, and use. Going forward, that should be the model for tech, wherein policymakers understand that most Americans see tech as an integral and valuable part of their lives, and want continued innovation and improvements—but that where there are challenges and issues, government acts appropriately in ways that address the challenges with the least possible harm to U.S. competitiveness, innovation, or consumer welfare.

About the Authors

Robert D. Atkinson is the founder and president of ITIF.

Doug Brake directs ITIF’s work on broadband and spectrum policy.

Daniel Castro is vice president at ITIF and director of ITIF's Center for Data Innovation.

Colin Cunliff is a senior policy analyst at ITIF focused on clean energy innovation.

Joe Kennedy is a senior fellow at ITIF focused on economic policy.

Michael McLaughlin is a research analyst at ITIF.

Alan McQuinn was a senior policy analyst at ITIF.

Josh New was a senior policy analyst at ITIF’s Center for Data Innovation.

About ITIF

The Information Technology and Innovation Foundation (ITIF) is an independent 501(c)(3) nonprofit, nonpartisan research and educational institute that has been recognized repeatedly as the world’s leading think tank for science and technology policy. Its mission is to formulate, evaluate, and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress. For more information, visit itif.org/about.

Endnotes

[1].       Brad Smith, Tools and Weapons, (New York: Penguin Press, 2019).

[2].       Doug Allen and Daniel Castro, “Why So Sad? A Look at the Change in Tone of Technology Reporting From 1986 to 2013” (Information Technology and Innovation Foundation, February 2017), http://www2.itif.org/2017-why-so-sad.pdf.

[3].       Gadi Wolfsfeld, Elad Segev, and Tamir Sheafer, “Social Media and the Arab Spring: Politics Come First,” The International Journal of Press/Politics, 18(2) 115-137 (2013), DOI: 10.1177/1940161212471716;

           Sahar Khamis, Paul B. Gold, and Katherine Vaughn, “Beyond Egypt’s ‘Facebook Revolution’ and Syria’s ‘YouTube Uprising:’ Comparing Political Contexts, Actors and Communication Strategies,” Arab Media & Society, March 28, 2012, https://www.arabmediasociety.com/beyond-egypts-facebook-revolution-and-syrias-youtube-uprising-comparing-political-contexts-actors-and-communication-strategies/.

[4].       Josh Halliday and Matthew Weaver, “Facebook’s Mark Zuckerberg named Time magazine’s person of the year,” The Guardian, December 15, 2010, https://www.theguardian.com/technology/2010/dec/15/mark-zuckerberg-time-person-of-the-year.

[5].       Farhad Manjoo, “How Netflix is Killing Piracy,” Slate, July 26, 2011, https://slate.com/technology/2011/07/netflix-streaming-is-killing-piracy.html.

[6].       Steve Kovach, “How To Use Europe’s Amazing Free Music Service Spotify In the US,” Business Insider, January 4, 2011, https://www.businessinsider.com/how-to-use-spotify-in-the-us-2011-1?r=US&IR=T.

[7].       Nicholas Carlson, “Google: Where a Genius Feels Average,” Business Insider, February 1, 2010, https://www.businessinsider.com/google-where-a-genius-feels-average-2010-2?r=US&IR=T; Christopher Null, “Twenty tech geniuses that changed the world,” itbusiness.ca, May 21, 2008, https://www.itbusiness.ca/news/twenty-tech-geniuses-that-changed-the-world/2241.

[8].       Mark Memmott, “The Word For Steve Jobs: Visionary,” National Public Radio, October 6, 2011, https://www.npr.org/sections/thetwo-way/2011/10/06/141105015/the-word-for-steve-jobs-visionary?t=1566478135752; “Apple’s ‘magical’ iPhone unveiled,” BBC News, January 9, 2007, http://news.bbc.co.uk/2/hi/technology/6246063.stm.

[9].       Purvaja Sawant, “E-shopping made easy,” Times of India, October 24, 2014, https://timesofindia.indiatimes.com/life-style/home-garden/E-shopping-made-easy/articleshow/39545171.cms.

[10].     Douglas Belkin and Caroline Porter, “Job Market Embraces Massive Online Courses,” The Wall Street Journal, September 26, 2013, https://www.wsj.com/articles/no-headline-available-1380222900.

[11].     Coy Christmas, “Living in the Post-Internet World: How Technology has Liberated Us from the Network,” Business 2 Community, August 25, 2015, https://www.business2community.com/tech-gadgets/living-post-internet-world-technology-liberated-us-network-01311600.

[12].     Allen and Castro, “Why So Sad? A Look at the Change in Tone of Technology Reporting From 1986 to 2013.”

[13].     Scott Galloway, “Silicon Valley’s Tax-Avoiding, Job-Killing, Soul-Sucking Machine,” Esquire, February 8, 2018, https://www.esquire.com/news-politics/a15895746/bust-big-tech-silicon-valley/.

[14].     Robert D. Atkinson, “Stick to cars and rockets, Elon,” Fox Business, November 27, 2018, https://www.foxbusiness.com/business-leaders/stick-to-cars-and-rockets-elon.

[15].     Kevin J. Delaney, “The robots that take your job should pay taxes, says Bill Gates,” Quartz, February 17, 2017, https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes/.

[16].     Carl Benedikt Frey and Michael A. Osborne, “The Future of Employment: How Susceptible are Jobs to Computerisation,” Oxford University, September 17, 2013, https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf.

[17].     Rob Walker, “There Is No Tech Backlash,” The New York Times, September 14, 2019, https://www.nytimes.com/2019/09/14/opinion/tech-backlash.html.

[18].     “Social Media Fact Sheet,” Pew Research Center, June 12, 2019, https://www.pewinternet.org/fact-sheet/social-media/.

[19].     Carrol Doherty and Jocelyn Kiley, “Americans have become much less positive about tech companies’ impact on the U.S.,” Pew Research Center, July 29, 2019, https://www.pewresearch.org/fact-tank/2019/07/29/americans-have-become-much-less-positive-about-tech-companies-impact-on-the-u-s/.

[20].     Ibid.

[22].     Ibid.

[23].     Sanjay Nair, “Trust in Tech is Wavering and Companies Must Act,” Edelman, April 8, 2019, https://www.edelman.com/research/2019-trust-tech-wavering-companies-must-act.

[24].     Justin McCarthy, “Big Pharma Sinks to the Bottom of the U.S. Industry Ranking,” Gallup, September 3, 2019, https://news.gallup.com/poll/266060/big-pharma-sinks-bottom-industry-rankings.aspx.

[25].     Kim Hart, “Exclusive, Public Wants Big Tech Regulated,” Axios, February 28, 2018, https://www.axios.com/axios-surveymonkey-public-wants-big-tech-regulated-5f60af4b-4faa-4f45-bc45-018c5d2b360f.html.

[26].     Ibid.

[27].     Sam Sabin, “In Washington, Cracking Down on Big Tech Is Popular. In the Rest of the U.S., Not So Much,” Morning Consult, September 18, 2019, https://morningconsult.com/2019/09/18/in-washington-cracking-down-on-big-tech-is-popular-in-the-rest-of-u-s-not-so-much/.

[28].     Allen and Castro, “Why So Sad? A Look at the Change in Tone of Technology Reporting From 1986 to 2013.”

[29].     Robert B. Reich, “Big Tech Has Become Way Too Powerful,” The New York Times, September 19, 2015, https://www.nytimes.com/2015/09/20/opinion/is-big-tech-too-powerful-ask-google.html.

[30].     Robert VerBruggen, “Google, Facebook, Amazon: Our Digital Overlords,” National Review, December 12, 2017, https://www.nationalreview.com/2017/12/google-facebook-amazon-big-tech-becoming-problem/.

[31].     Eleanor Clift, “Bill Galston and Bill Kristol’s New Center Project Takes Aim at the Tech Oligarchs,” Daily Beast, September 11, 2017, https://www.thedailybeast.com/bill-galston-and-bill-kristols-new-center-project-takes-aim-at-the-tech-oligarchs.

[32].     One British author, who it should be noted is not an expert in neurology, claimed—presumably to help promote his book sales—that the increased use of mapping apps, such as Google Maps, would cause a worldwide increase in early onset of Alzheimer’s because we would no longer be adequately exercising our brain as we did when we had to navigate by paper map. (Sarah Knapton, “Google Maps Increases Risk of Developing Alzheimer’s, Expert Warns,” The Telegraph, May 29, 2019, https://www.telegraph.co.uk/science/2019/05/29/google-maps-increases-risk-developingalzheimers-expert-warns/.) The U.S. Alzheimer’s Association has not made this case and could find no research to support it (email exchange with the Association, September, 2019).

[33].     The Pessimists Archive, https://pessimists.co/.

[34].     “Red Flag Traffic Laws,” Wikipedia, September 2, 2019, https://en.wikipedia.org/wiki/Red_flag_traffic_laws.

[35].     Jacques Bughin et al., “Notes from the AI frontier: Modeling the impact of AI on the world economy,” McKinsey Global Institute, September 2018, https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy.

[36].     Bitkom, “Bitkom draws mixed annual balance sheet for DS-GVO,” news release, May 16, 2019, https://www.bitkom.org/Presse/Presseinformation/Bitkom-zieht-gemischte-Jahresbilanz-zur-DS-GVO.

[37].     Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Profile Books: London, 2018).

[38].     John Naughton, “'The goal is to automate us': welcome to the age of surveillance capitalism,” The Guardian, January 20, 2019, https://www.theguardian.com/technology/2019/jan/20/shoshana-zuboff-age-of-surveillance-capitalism-google-facebook.

[39].     Ben Gilbert, “The #DeleteFacebook Movement is a Strong Reminder that None of These 'Free' Services Are Truly Free,” Business Insider, March 26, 2018, accessed September 16, 2019, https://www.businessinsider.com/facebook-free-services-deletefacebook-2018-3.

[40].     Alan McQuinn and Daniel Castro, “A Grand Bargain on Data Privacy Legislation for America” (Information Technology and Innovation Foundation, January 2019), http://www2.itif.org/2019-grand-bargain-privacy.pdf.

[42].     Farhad Manjoo, “I Visited 47 Sites. Hundreds of Trackers Followed Me.” The New York Times, 2019, accessed September 17, 2019, https://www.nytimes.com/interactive/2019/08/23/opinion/data-internet-privacy-tracking.html.

[43].     “Privacy Fundamentalism,” Stratechery, August 27, 2019, https://stratechery.com/2019/privacy-fundamentalism/.

[44].     Daniel Castro and Michael McLaughlin, “Survey: Few Americans Willing to Pay for Privacy,” Center for Data Innovation, January 16, 2019, accessed September 17, 2019, https://www.datainnovation.org/2019/01/survey-few-americans-willing-to-pay-for-privacy/.

[45].     “DuckDuckGo,” accessed September 17, 2019, https://duckduckgo.com/.

[46].     What Information Do Data Brokers Have on Consumers, and How Do They Use It?, testimony of Pam Dixon, executive director, World Privacy Forum, before the Senate Committee on Commerce, Science, and Transportation, December 18, 2013, http://www.worldprivacyforum.org/wp-content/uploads/2013/12/WPF_PamDixon_CongressionalTestimony_DataBrokers_2013_fs.pdf.

[47].     McQuinn and Castro, “A Grand Bargain on Data Privacy Legislation for America.”

[48].     Ibid.

[49].     Daniel Castro and Alan McQuinn, “No, internet companies shouldn’t have to pay you for your data,” The Sacramento Bee, March 14, 2019, https://www.sacbee.com/opinion/op-ed/article227760249.html; Eline Chivot, “Paying Users for Their Data Would Exacerbate Digital Inequality,” Center for Data Innovation, January 11, 2019, accessed September 16, 2019, https://www.datainnovation.org/2019/01/paying-users-for-their-data-would-exacerbate-digital-inequality/.

[50].     ITIF, “Warner-Hawley Bill Gets ‘Paying’ With Data Wrong, Says Leading Tech Policy Think Tank,” news release, June 24, 2019, https://itif.org/publications/2019/06/24/warner-hawley-bill-gets-paying-data-wrong.

[51].     Christopher Rees, “Tomorrow’s Privacy: Personal Information as Property,” International Data Privacy Law, vol. 3, iss. 4, November 2013, 220–221, https://academic.oup.com/idpl/article-abstract/3/4/220/727226?redirectedFrom=PDF.

[52].     Eduardo Porter, “Your Data Is Crucial to a Robotic Age. Shouldn’t You Be Paid For It?” The New York Times, March 6, 2018, https://www.nytimes.com/2018/03/06/business/economy/user-data-pay.html.

[53].     “ITIF Summer Reading List 2013,” news release, June 7, 2013, https://itif.org/publications/2013/06/07/itif-summer-reading-list-2013.

[54].     Gavin Newsom, “Governor Newsom Delivers State of the State Address,” Office of Governor Gavin Newsom, February 12, 2019, accessed September 16, 2019, https://www.gov.ca.gov/2019/02/12/state-of-the-state-address/.

[55].     Daniel Castro and Alan McQuinn, “No, Internet Companies Shouldn’t Have to Pay You for Your Data,” Sacramento Bee, March 14, 2019, accessed September 16, 2019, https://www.sacbee.com/opinion/op-ed/article227760249.html.

[56].     Chivot, “Paying Users for Their Data Would Exacerbate Digital Inequality.”

[57].     Zeynep Tufekci, “Mark Zuckerberg, Let Me Pay for Facebook,” The New York Times, June 4, 2015, https://www.nytimes.com/2015/06/04/opinion/zeynep-tufekci-mark-zuckerberg-let-me-pay-for-facebook.html.

[58].     “U.S. Senators Take Aim at Big Tech’s ‘Dark Patterns,’” Bloomberg News, April 9, 2019, https://adage.com/article/digital/us-senators-take-aim-big-techs-dark-patterns/2163701.

[59].     Mike Allen, “Sean Parker unloads on Facebook: ‘God only knows what it’s doing to our children’s brains,’” Axios, November 9, 2017, https://www.axios.com/sean-parker-unloads-on-facebook-god-only-knows-what-its-doing-to-our-childrens-brains-1513306792-f855e7b4-4e99-4d60-8d51-2775559c2671.html.

[60].     The Office of Senator Deb Fischer, “Senators Introduce Bipartisan Legislation to Ban Manipulative ‘Dark Patterns,’” news release, April 9, 2019, https://www.fischer.senate.gov/public/index.cfm/2019/4/senators-introduce-bipartisan-legislation-to-ban-manipulative-dark-patterns.

[61].     “What are Dark Patterns? (And Why You Shouldn’t Use Them),” Design Shack, accessed October 2019, https://designshack.net/articles/ux-design/dark-patterns/.

[62].     Deceptive Experiences To Online Users Reduction Act, S.1084, 116 Cong. (2019).

[63].     Mark Sullivan, “These are the deceptive design tricks and dark patterns that steer your clicks each day,” Fast Company, June 25, 2019, https://www.fastcompany.com/90369183/deceptive-design-tricks-and-dark-patterns-that-steer-your-clicks.

[64].     The Pessimists Archive, https://pessimists.co/novel/.

[65].     Lily Rothman, “The Scathing Speech That Made Television History,” Time Magazine, May 9, 2016, https://time.com/4315217/newton-minow-vast-wasteland-1961-speech/.

[66].     “Study Sees Rise in Narcissism Among Students,” NPR, February 27, 2007, https://www.npr.org/templates/story/story.php?storyId=7618722.

[67].     Jean M. Twenge, “Have Smartphones Destroyed a Generation?,” The Atlantic Magazine, September 2017, https://www.theatlantic.com/magazine/archive/2017/09/has-the-smartphone-destroyed-a-generation/534198/.

[68].     Ibid.

[69].     Monica Anderson, “A Majority of Teens Have Experienced Some Form of Cyberbullying,” Pew Research Center, September 27, 2018, https://www.pewinternet.org/2018/09/27/a-majority-of-teens-have-experienced-some-form-of-cyberbullying/.

[70].     Shamard Charles, M.D., “Social media linked to rise in mental health disorders in teens, survey finds,” NBC News, March 14, 2019, https://www.nbcnews.com/health/mental-health/social-media-linked-rise-mental-health-disorders-teens-survey-finds-n982526.

[71].     Ben Lovejoy, “Facebook announces Screen Time style tools for main app and Instagram,” August 1, 2018, https://9to5mac.com/2018/08/01/facebook-screen-time-limits/.

[72].     Monica Anderson, “A Majority of Teens Have Experienced Some Form of Cyberbullying.”

[73].     Katy Steinmetz, “Inside Instagram’s War on Bullying,” Time Magazine, July 8, 2019, https://time.com/5619999/instagram-mosseri-bullying-artificial-intelligence/.

[74].     “Laws, Policies, & Regulations,” Stopbullying.gov, https://www.stopbullying.gov/laws/index.html.

[75].     Levi Boxell, Matthew Gentzkow, and Jesse M. Shapiro, “Is the internet causing political polarization? Evidence from demographics” (Brown University, March 2017), https://www.brown.edu/Research/Shapiro/pdfs/age-polars.pdf.

[76].     Cass R. Sunstein, A Constitution of Many Minds, (New Jersey: Princeton University Press, 2011).

[78].     Mostafa M. El-Bermawy, “Your Filter Bubble is Destroying Democracy,” Wired, November 18, 2016, https://www.wired.com/2016/11/filter-bubble-destroying-democracy/.

[79].     Seth Flaxman, Sharad Goel, and Justin M. Rao, “Filter Bubbles, Echo Chambers, and Online News Consumption,” Public Opinion Quarterly, vol. 80, 2016, 298-320.

[80].     Boxell, Gentzkow, and Shapiro, “Is the internet causing political polarization? Evidence from demographics.”

[81].     Pablo Barbera, “How Social Media Reduces Mass Political Polarization. Evidence from Germany, Spain, and the U.S.” (paper prepared for the 2015 APSA Conference).

[82].     Ibid.

[83].     Nicholas T. Davis, and Johanna L. Dunaway, “Party Polarization, Media Choice, and Mass Partisan-Ideological Sorting,” Public Opinion Quarterly, vol. 80, iss. S1, 2016, 272–297, https://doi.org/10.1093/poq/nfw002.

[84].     Flaxman, Goel, and Rao, “Filter Bubbles, Echo Chambers, and Online News Consumption.”

[85].     Farhad Manjoo, “How Black People Use Twitter,” Slate, August 10, 2010, https://slate.com/technology/2010/08/how-black-people-use-twitter.html.

[86].     Laura Hazard Owen, “Few people are actually trapped in filter bubbles. Why do they like to say that they are?” Nieman Lab, December 7, 2018, https://www.niemanlab.org/2018/12/few-people-are-actually-trapped-in-filter-bubbles-why-do-they-like-to-say-that-they-are/.

[87].     Emily Dreyfuss, “Coders Think They Can Burst Your Filter Bubble With Tech,” Wired, November 19, 2016, https://www.wired.com/2016/11/coders-think-can-burst-filter-bubble-tech/.

[88].     Magdalena Wojcieszak, “‘Don’t talk to me’: effects of ideologically homogeneous online groups and politically dissimilar offline ties on extremism,” New Media and Society, 2010, vol. 12, iss. 4, 637–655, https://escholarship.org/content/qt55m2w3g4/qt55m2w3g4.pdf.

[89].     Maura Conway, “Determining the Role of the Internet in Violent Extremism and Terrorism: Six Suggestions for Progressing Research,” Studies in Conflict and Terrorism, February 2016, vol. 40, iss. 1, 77-98, https://www.tandfonline.com/doi/full/10.1080/1057610X.2016.1157408.

[90].     Ines Von Behr et al., “Radicalization and the digital era,” Rand Europe, 2013, https://www.rand.org/content/dam/rand/pubs/research_reports/RR400/RR453/RAND_RR453.sum.pdf.

[91].     Ibid.

[92].     Rachel Hatzipanagos, “How Online Hate Turns into Real-Life Violence,” Washington Post, November 30, 2018, accessed September 17, 2019, https://www.washingtonpost.com/nation/2018/11/30/how-online-hate-speech-is-fueling-real-life-violence/.

[93].     “Our Ongoing Work to Tackle Hate,” YouTube, June 5, 2019, accessed September 17, 2019, https://youtube.googleblog.com/2019/06/our-ongoing-work-to-tackle-hate.html.

[94].     “YouTube keeps deleting evidence of Syrian chemical weapon attacks,” Wired, June 26, 2018, https://www.wired.co.uk/article/chemical-weapons-in-syria-youtube-algorithm-delete-video

[95].     “Facebook's AI wipes terrorism-related posts,” BBC, November 2017, accessed September 17, 2019, http://www.bbc.com/news/technology-42158045.

[96].     47 U.S.C. § 230.

[97].     David Ibsen, “CEP Commends Speaker Pelosi for Waving in ‘New Era’ for Tech Industry Regulation” Counter Extremism Project, press release, April 12, 2019, https://www.counterextremism.com/press/cep-commends-speaker-pelosi-waving-%E2%80%9Cnew-era%E2%80%9D-tech-industry-regulation.

[98].     Makena Kelly, “Kamala Harris vows to hold social media platforms responsible for ‘hate’,” The Verge, May 6, 2019, accessed September 17, 2019, https://www.theverge.com/2019/5/6/18531181/kamala-harris-social-media-democratic-primary-2020-president.

[99].     Elizabeth Nolan Brown, “Section 230 Is the Internet's First Amendment. Now Both Republicans and Democrats Want To Take It Away.” Reason, July 29, 2019, accessed September 17, 2019, https://reason.com/2019/07/29/section-230-is-the-internets-first-amendment-now-both-republicans-and-democrats-want-to-take-it-away/.

[100].   Olivia Solon, “To censor or sanction extreme content? Either way, Facebook can't win,” The Guardian, May 2017, accessed September 17, 2019, https://www.theguardian.com/news/2017/may/22/facebook-moderator-guidelines-extreme-content-analysis.

[101].   Shelly Banjo, “Facebook, Twitter and the Digital Disinformation Mess,” Washington Post, September 27, 2019, https://www.washingtonpost.com/business/facebook-twitter-and-the-digital-disinformation-mess/2019/09/26/9a38b6b4-e0c3-11e9-be7f-4cc85017c36f_story.html.

[102].   The Pessimists Archive, “The Telegraph,” podcast, accessed October 2019, https://pessimists.co/telegraph/.

[103].   “Background to ‘Assessing Russian Activities and Intentions in Recent US Elections’: The Analytic Process and Cyber Incident Attribution,” Director of National Intelligence, January 6, 2017, https://www.dni.gov/files/documents/ICA_2017_01.pdf.

[104].   Douglas Guilbeault and Samuel Woolley, “How Twitter Bots Are Shaping the Election,” The Atlantic, November 1, 2016, https://www.theatlantic.com/technology/archive/2016/11/election-bots/506072/; “Twitter bots manipulating stock markets as fake news spreads to finance,” The Telegraph, March 31, 2018, https://www.telegraph.co.uk/business/2018/03/31/twitter-bots-manipulating-stock-markets-fake-news-spreads-finance/.

[105].   Colin Stretch, “Facebook to Provide Congress With Ads Linked to Internet Research Agency,” Facebook, September 21, 2017, https://newsroom.fb.com/news/2017/09/providing-congress-with-ads-linked-to-internet-research-agency/; Elliot Schrage, “Hard Questions: Russian Ads Delivered to Congress,” Facebook, October 2, 2017, https://newsroom.fb.com/news/2017/10/hard-questions-russian-ads-delivered-to-congress/.

[106].   Ibid.

[107].   Ibid.

[108].   Stefan Wojcik et al., “Bots in the Twittersphere,” Pew Research Center, April 9, 2018, https://www.pewinternet.org/2018/04/09/bots-in-the-twittersphere/.

[109].   Chengcheng Shao et al., “The Spread of Fake News by Social Bots,” https://arxiv.org/pdf/1707.07592.pdf, 1.

[110].   Ibid.

[111].   Neil Gandal et al. “Price manipulation in the Bitcoin ecosystem,” Journal of Monetary Economics, May 2018, https://www.sciencedirect.com/science/article/abs/pii/S0304393217301666.

[112].   Soroush Vosoughi, Deb Roy, and Sinan Aral, “The spread of true and false news online,” Science, vol. 359, iss. 6380 (March 9, 2018): 1146–1151. DOI: 10.1126/science.aap9559.

[113].   James Vincent, “Watch Jordan Peele use AI to make Barack Obama deliver a PSA about fake news,” The Verge, April 17, 2018, https://www.theverge.com/tldr/2018/4/17/17247334/ai-fake-news-video-barack-obama-jordan-peele-buzzfeed.

[114].   “DeepFaceLab,” GitHub, https://github.com/iperov/DeepFaceLab.

[115].   Mike Schroepfer, “Creating a data set and a challenge for deepfakes,” Facebook Artificial Intelligence, September 5, 2019, https://ai.facebook.com/blog/deepfake-detection-challenge/.

[116].   Matthew Yglesias, “Video Games Don’t Cause Violent Crime,” Vox, August 2019, accessed September 16, 2019, https://www.vox.com/2019/8/5/20754769/trump-video-games-mass-shooting-el-paso-toledo.

[117].   Aja Romano, “The Frustrating, Enduring Debate over Video Games, Violence, and Guns,” Vox, August 26, 2019, accessed September 16, 2019, https://www.vox.com/2019/8/26/20754659/video-games-and-violence-debate-moral-panic-history.

[118].   Austen Goslin, “ESPN Delays Apex Legends Tournament Broadcast After Mass Shootings,” Polygon, August 9, 2019, accessed September 16, 2019, https://www.polygon.com/2019/8/9/20798452/espn-apex-legends-tournament-broadcast-rescheduled; Amy Russo, “Walmart Removes Violent Video Game Displays After Shootings, Still Sells Guns,” HuffPost, August 9, 2019, accessed September 16, 2019, https://www.huffpost.com/entry/walmart-removes-violent-video-game-displays-after-shootings-el-paso-dayton_n_5d4d7a11e4b01e44e47947dc.

[119].   The Pessimists Archive, https://pessimists.co/comic-books/

[120].   Faltin Karlsen, “Analyzing the History of Game Controversies,” Proceedings of the 2014 DiGRA International Conference, August 2014, available on DiGRA, http://www.digra.org/digital-library/publications/analysing-the-history-of-game-controversies/.

[121].   Patrick Markey, Moral Combat: Why the War on Violent Video Games is Wrong (BenBella Books: Dallas, March 2017).

[122].   Kat Eschner, “How ‘Mortal Kombat’ Changed Video Games,” Smithsonian, September 13, 2017, accessed September 16, 2019, https://www.smithsonianmag.com/smart-news/how-mortal-kombat-changed-video-games-180964835/.

[123].   Scott Cunningham, Benjamin Engelstatter, and Michael Ward, “Understanding the Effects of Violent Video Games on Violent Crime,” Center for European Economic Research, April 9, 2011, available on SSRN, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1804959.

[124].   Gordon Dahl and Stefano DellaVigna, “Does Movie Violence Increase Violent Crime?” The Quarterly Journal of Economics, May 2009, 677-734, https://eml.berkeley.edu/~sdellavi/wp/moviescrimeQJEProofs2009.pdf.

[125].   Chris Ferguson et al., “News Media, Public Education and Public Policy Committee,” The Amplifier Magazine, June 12, 2017, accessed September 16, 2019, https://div46amplifier.com/2017/06/12/news-media-public-education-and-public-policy-committee/.

[126].   Yglesias, “Video Games Don’t Cause Violent Crime.”

[127].   Elizabeth Grieco, “U.S. Newsroom Employment has Dropped By a Quarter Since 2008, With Greatest Decline at Newspapers,” Pew Research Center, July 9, 2019, accessed September 16, 2019, https://www.pewresearch.org/fact-tank/2019/07/09/u-s-newsroom-employment-has-dropped-by-a-quarter-since-2008/.

[128].   Gerry Smith, “Journalism Job Cuts Haven’t Been This Bad Since the Recession,” Bloomberg, July 1, 2019, accessed September 16, 2019, https://www.bloomberg.com/amp/news/articles/2019-07-01/journalism-layoffs-are-at-the-highest-level-since-last-recession.

[129].   “In News Industry, a Stark Divide Between Haves and Have-Nots,” The Wall Street Journal, accessed September 16, 2019, https://www.wsj.com/graphics/local-newspapers-stark-divide/.

[130].   “Save Journalism Project,” Saving Journalism Project, accessed September 16, 2019, https://savejournalism.org/

[131].   “Google Benefit from News Content” (News Media Alliance, June 2019), accessed September 16, 2019, http://www.newsmediaalliance.org/wp-content/uploads/2019/06/Google-Benefit-from-News-Content.pdf; Joshua Benton, “That ‘$4.7 billion’ Number for How Much Money Google Makes off the News Industry? It’s Imaginary,” Nieman Lab, June 10, 2019, accessed September 16, 2019, https://www.niemanlab.org/2019/06/that-4-7-billion-number-for-how-much-money-google-makes-off-the-news-industry-its-imaginary/.

[132].   Bernie Sanders, “Bernie Sanders on His Plan for Journalism,” Columbia Journalism Review, August 26, 2019, accessed September 16, 2019, https://www.cjr.org/opinion/bernie-sanders-media-silicon-valley.php.

[133].   Alicia Shepard, “Craig Newmark and Craigslist Didn’t Destroy Newspapers, They Outsmarted Them,” USA Today, June 17, 2018, accessed September 16, 2019, https://www.usatoday.com/story/opinion/2018/06/18/craig-newmark-craigslist-didnt-kill-newspapers-outsmarted-them-column/702590002/.

[134].   Russell Adams, “Papers Put Faith in Paywalls,” The Wall Street Journal, March 4, 2012, accessed September 16, 2019, https://www.wsj.com/articles/SB10001424052970203833004577251822631536422.

[135].   Anna Solana, “The Google News Effect: Spain Reveals the Winners and Losers from a 'Link Tax',” ZDNet, August 14, 2019, accessed September 16, 2019, https://www.zdnet.com/article/the-google-news-effect-spain-reveals-the-winners-and-losers-from-a-link-tax/.

[136].   Vlad Savov, “Google News Quits Spain in Response to New Law,” The Verge, December 11, 2014, accessed November 16, 2019, https://www.theverge.com/2014/12/11/7375733/google-news-spain-shutdown.

[137].   Susan Athey, Markus Mobius, and Jeno Pal, “The Impact of Aggregators on Internet News Consumption,” Stanford University Graduate School of Business, Research Paper no. 17-8, (January, 2017). https://ssrn.com/abstract=2897960.

[138].   Pedro Posada de la Concha et al., “Impact on Competition and on Free Market of the Google Tax or AEDE Fee,” NERA Economic Consulting, report for the Spanish Association of Publishers of Periodical Publications (AEEPP), 2017, https://www.aeepp.com/pdf/Informe_NERA_para_AEEPP_(INGLES).pdf.

[139].   Eric Schmidt, “Google creates €60m Digital Publishing Innovation Fund to support transformative French digital publishing initiatives,” Google, February 1, 2013, accessed September 16, 2019, https://googleblog.blogspot.com/2013/02/google-creates-60m-digital-publishing.html.

[140].   Alberto Ibarguen, “Knight Foundation’s Investment in Media Seeks to Preserve an Essential Element of Democracy: An Informed Citizenry,” Knight Foundation, February 21, 2019, https://knightfoundation.org/articles/local-news-initiative.

[141].   Albert Fox Cahn, “Smart Cities Are Creating a Mass Surveillance Nightmare,” Daily Beast, October 1, 2019, https://www.thedailybeast.com/smart-cities-are-creating-a-mass-surveillance-nightmare.

[142].   Clare Garvie, Alvaro Bedoya, Jonathan Frankle, “The Perpetual Line Up,” October 18, 2016, https://www.perpetuallineup.org/.

[143].   Chris Jay Hoofnagle, “Big Brother's Little Helpers: How Choicepoint and Other Commercial Data Brokers Collect, Process, and Package Your Data for Law Enforcement,” February 27, 2014, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=582302.

[144].   Ewen Macaskill and Gabriel Dance, “NSA Files Decoded,” The Guardian, November 1, 2013, https://www.theguardian.com/world/interactive/2013/nov/01/snowden-nsa-files-surveillance-revelations-decoded.

[145].   United States v. Jones, 565 U.S. 400 (2012) and Carpenter v. United States, No. 16-402, 585 U.S. (2018).

[146].   Daniel Castro and Alan McQuinn, “The Privacy Panic Cycle: A Guide to Public Fears About New Technologies” (Information Technology and Innovation Foundation, September 2015), http://www2.itif.org/2015-privacy-panic.pdf.

[147].   Corynne McSherry, “An Attack on Net Neutrality Is an Attack on Free Speech,” EFF, June 2017, https://www.eff.org/deeplinks/2017/06/attack-net-neutrality-attack-free-speech.

[148].   “Net Neutrality: What You Need to Know Now,” Free Press, news release, October 2019, https://www.freepress.net/issues/free-open-internet/net-neutrality/net-neutrality-what-you-need-know-now.

[149].   “The Internet Without Net Neutrality Isn’t Really the Internet,” Free Press, accessed October 2019. https://www.freepress.net/issues/free-open-internet/net-neutrality.

[150].   Klint Finley, “Here's How the End of Net Neutrality Will Change the Internet,” Wired (Nov. 2017), https://www.wired.com/story/heres-how-the-end-of-net-neutrality-will-change-the-internet/.

[151].   Free Press supra.

[152].   Ibid.

[153].   The Madison River case was easily resolved through consent decree in 2005, despite the lack of net neutrality rules. Madison River Communications, LLC and affiliated companies, File No. EB-05-IH-0110, Consent Decree, https://apps.fcc.gov/edocs_public/attachmatch/DA-05-543A2.pdf.

[154].   See Doug Brake, “ITIF Comments on Protecting and Promoting an Open Internet,” GN Docket 14-28 (July 2014), at 17, http://www2.itif.org/2014-comments-fcc-open-internet.pdf.

[155].   See for example Comcast Corp. v. FCC 600 F.3d 642 (D.C. Cir.) (2010). See Jim Gettys, “Bufferbloat and network neutrality –back to the past...,” jg’s Ramblings, http://gettys.wordpress.com/2010/12/07/bufferbloat-and-network-neutrality-back-to-the-past/; Jim’s point that “we should not set public policy going forward without understanding what may actually have happened, rather than a possibly flawed understanding of technical problems” is a good one.

[156].   “Comcast Ruling: Now What?,” ITIF, June 1, 2010, https://itif.org/events/2010/06/01/comcast-ruling-now-what.

[157].   The BitTorrent protocol has added some interesting congestion control mechanisms since its days of worst offense. For discussion, see Dario Rossi, et al., “Ledbat: the new BitTorrent congestion control protocol,” Telecom ParisTech (Aug. 2010), http://perso.telecom-paristech.fr/~drossi/paper/rossi10icccn.pdf.

[158].   Keith Collins, “Net Neutrality Has Officially Been Repealed. Here’s How That Could Affect You.” The New York Times (June 11, 2018), https://www.nytimes.com/2018/06/11/technology/net-neutrality-repeal.html.

[159].   Doug Brake, “Why We Need Net Neutrality Legislation, and What It Should Look Like” ITIF (May 2018), https://itif.org/publications/2018/05/07/why-we-need-net-neutrality-legislation-and-what-it-should-look.

[160].   See Doug Brake, “Paid Prioritization: Why We Should Stop Worrying and Enjoy the ‘Fast Lane’” ITIF (July 2018), http://www2.itif.org/2018-paid-prioritization.pdf.

[162].   Cecilia Kang and Sheera Frenkel, “Republicans Accuse Twitter of Bias Against Conservatives,” The New York Times, September 5, 2018, https://www.nytimes.com/2018/09/05/technology/lawmakers-facebook-twitter-foreign-influence-hearing.html.

[163].   Emil Pitkin, “Alphabet’s Political Contributions,” GovPredict, September 6, 2018, https://govpredict.com/blog/alphabets-political-contributions/.

[164].   Joshua New, “Pretending Algorithms Have an Anti-Conservative Bias is Dangerous” (Center for Data Innovation, September 7, 2018), https://www.datainnovation.org/2018/09/pretending-algorithms-have-an-anti-conservative-bias-is-dangerous/.

[165].   Laura Jacobson, “No, 96% of Google news stories on Trump aren't from left-wing outlets,” PolitiFact, August 29, 2018, https://www.politifact.com/truth-o-meter/statements/2018/aug/29/donald-trump/no-96-google-news-stories-trump-arent-left-wing-ou/;

Bryan Clark, “No, Twitter isn’t ‘shadow banning’ conservative voices. Here’s what’s really going on.,” The Next Web, July 26, 2018, https://thenextweb.com/insider/2018/07/27/no-twitter-isnt-shadow-banning-conservative-voices-heres-whats-really-going-on/;

Kim Lacapria, “Is Facebook Censoring Conservative News?” Snopes, May 9, 2016, https://www.snopes.com/fact-check/is-facebook-censoring-conservative-news/.

[166].   Donald J. Trump, Twitter Post, August 19, 2019, 8:52 AM, https://twitter.com/realDonaldTrump/status/1163478770587721729.

[167].   The Office of Senator Ted Cruz, “Sen. Cruz: The Pattern of Political Bias From YouTube and Google is Massive,” news release, July 18, 2019, https://www.cruz.senate.gov/?p=press_release&id=4591;

April Glaser, “2.6 Million Reasons to Keep Yelling About ‘Bias,’” Slate, August 20, 2019, https://slate.com/technology/2019/08/robert-epstein-google-bias-conservative-bogus-trump.html.

[168].   Ibid.

[169].   Joshua New, “No, Algorithms Do Not Hijack Elections,” (Center for Data Innovation, September 22, 2015), https://www.datainnovation.org/2015/09/no-algorithms-do-not-hijack-elections/.

[170].   April Glaser, “2.6 Million Reasons to Keep Yelling About ‘Bias.’”

[171].   Pete Baklinski, “YouTube banned this powerful pro-life music video. Then the artist sued.,” LifeSiteNews.com, December 18, 2015, https://www.lifesitenews.com/news/youtube-banned-this-powerful-pro-life-music-video.-then-the-artist-sued; Mark Hodges, “Musician performs pro-life song banned on YouTube at March for Life prayer service,” LifeSiteNews.com, January 23, 2019, https://www.priestsforlife.org/clippings/7647-musician-performs-pro-life-song-banned-on-youtube-at-march-for-life-prayer-service.

[172].   “California Court Holds That YouTube’s Removal Notice Is Not Defamatory,” Morrison & Foerster LLP, February 22, 2018, https://www.lexology.com/library/detail.aspx?g=92292762-ed93-42fb-bf8f-c4e05bbb1a37.

[173].   Joe Perticone, “Trump vloggers Diamond & Silk are sticking to their debunked claim about Facebook censorship,” Business Insider, April 25, 2018, https://www.businessinsider.com/diamond-and-silk-facebook-censorship-testimony-2018-4.

[174].   Ibid.

[175].   Alayna Treene, “Poll: Most conservatives think social media is censoring them,” Axios, August 29, 2019, https://www.axios.com/conservatives-social-media-censorship-poll-3a966ebb-6b44-458f-8941-40fc015a86a6.html.

[176].   Jane Coaston, “Why some conservatives want to regulate Facebook and Twitter,” Vox, September 5, 2018, https://www.vox.com/2018/9/5/17820022/conservatives-right-twitter-facebook-bias-allegations.

[177].   Ajit Pai, “What I Hope to Learn from the Tech Giants,” Medium, September 4, 2018, https://medium.com/@AjitPaiFCC/what-i-hope-to-learn-from-the-tech-giants-6f35ce69dcd9.

[178].   Nancy Scola, “Sessions throws DOJ’s weight into social media bias complaints,” Politico, September 5, 2018, https://www.politico.com/story/2018/09/05/sessions-social-media-bias-complaints-770449.

[179].   Facebook Newsroom, “Senator Jon Kyl, ‘Covington Interim Report,’” (August 2019), https://fbnewsroomus.files.wordpress.com/2019/08/covington-interim-report-1.pdf.

[180].   Marie C. Baca, “The social media giant has faced repeated criticism from conservatives that it is biased against them,” The Washington Post, August 20, 2019, https://www.washingtonpost.com/technology/2019/08/20/facebook-makes-small-tweaks-following-anti-conservative-bias-report-theyre-unlikely-make-issue-go-away/?arc404=true.

[181].   Lauren Feiner, “Facebook details rules for its new ‘Supreme Court’ that will handle controversial posts,” CNBC, September 17, 2019, https://www.cnbc.com/2019/09/17/facebook-details-plans-for-new-oversight-board.html.

[182].   David Auerbach, “The Code We Can’t Control,” Slate, January 14, 2015, https://slate.com/technology/2015/01/black-box-society-by-frank-pasquale-a-chilling-vision-of-how-big-data-has-invaded-our-lives.html; Karen Hao, “Making face recognition less biased doesn’t make it less scary,” MIT Technology Review, January, 29, 2019, https://www.technologyreview.com/s/612846/making-face-recognition-less-biased-doesnt-make-it-less-scary/; Karen Hao, “AI is sending people to jail—and getting it wrong,” MIT Technology Review, January 21, 2019, https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/.

[183].   Mark Sullivan, “Here’s AOC calling out the vicious circle of white men building biased face AI,” Fast Company, May 22, 2019, https://www.fastcompany.com/90354348/alexandria-ocasio-cortez-congress-face-recognition-hearing-ai-bias; Dave Gershgorn, “Congress is worried about AI bias and diversity,” Quartz, February 15, 2018, https://qz.com/1208581/diversity-and-bias-in-ai-has-reached-us-congress/; Algorithm Accountability Act of 2019, S.1108, 116 Cong. (2019).

[184].   Larry Hardesty, “Study finds gender and skin-type bias in commercial artificial-intelligence systems,” MIT News, February 11, 2018, http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212.

[185].   Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, October 9, 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.

[186].   Karen Hao, “This is how AI bias really happens—and why it’s so hard to fix,” MIT Technology Review, February 4, 2019, https://www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/.

[187].   Ibid.

[188].   Information Technology and Innovation Foundation, “ACLU Claims About Facial Recognition Are Misleading, Says Leading Tech Policy Think Tank.” news release, August 14, 2019, https://itif.org/publications/2019/08/14/aclu-claims-about-facial-recognition-are-misleading-says-leading-tech-policy.

[189].   Jordan Weissman, “Amazon Created a Hiring Tool Using A.I. It Immediately Started Discriminating Against Women,” Slate, October 10, 2018, https://slate.com/business/2018/10/amazon-artificial-intelligence-hiring-discrimination-women.html.

[190].   Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women.”

[191].   Joshua New and Daniel Castro, “How Policymakers Can Foster Algorithmic Accountability,” (Center for Data Innovation, May 21, 2018), http://www2.datainnovation.org/2018-algorithmic-accountability.pdf.

[192].   Ibid.

[193].   Lee Raine and Janna Anderson, “Code-Dependent: Pros and Cons of the Algorithm Age,” (Pew Research Center, February 2017), https://www.pewinternet.org/2017/02/08/code-dependent-pros-and-cons-of-the-algorithm-age/.

[194].   Joshua New and Daniel Castro, “How Policymakers Can Foster Algorithmic Accountability” (Center for Data Innovation, May 2018), http://www2.datainnovation.org/2018-algorithmic-accountability.pdf.

[195].   Steve Lohr, “Facial Recognition Is Accurate, if You’re a White Guy,” The New York Times, February 9, 2018, https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html.

[196].   Daniel Castro and Joshua New, “Comments to OMB on Federal Data and Models for AI R&D,” Center for Data Innovation, August 9, 2019, https://s3.amazonaws.com/www2.datainnovation.org/2019-omb-federal-data-models-rfi.pdf.

[197].   “Plato’s Argument Against Writing,” Farnam Street, February 2013, https://fs.blog/2013/02/an-old-argument-against-writing/.

[198].   Francesca Frawley, “Google Maps - the secret key to helping Alzheimer’s patients remember?” The Express, July 6, 2016, https://www.express.co.uk/life-style/health/686723/Google-Maps-Dementia-Alzheimer-s-breakthrough-memory-care.

[199].   Terrence J. Sejnowski, “AI Will Make You Smarter,” in John Brockman, ed., What to Think About Machines That Think (Harper Perennial, 2015), 113.

[200].   Ibid., 332.

[201].   Ibid., 114.

[202].   Richard H. Thaler, “Who’s Afraid of Artificial Intelligence,” in John Brockman, ed., What to Think About Machines That Think (Harper Perennial, 2015), 487.

[203].   Jenna Sargent, “There’s a Diversity Problem in the Tech Industry and It’s Not Getting Any Better,” SD Times, June 5, 2019, https://sdtimes.com/softwaredev/theres-a-diversity-problem-in-the-tech-industry-and-its-not-getting-any-better/; Lori Ioannou, “Silicon Valley’s Achilles’ Heel Threatens to Topple Its Supremacy in Innovation,” CNBC, June 20, 2018, https://www.cnbc.com/2018/06/20/silicon-valleys-diversity-problem-is-its-achilles-heel.html.

[204].   Cat Zakrzewski, “The Technology 202: Ex-Facebook Manager Says It’s ‘Absolutely Necessary’ Congress Scrutinize Big Tech’s Diversity Problem,” The Washington Post, March 6, 2019, https://www.washingtonpost.com/news/powerpost/paloma/the-technology-202/2019/03/06/the-technology-202-ex-facebook-manager-says-it-s-absolutely-necessary-congress-scrutinize-big-tech-s-diversity-problem/5c7ec0c51b326b2d177d5fd4/.

[205].   National Science Foundation, “Science and Engineering Indicators 2018,” Appendix Table 2-21, accessed September 17, 2019, https://nsf.gov/statistics/2018/nsb20181/assets/561/tables/at02-21.pdf.

[206].   National Science Foundation, “Science and Engineering Indicators 2018,” Appendix Table 2-27, accessed September 17, 2019, https://nsf.gov/statistics/2018/nsb20181/assets/561/tables/at02-27.pdf; National Science Foundation, “Science and Engineering Indicators 2018,” Appendix Table 2-29, accessed September 17, 2019, https://nsf.gov/statistics/2018/nsb20181/assets/561/tables/at02-29.pdf. 

[207].   National Science Foundation, “Science and Engineering Indicators 2018,” Appendix Table 2-22, accessed September 17, 2019, https://nsf.gov/statistics/2018/nsb20181/assets/561/tables/at02-22.pdf.

[208].   U.S. Bureau of Labor Statistics, “Labor Force Statistics from the Current Population Survey,” Employed Persons by Detailed Occupation, Sex, Race, and Hispanic or Latino ethnicity,” accessed September 17, 2019, https://www.bls.gov/cps/cpsaat11.htm.

[209].   U.S. Bureau of Labor Statistics, “Labor Force Statistics from the Current Population Survey: Employed Persons by Detailed Occupation, Sex, Race, and Hispanic or Latino Ethnicity,” accessed September 17, 2019, https://www.bls.gov/cps/cpsaat11.htm.

[210].   “Inclusion and Diversity,” Apple, accessed September 17, 2019, https://www.apple.com/diversity/; “2019 Diversity Report,” Facebook, accessed September 17, 2019, https://diversity.fb.com/read-report/; “Google Annual Diversity Report 2019,” Google, accessed September 17, 2019, https://diversity.google/annual-report/#!#_this-years-data.

[211].   “Inclusion and Diversity,” Apple, accessed September 17, 2019, https://www.apple.com/diversity/.

[212].   “2019 Diversity Report,” Facebook, accessed September 17, 2019, https://diversity.fb.com/read-report/.

[213].   “Inclusion and Diversity,” Apple, accessed September 17, 2019, https://www.apple.com/diversity/; “2019 Diversity Report,” Facebook, accessed September 17, 2019, https://diversity.fb.com/read-report/; “Google Annual Diversity Report 2019,” Google, accessed September 17, 2019, https://diversity.google/annual-report/#!#_this-years-data.

[214].   Adams Nager and Robert D. Atkinson, “The Case for Improving U.S. Computer Science Education” (Information Technology and Innovation Foundation, May 2016), https://itif.org/publications/2016/05/31/case-improving-us-computer-science-education.

[215].   Rani Molla, “High School Students Are More Likely to Take AP Computer Science If They Live in Maryland or Rhode Island,” Vox, November 28, 2017, https://www.vox.com/2017/11/28/16263166/maryland-rhode-montana-island-high-school-computer-science-stem-college; Adams Nager and Robert D. Atkinson, “The Case for Improving U.S. Computer Science Education” (Information Technology and Innovation Foundation, May 2016), https://itif.org/publications/2016/05/31/case-improving-us-computer-science-education; “Computer Science in California’s Schools: An Analysis of Access, Enrollment, and Equity” (Kapor Center, 2019), https://www.kaporcenter.org/wp-content/uploads/2019/06/Computer-Science-in-California-Schools.pdf.

[216].   “More Students Than Ever Are Participating And Succeeding In Advanced Placement,” College Board, February 21, 2018, https://www.collegeboard.org/releases/2018/more-students-than-ever-are-participating-and-succeeding-in-advanced-placement.

[217].   Jeremy Goldman, “Why It's Getting Harder, Not Easier, to Find Women with Computer Science Degrees,” Inc., November 27, 2016, https://inc.com/jeremy-goldman/why-its-getting-harder-not-easier-to-find-women-with-computer-science-degrees.html.

[218].   Allan Fisher and Jane Margolis, “Unlocking the Clubhouse: the Carnegie Mellon Experience,” ACM SIGCSE Bulletin 34, no. 2 (June 2002): 79–83.

[219].   “Who We Are,” Catalyte, accessed September 17, 2019, https://catalyte.io/about-us/.

[220].   Anders Andrae, “Total Consumer Power Consumption Forecast,” Nordic Digital Business Summit, October 5, 2017, https://www.researchgate.net/publication/320225452_Total_Consumer_Power_Consumption_Forecast.

[221].   Camilo Mora et al., “Bitcoin emissions alone could push global warming above 2°C,” Nature Climate Change 8 (October 29, 2018): 931–933, https://www.nature.com/articles/s41558-018-0321-8.

[222].   Emma Strubell, Ananya Ganesh, and Andrew McCallum, “Energy and Policy Considerations for Deep Learning in NLP,” 57th Annual Meeting of the Association for Computational Linguistics (July 2019), https://arxiv.org/abs/1906.02243.

[223].   Ben Tarnoff, “To decarbonize we must decomputerize: why we need a Luddite revolution,” The Guardian, September 18, 2019, https://www.theguardian.com/technology/2019/sep/17/tech-climate-change-luddites-data.

[224].   Peter Huber and Mark Mills, “Dig More Coal--The PCs Are Coming,” Forbes (May 31, 1999), https://www.forbes.com/forbes/1999/0531/6311070a.html.

[225].   Arman Shehabi et al., United States Data Center Energy Usage Report (Lawrence Berkeley National Laboratory, 2016), https://eta.lbl.gov/publications/united-states-data-center-energy.

[226].   International Energy Agency (IEA), “Tracking Clean Energy Progress,” accessed September 11, 2019, https://www.iea.org/tcep/.

[227].   Efficiency at peak output has doubled more slowly, roughly every 2.7 years since 2000. But most computers run at peak output only a small fraction of the time: about 1 percent of the time for mobile devices and laptops, and about 10 percent for enterprise data servers. “Typical-use” efficiency, which averages efficiency over a year of normal operation, is therefore the more appropriate measure for these devices. Jonathan Koomey and Samuel Naffziger, “Moore’s Law Might Be Slowing Down, But Not Energy Efficiency,” IEEE Spectrum, March 31, 2015, accessed September 11, 2019, https://spectrum.ieee.org/computing/hardware/moores-law-might-be-slowing-down-but-not-energy-efficiency.

[228].   George Kamiya, “Commentary: Bitcoin energy use—mined the gap” (IEA, July 5, 2019), accessed September 11, 2019, https://www.iea.org/newsroom/news/2019/july/bitcoin-energy-use-mined-the-gap.html.

[229].   “Syntiant Always-On Speech & Audio Recognition Processors,” Syntiant Corp., accessed October 22, 2019, https://www.syntiant.com/ndp100.

[230].   Global Workplace Analytics, “2017 State of Telecommuting in the U.S. Employee Workforce” (Global Workplace Analytics and FlexJobs, 2017), https://www.flexjobs.com/2017-State-of-Telecommuting-US.

[231].   Arman Shehabi, Ben Walker, and Eric Masanet, “The energy and greenhouse-gas implications of internet video streaming in the United States,” Environmental Research Letters 9, no. 5 (May 28, 2014), https://doi.org/10.1088/1748-9326/9/5/054007.

[232].   Dimitri Weideli, “Environmental Analysis of US Online Shopping” (MIT Center for Transportation & Logistics, 2013), https://ctl.mit.edu/pub/thesis/environmental-analysis-us-online-shopping.

[233].   Robert Walton, “Washington regulators approve Microsoft deal to buy clean energy on open power markets,” Utility Dive, July 19, 2017, accessed September 11, 2019, https://www.utilitydive.com/news/washington-regulators-approve-microsoft-deal-to-buy-clean-energy-on-open-po/447406/.

[234].   Much of Google’s goal was met through renewable energy credits: the company purchased credits for renewable electricity to offset electricity that was actually supplied in real time by fossil fuels. Google has since set a further goal of meeting 100 percent of its real-time energy needs with carbon-free energy. Neha Palmer, “100 percent renewable energy, for the second year in a row” (Google, June 5, 2019), accessed September 11, 2019, https://www.blog.google/outreach-initiatives/sustainability/100-percent-renewable-energy-second-year-row/.

[235].   “About Us,” The Green Grid, accessed October 22, 2019, https://www.thegreengrid.org/en/about-us/members.

[236].   IEA, “Tracking Clean Energy Progress: Data centres and data transmission networks,” accessed September 11, 2019, https://www.iea.org/tcep/buildings/datacentres/.

[237].   International Energy Agency (IEA), Digitalization and Energy (IEA, November 2017), 107, https://www.iea.org/digital/.

[238].   Colin Cunliff and David Hart, “Global Energy Innovation Index: National Contributions to the Global Clean Energy System” (Information Technology and Innovation Foundation, August 2019), https://itif.org/publications/2019/08/26/global-energy-innovation-index-national-contributions-global-clean-energy.

[239].   ENERGY STAR, “Buildings & Plants: ENERGY STAR Score for Data Centers,” accessed September 20, 2019, https://www.energystar.gov/buildings/tools-and-resources/energy-star-score-data-centers.

[240].   European Commission, An Industrial Strategy for all of Europe, February 6, 2019, https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/industrial-strategy-all-europe_en.

[241].   Sam Schechner, “Silicon Valley Hit With New Digital Tax in France,” The Wall Street Journal, March 6, 2019, https://www.wsj.com/articles/silicon-valley-hit-with-new-digital-tax-in-france-11551869144?mod=article_inline.

[242].   “Sanders Claims Amazon Didn’t Pay Federal Income Tax,” CNN, https://www.cnn.com/videos/us/2019/07/31/bernie-sanders-amazon-federal-income-taxes-democratic-debate-sot-vpx.cnn.

[243].   European Commission, A Fair and Efficient Tax System in the European Union for the Digital Single Market, Communication from the Commission to the European Parliament and the Council, Brussels, COM(2017) 547, September 21, 2017, 6, https://ec.europa.eu/taxation_customs/sites/taxation/files/communication_taxation_digital_single_market_en.pdf.

[244].   European Commission, Regulatory Scrutiny Board, Opinion: Impact Assessment/Fair Taxation of Digital Economy, Brussels, SEC (2018), January 24, 2018, 1, https://ec.europa.eu/transparency/regdoc/rep/2/2018/EN/SEC-2018-162-F1-EN-MAIN-PART-1.PDF.

[245].   Matthias Bauer, “Corporate Tax Out of Control: EU Tax Protectionism and the Digital Services Tax” (European Policy Information Center and European Center for International Political Economy, February 2019), 9, https://ecipe.org/wp-content/uploads/2019/02/Corporate-Tax-Out-of-Control.pdf.

[246].   Helge Sigurd Næss-Schmidt et al., “The Proposed EU Digital Services Tax: Effects on Welfare, Growth and Revenues” (Copenhagen Economics, September 2018), https://www.copenhageneconomics.com/dyn/resources/Publication/publicationPDF/7/457/1537162175/copenhagen-economics-study-on-the-eu-dst-proposal-13-september.pdf.

[247].   Laura Davison, “U.S. Companies Flee No-Tax Caribbean Havens After EU Crackdown,” Bloomberg, November 15, 2018, https://www.bloomberg.com/news/articles/2018-11-15/corporate-america-flees-zero-tax-caribbean-havens-post-crackdown.

[248].   International Monetary Fund, “Corporate Taxation in the Global Economy,” IMF Policy Paper, March 2019, 11–12, https://www.imf.org/en/Publications/Policy-Papers/Issues/2019/03/08/Corporate-Taxation-in-the-Global-Economy-46650.

[249].   Joe Kennedy, “Digital Services Taxes: A Bad Idea Whose Time Should Never Come” (Information Technology and Innovation Foundation, May 2019), https://itif.org/publications/2019/05/13/digital-services-taxes-bad-idea-whose-time-should-never-come.

[250].   Carl Benedikt Frey and Michael A. Osborne, “The Future of Employment: How Susceptible Are Jobs to Computerisation?” (paper authored for the Oxford Martin Programme on the Impact of Future Technology’s “Machines and Employment” workshop, 2013), https://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf.

[251].   George Dvorsky, “Emerging Tech Will Create More Jobs Than It Kills by 2022, World Economic Forum Predicts,” Gizmodo, September 17, 2018, https://gizmodo.com/emerging-tech-will-create-more-jobs-than-it-kills-by-20-1829111519.

[252].   Chris Tomlinson, “Automation could hollow out the American workforce,” Houston Chronicle, January 21, 2019, https://www.houstonchronicle.com/business/columnists/tomlinson/article/Automation-could-hollow-out-the-American-workforce-13543295.php.

[253].   David Beier and Robert Atkinson, “A Tax on Robots is a Tax on Jobs,” Inside Sources, November 9, 2017, https://www.insidesources.com/tax-robots-tax-jobs/.

[254].   Bill de Blasio, “Why American Workers Need to Be Protected From Automation,” Wired, September 5, 2019, https://www.wired.com/story/why-american-workers-need-to-be-protected-from-automation/.

[255].   For several of the numerous literature surveys, see Dedrick, Gurbaxani, and Kraemer, “Information Technology and Economic Performance,” 12; Mirko Draca, Raffaella Sadun, and John Van Reenen, “Productivity and ICT: A Review of the Evidence” (discussion paper no. 749, Centre for Economic Performance, August 2006), accessed April 11, 2016, http://eprints.lse.ac.uk/4561/; Tobias Kretschmer, “Information and Communication Technologies and Productivity Growth: A Survey of the Literature” (OECD Digital Economy Papers no. 195, 2012), accessed April 11, 2016, http://dx.doi.org/10.1787/5k9bh3jllgs7-en; M. Cardona, T. Kretschmer, and T. Strobel, “ICT and Productivity: Conclusions from the Empirical Literature,” Information Economics and Policy 25, no. 3 (September 2013): 109–25, doi:10.1016/j.infoecopol.2012.12.002.

[256].   Stephen Rose, “Was JFK Wrong? Does Rising Productivity No Longer Lead to Substantial Middle Class Income Gains?” (Information Technology and Innovation Foundation, December 16, 2014), https://itif.org/publications/2014/12/16/was-jfk-wrong-does-rising-productivity-no-longer-lead-substantial-middle.

[257].   Robert D. Atkinson, “‘It’s Going to Kill Us!’ And Other Myths About the Future of Artificial Intelligence” (Information Technology and Innovation Foundation, June 2016), http://www2.itif.org/2016-myths-machine-learning.pdf; Erin Winick, “Only 14 percent of the world has to worry about robots taking their jobs (... yay?),” MIT Technology Review, April 2, 2018, https://www.technologyreview.com/f/610740/only-14-percent-of-the-world-has-to-worry-about-robots-taking-their-jobs-yay/; “Will AI Destroy More Jobs Than It Creates Over the Next Decade?” The Wall Street Journal, April 1, 2019, https://www.wsj.com/articles/will-ai-destroy-more-jobs-than-it-creates-over-the-next-decade-11554156299; Robert D. Atkinson, “5 Myths About The Future Of Artificial Intelligence,” The Huffington Post, July 7, 2017, https://www.huffingtonpost.com/robert-d-atkinson-phd/5-myths-about-the-future-_b_10819602.html; Robert D. Atkinson and John Wu, “False Alarmism: Technological Disruption and the U.S. Labor Market, 1850–2015” (Information Technology and Innovation Foundation, May 8, 2017), https://itif.org/publications/2017/05/08/false-alarmism-technological-disruption-and-us-labor-market-1850-2015.

[259].   Organization for Economic Cooperation and Development (OECD), The OECD Jobs Strategy, Technology, Productivity, and Job Creation: Best Policy Practices, http://www.oecd.org/industry/ind/2759012.pdf.

[260].   Michael Chui, James Manyika, and Mehdi Miremadi, “Four fundamentals of workplace automation,” McKinsey Digital, November 2015, https://www.mckinsey.com/business-functions/mckinsey-digital/our-insights/four-fundamentals-of-workplace-automation.

[261].   Wolfgang Dauth et al., “Adjusting to Robots: Worker-Level Evidence” (working paper, Opportunity & Inclusive Growth Institute, 2018); Terry Gregory, Anna Salomons, and Ulrich Zierahn, “Racing With or Against the Machine? Evidence from Europe” (discussion paper no. 16-053, ZEW, 2016); Joerg Mayer, “Robots and Industrialization: What Policies for Inclusive Growth?” (working paper, Group 24 and Friedrich-Ebert-Stiftung, New York, 2018), https://www.g24.org/wp-content/uploads/2018/08/Mayer_-_Robots_and_industrialization.pdf.

[262].   Joe Kennedy, “No, Automation Is Not Causing a Decline in Workers’ Share of Income” (Information Technology and Innovation Foundation, July 22, 2019), https://itif.org/publications/2019/07/22/no-automation-not-causing-decline-workers-share-income.

[263].   Geoff Colvin, “How Automation Is Cutting Into Workers’ Share of Economic Output,” Fortune, July 8, 2019, https://fortune.com/2019/07/08/automation-depressed-wages/.

[264].   Liam Kennedy, “The Technology Trap: Capital, Labour and Power in the Age of Automation – Book Review,” LSE Business Review, September 15, 2019, https://blogs.lse.ac.uk/businessreview/2019/09/15/the-technology-trap-capital-labour-and-power-in-the-age-of-automation-book-review/.

[265].   Josh Bivens and Lawrence Mishel, “Understanding the Historic Divergence Between Productivity and a Typical Worker’s Pay: Why It Matters and Why It’s Real” (Economic Policy Institute, September 2, 2015), https://www.epi.org/publication/understanding-the-historic-divergence-between-productivity-and-a-typical-workers-pay-why-it-matters-and-why-its-real/.

[266].   U.S. Bureau of Labor Statistics.

[267].   Georg Graetz and Guy Michaels, “Robots at Work” (Centre for Economic Performance, 2015).

[268].   Jonathan Rothwell, email exchange with Robert Atkinson, based on his analysis using the 2013 American Community Survey (via IPUMS-USA), May 2016.

[269].   Steven N. Kaplan and Joshua Rauh, “Wall Street and Main Street: What Contributes to the Rise in the Highest Incomes?” Review of Financial Studies 23, no. 3 (2010): 1004–1050.

[270].   Robert D. Atkinson, “Don’t Believe the ‘Monopoly’ Hype” (Information Technology and Innovation Foundation, December 1, 2018), https://itif.org/publications/2018/12/01/dont-believe-monopoly-hype; Joe Kennedy, “The Myth of Data Monopoly: Why Antitrust Concerns About Data Are Overblown” (Information Technology and Innovation Foundation, March 6, 2017), https://itif.org/publications/2017/03/06/myth-data-monopoly-why-antitrust-concerns-about-data-are-overblown.

[271].   Paul Solman, “Why tech industry monopolies could be a ‘curse’ for society,” PBS NewsHour, January 17, 2019, https://www.pbs.org/newshour/show/why-tech-industry-monopolies-could-be-a-curse-for-society.

[272].   Open Markets, “NYT: How Should Big Tech Be Reined In? Here Are 4 Prominent Ideas,” news release, August 20, 2019, https://openmarketsinstitute.org/clippings/nyt-big-tech-reined-4-prominent-ideas/.

[273].   Robert VerBruggen, “Google, Facebook, Amazon: Our Digital Overlords.”

[274].   Kevin Stankiewicz, “Democratic FTC commissioner: We’re not going to fix Big Tech monopolies with little fines,” CNBC, September 13, 2019, https://www.cnbc.com/2019/09/13/ftc-commissioner-rohit-chopra-on-big-tech-antitrust-investigations.html.

[275].   Jacob Pramuk and Tucker Higgins, “Sen. Elizabeth Warren pushes to break up big tech companies like Amazon and Facebook,” CNBC, March 8, 2019, https://www.cnbc.com/2019/03/08/elizabeth-warren-pushes-to-break-up-companies-like-amazon-and-facebook.html.

[276].   Atkinson, Big is Beautiful.

[277].   Council of Economic Advisers Issue Brief, “Benefits of Competition and Indicators of Market Power,” The White House Archives, April 2016, https://obamawhitehouse.archives.gov/sites/default/files/page/files/20160414_cea_competition_issue_brief.pdf.

[278].   Carl Shapiro and Hal R. Varian, Information Rules: A Strategic Guide to the Network Economy (Harvard Business School Press, 1998).

[279].   Eduardo Porter, “Where Are the Start-Ups? Loss of Dynamism Is Impeding Growth,” The New York Times, February 6, 2018, https://www.nytimes.com/2018/02/06/business/economy/start-ups-growth.html.

[280].   Barry C. Lynn and Lina Khan, “The Slow-Motion Collapse of American Entrepreneurship,” Washington Monthly, July/August 2012, https://washingtonmonthly.com/magazine/julyaugust-2012/the-slow-motion-collapse-of-american-entrepreneurship/.

[281].   Atkinson, Big is Beautiful.

[282].   Ian Hathaway, “Platform Giants and Venture-Backed Startups,” October 12, 2018, http://www.ianhathaway.org/blog/2018/10/12/platform-giants-and-venture-backed-startups.

[283].   Oliver Wyman, “Assessing the Impact of Big Tech on Venture Investment,” Marsh & McLennan Companies, July 11, 2018, https://www.oliverwyman.com/content/dam/oliver-wyman/v2/publications/2018/july/assessing-impact.pdf.

[284].   Ian Hathaway, “Platform Giants and Venture-Backed Startups.”

[285].   Jorge Guzman and Scott Stern, “The State of American Entrepreneurship: New Estimates on the Quantity and Quality of Entrepreneurship for 15 US States, 1988–2014,” NBER Working Paper 22095 (Cambridge, MA: National Bureau of Economic Research, March 2016), http://www.nber.org/papers/w22095.

[286].   John Wu and Robert D. Atkinson, “How Technology-Based Start-Ups Support U.S. Economic Growth” (Information Technology and Innovation Foundation, November 28, 2017), https://itif.org/publications/2017/11/28/how-technology-based-start-ups-support-us-economic-growth.
