We Shouldn’t Ask Technologists To Be Arbiters of “Truth”

Since business use of the Internet became widespread, computer security experts have discussed the need for zero trust architectures.[1] The main idea is that the traditional notion of a secure corporate firewall is no longer viable in today’s mobile world. Whether you are a senior executive deep inside company headquarters or a contractor sitting in a Starbucks in a foreign country, the network access process should be essentially the same. Trusting no one hasn’t entirely solved the network trust challenge, but it has surely helped.
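To make the idea concrete, here is a minimal sketch of a zero trust access check, written in Python with hypothetical names (AccessRequest, authorize, and the policy table are invented for illustration, not drawn from any particular product or from NIST SP 800-207). The point is simply that every request is evaluated on identity, authentication strength, and device health, and network location confers no special standing.

```python
from dataclasses import dataclass

# Illustrative zero-trust-style policy check (hypothetical sketch, not a product API).
# Every request is judged on who is asking, how they authenticated, and whether
# their device is healthy; being "inside the corporate network" grants nothing.

@dataclass
class AccessRequest:
    user_id: str
    mfa_passed: bool          # strong authentication on this session
    device_compliant: bool    # e.g., patched OS, disk encryption
    resource: str             # what the caller wants to reach
    network: str              # "corporate", "home", "public_wifi", ...

def authorize(req: AccessRequest, allowed: dict[str, set[str]]) -> bool:
    """Grant least-privilege access per request; network location is ignored."""
    if not req.mfa_passed or not req.device_compliant:
        return False
    return req.resource in allowed.get(req.user_id, set())

# The executive at headquarters and the contractor on Starbucks Wi-Fi
# go through exactly the same checks.
policy = {"exec-01": {"finance-db"}, "contractor-17": {"ticketing"}}
hq = AccessRequest("exec-01", True, True, "finance-db", "corporate")
cafe = AccessRequest("contractor-17", True, True, "ticketing", "public_wifi")
print(authorize(hq, policy), authorize(cafe, policy))   # True True
print(authorize(AccessRequest("exec-01", False, True, "finance-db", "corporate"), policy))  # False: no MFA, no access, even at HQ
```

In a real deployment these checks would be continuous and far richer, but the principle is the same: trust is earned per request, not conferred by location.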

Today, it often feels like we are living in a zero trust society. America’s traditional distrust of politicians and news organizations has worsened. It’s also been joined by rising doubts about the police, the FBI/CIA/NSA, the judicial system, corporations, schools, churches, the scientific establishment, the military, and now AI and deep fakes.[2] It’s unfortunate that Americans have become so suspicious in so many areas. But as with network security, a zero trust approach can be a foundation for significant improvement over time.

Big Tech will be central to any such approach. We are constantly told that online services are to blame for dangerous misinformation. If only tech firms were more responsible, the truth would emerge, and societal trust would improve. But this mostly just sets up the technology industry as a convenient scapegoat. In the most important areas, there is often no reliable source of truth, so promoting the “true” and banning the “false” is often impossible. Instead, consumers and policymakers should want technology platforms, content moderators, and algorithms to enable open debates about politically divisive issues. More speech won’t entirely solve the societal trust problem, but it will be better than what we have now.

Power Undermines Trust

As Meta CEO Mark Zuckerberg recently acknowledged, Big Tech’s enforcement of various official truths that turned out to be false has undermined trust in both the leading tech companies and society overall.[3] In addition to their own content moderators, tech companies have relied on four other kinds of organizations to determine what counts as misinformation, disinformation, and so-called malinformation.[4] All four have serious shortcomings:

Government. Whether it was the claims that Russia blew up its own Nord Stream pipeline, that the Covid lab leak scenario is a racist conspiracy theory, that the Hunter Biden laptop was Russian disinformation, that the withdrawal from Afghanistan was “an extraordinary success,” or that China’s spy balloon was “more embarrassing than intentional,” a great many Americans simply don’t believe what Washington says.[5] Politicians have always engaged in mis-, dis-, and malinformation (sometimes they pretty much have to), but insisting that governments are a non-debatable source of truth is textbook Orwell.

Mainstream media. Given that U.S. media trust is at an all-time low, how can these organizations possibly serve as accepted arbiters of truth? The New York Times, The Washington Post, and similar news organizations used to be instinctively skeptical of the military, the FBI/CIA/NSA, and other powerful government agencies. Today, it often seems they are all part of the same organization (CNN and MSNBC are almost like a second home for many former intelligence officials).[6] By working so closely with this national security and media nexus, Meta, Alphabet, and Twitter aligned themselves much more with official authorities and power than with free speech and societal trust.

The scientific community. During the chaos of the pandemic, many mistakes were made. That’s to be expected. The problem was the censoring and demonizing of any critics, even though questioning, learning, and change have always been the essence of science. Similarly, the constantly repeated mantras of “the science is settled,” “follow the science,” and even “the science” are simply wrong. Like military generals, scientists shouldn’t make final policy decisions because they are ill-equipped to weigh the economic and societal trade-offs. Should we listen to “the science”? Certainly. Should we always follow it? Certainly not. Is there a single major field of science that is “settled”? Changing views are especially important in medicine, whose history has often been about correcting the terrible misunderstandings and malpractices of the past.

Self-appointed guardians. The names are hard to keep straight: The Digital Trust Project; Project Origin: Securing Trust in Media; the Digital Trust and Safety Partnership; the Disinformation Governance Board (disbanded); the Global Disinformation Index (GDI); and many other groups and organizations all say they want to improve digital trust. But they are mostly a mix of traditional institutions and media convinced that new media is the problem. They also tend to be reflections of partisan groupthink. For example, all of GDI’s most trusted sources are left-leaning; all of its least trusted are more on the right.[7]

It’s not that these four groups can’t be trusted; it’s that they all have their own agendas and can’t be trusted to be right all of the time. Questioning, skepticism, and debate are still essential for the free flow of accurate information and ideas and, ultimately, for sustained societal trust. Whether looking at government agencies, the major media, the scientific community, or supposedly neutral third parties, organizations—like humans—are often reluctant to admit mistakes and can be hostile to their critics. They are also inclined to prioritize their own power and position, even if long-term trust is undermined. They should never be above questioning.

Pressured or Inclined?

Theories vary as to why the tech sector aligned itself so closely with the official truth movement during Covid-19. Although many company actions were well-intentioned, there seem to have been less noble motivations as well. When President Biden accused social media firms of “killing people” by spreading misinformation, it was hard for companies not to respond seriously, especially given the looming possibility of changes to Section 230.[8] Others point to the liberal leanings of much of the Big Tech community and see the embrace of censorship as a prime example of Trump Derangement Syndrome (dislike of the former president is so intense that people do things they ordinarily wouldn’t). Lastly, if you hire an army of content moderators, don’t be surprised when they seek to increase and exercise their power.

All of these factors probably played a role. But the bottom line is that enormous censorship pressures were put on Big Tech, which largely succumbed. Of course, tech firms can legally ban just about any content they want. But the public now knows that quite a few claims that were labeled as “false” and often censored—the myocardial effects of Covid, the lab accident theory, the dubious efficacy of vaccinating infants and children, the Hunter Biden laptop—have turned out to be either true or quite possibly true.[9] How these serious errors will affect the perceptions of tech firms in the overall marketplace remains to be seen. But the recent Bud Light, Target, and Disney controversies show the potential risks.

Hopefully, the pandemic will prove to be a one-off crisis that led to uniquely extreme measures. However, there are many other topics where official truths are now often asserted. These include election integrity, climate change, school curricula, judicial fairness, crime and immigration statistics, systemic racism, changing gender norms, language usage, and the Ukraine War. In these and other areas, the tech sector has every reason to be skeptical whenever anyone insists that they know the truth and that it is up to Big Tech to enforce it. Missouri v. Biden is shaping up as an important test of whether it is even legal for the federal government to pressure private organizations to abridge otherwise perfectly legal speech.[10]

Debating Debates

Elon Musk’s purchase of Twitter is, of course, a potential game-changer regarding how free speech decisions are made. Despite all the initial chaos, Musk’s against-all-odds business successes—PayPal, Tesla/SolarCity, SpaceX/Starlink, and others—and his extraordinary energy and personal determination make him a hard man to bet against. Musk clearly sees that there is both a need—and a potentially big market—for live, in-depth debates with connected audiences, as opposed to the predictable sound bites that characterize most television debates. On the other hand, Musk has been credibly accused of blocking speech that he doesn’t like, and if that continues, it could easily undermine his efforts.[11] Much could depend on the actions and authority of Twitter’s new CEO, Linda Yaccarino.

Two recent events also suggest a shift toward more debate. On June 13, Sean Hannity used his entire one-hour Fox News show to debate California Governor Gavin Newsom. In theory, nothing should be newsworthy about this, except that such debates seldom happen within today’s highly partisan media. Fox viewers, who rarely hear a good word about Newsom, saw an articulate and personable leader more than capable of responding to his critics. Most Newsom supporters would never watch Sean Hannity, but if they saw this show, they would have seen a conservative TV personality trying hard to have a civil discussion with a liberal government official. Although both men often talked over each other, it made for pretty good television, which is why Hannity is trying to arrange a Newsom/DeSantis debate.

The recent efforts to arrange an in-depth debate about whether vaccines can cause autism have raised the role of debates to the popular culture level. Joe Rogan offered $100,000 to Peter Hotez—a pediatrician and expert on vaccines—for the charity of his choice if he would debate Robert Kennedy, Jr. Other individuals have stepped in to raise the pot to over $1 million. Thus far, Dr. Hotez has declined to participate in what he says would be a media spectacle on an already settled topic. But the pressure for someone to accept Rogan’s challenge continues.

More broadly, the refusal of the major traditional and social media companies to give Kennedy any direct airtime (and, in some cases, their outright censorship of him) while showcasing efforts to demean him is a classic example of how banning speech can make the target grow stronger. The idea that scientists should only debate other scientists through papers, conferences, and the like is not persuasive in areas where the public has a direct and compelling interest. If Kennedy is so totally wrong about the causes of autism, an extensive debate will expose this more effectively than largely counterproductive bans. As Kennedy has said, at no time in human history have the censors been the good guys.

Looking back, the Great Barrington Declaration should have also been much more openly debated instead of being almost completely banned online. After its launch in October 2020, thousands of scientists and health-care professionals endorsed the view that America’s approach to Covid should be more like Sweden’s: Isolate the elderly and vulnerable, keep schools and businesses open, and trust citizens to live their lives responsibly. As the pandemic recedes and the effects of lockdowns on children and the economy become more obvious, there is renewed debate about whether this approach would have been better for America. Respectfully presenting the Declaration’s views while also explaining why America—and most of the international community—was taking a much more restrictive approach would have led to greater societal trust than the banning and smearing that took place.

Political Speech Is Different

To be clear, none of the above says anything about how social media firms should or shouldn’t manage other speech-related challenges such as online decency, violence, bullying, hate speech, or use by children. While these are important issues, they are not central to the societal trust and political polarization debates. The Musk, Kennedy, and Great Barrington examples all involved formidable people in their fields who rarely, if ever, relied on hateful or extreme language and threats. In this sense, purely political speech seems a more manageable challenge than the other free speech domains listed above, which often involve gray areas and difficult judgment calls.

Understanding the need for a more skeptical and objective stance in political areas should be easy for Big Tech. After all, governments, the media, and third-party organizations routinely accuse the technology industry of being greedy, destroying privacy, discriminating by race and gender, manipulating consumers, monopolizing markets, and many other major shortcomings. These charges are, at a minimum, nuanced and worthy of in-depth debate. Often they are much more wrong than right. Yet many of these accusations are now close to being the conventional political and media wisdom. Imagine if the belief that the technology industry is doing more harm than good were deemed an official truth that should no longer be openly questioned. Would Big Tech feel obliged to enforce bans on itself?

About This Series

ITIF’s “Defending Digital” series examines popular criticisms, complaints, and policy indictments against the tech industry to assess their validity, correct factual errors, and debunk outright myths. Our goal in this series is not to defend tech reflexively or categorically, but to scrutinize widely echoed claims that are driving the most consequential debates in tech policy. Before enacting new laws and regulations, it’s important to ask: Do these claims hold water?

About the Author

David Moschella is a non-resident senior fellow at ITIF. Previously, he was head of research at the Leading Edge Forum, where he explored the global impact of digital technologies, with a particular focus on disruptive business models, industry restructuring and machine intelligence. Before that, David was the worldwide research director for IDC, the largest market analysis firm in the information technology industry. His books include Seeing Digital—A Visual Guide to the Industries, Organizations, and Careers of the 2020s (DXC, 2018), Customer-Driven IT (Harvard Business School Press, 2003), and Waves of Power (Amacom, 1997).

About ITIF

The Information Technology and Innovation Foundation (ITIF) is an independent, nonprofit, nonpartisan research and educational institute focusing on the intersection of technological innovation and public policy. Recognized by its peers in the think tank community as the global center of excellence for science and technology policy, ITIF’s mission is to formulate and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress. For more information, visit us at www.itif.org.

Endnotes

[1].       Scott Rose, et al., “Zero Trust Architecture,” NIST Special Publication 800-207, August 2020, https://doi.org/10.6028/NIST.SP.800-207.

[2].       David Moschella, “Digital Innovation Isn’t Undermining Societal Trust; It’s the Other Way Around,” ITIF Defending Digital Series, no. 14, February 1, 2023, https://itif.org/publications/2023/02/01/digital-innovation-isnt-undermining-societal-trust-its-the-other-way-around/.

[3].       Gabriel Hays, “Zuckerberg Says 'Establishment' Asked Facebook to Censor COVID Misinfo That Ended Up True: 'Undermines Trust',” Fox News, June 9, 2023, https://www.foxnews.com/media/zuckerberg-says-establishment-asked-facebook-censor-covid-misinfo-ended-true-undermines-trust.

[4].       Misinformation is information that is mistaken; disinformation is information that is knowingly false; and malinformation is information that, while technically true, is either misleading or “not helpful.”

[5].       President Joe Biden, “Biden Calls Afghanistan Withdrawal an 'Extraordinary Success' as He Defends Evacuation Mission,” YouTube, August 31, 2021, https://www.youtube.com/watch?v=LzLh2puvsQA; Julia Shapero, “Biden on Chinese Spy Balloon: ‘It Was More Embarrassing Than it Was Intentional’,” The Hill, June 17, 2023, https://thehill.com/homenews/administration/4055470-biden-on-chinese-spy-balloon-embarrassing/.

[6].       Jack Shafer, “The Spies Who Came in to the TV Studio,” Politico, February 6, 2018, https://www.politico.com/magazine/story/2018/02/06/john-brennan-james-claper-michael-hayden-former-cia-media-216943/.

[7].       Henry A. Brechter, “Misinformation Watch: 'Disinformation Risk Assessment' Lacks Transparency, Shows Bias Against the Right,” AllSides, February 20, 2023, https://www.allsides.com/blog/global-disinformation-risk-assessment-shows-media-bias-against-right.

[8].       Zolan Kanno-Youngs and Cecilia Kang, “‘They’re Killing People’: Biden Denounces Social Media for Virus Disinformation,” The New York Times, July 16, 2021, https://www.nytimes.com/2021/07/16/us/politics/biden-facebook-social-media-covid.html.

[9].       Carma Hassan and Helen Regan, “WHO Experts Revise Covid-19 Vaccine Advice, Say Healthy Kids and Teens Low Risk,” CNN Health, March 29, 2023, https://www.cnn.com/2023/03/29/health/who-updates-covid-vaccine-recommendations-intl-hnk/index.html.

[10].     Jenin Young, “America’s Censorship Regime Goes on Trial,” Tablet Mag, April 10, 2023, https://www.tabletmag.com/sections/arts-letters/articles/americas-censorship-regime-goes-on-trial-missouri-biden.

[11].     Katherine Tangalakis-Lippert, “Despite Calling Himself a 'Free Speech Absolutist,' Elon Musk Has a History of Retaliation Against Employees and Critics,” Insider, March 26, 2022, https://www.businessinsider.com/free-speech-absolutist-elon-musk-censors-employees-critics-2022-3.
