Going, Going, Gone? To Stay Competitive in Biopharmaceuticals, America Must Learn From Its Semiconductor Mistakes

Stephen Ezell
November 22, 2021
America has lost 70 percent of its semiconductor manufacturing capacity over the last three decades. That serves as a harsh lesson for policymakers: Failing to maintain a supportive policy environment could set up other high-tech industries to falter, too.

Executive Summary

Introduction

U.S. Advanced-Technology Industry Losses

Why Did the Federal Government Let This Happen?

U.S. Semiconductor Decline

The Biopharmaceutical Industry

Policy Recommendations

Conclusion

Endnotes

Executive Summary

The Issue

U.S. leadership in advanced-technology industries is never guaranteed. America once held dominant market shares in a long list of industries—including consumer electronics, machine tools and robotics, telecommunications equipment, and solar panels—only to see those leads significantly erode, and in some cases evaporate entirely. And because process and product innovation are so often joined at the hip, losing production capacity to overseas competitors often leads to loss of U.S. innovation capacity. Some contend it’s acceptable to cede leadership in innovation industries because America will just create new ones. But intensifying global competition, notably from China, now makes such indifference untenable.

America’s loss of semiconductor manufacturing capacity (which has fallen from 37 to 12 percent of global production over the past three decades) and its lag in cutting-edge chip development both are due in significant part to policy inattentiveness. This should serve as a warning for policymakers: Failing to maintain a policy environment that nurtures both innovation and domestic production capability risks sacrificing U.S. leadership in other advanced-technology industries, such as biopharmaceuticals.

ITIF’s Analysis and Findings

America’s experience with the semiconductor industry is especially telling because it’s an industry the United States wholly created and led, yet it lost leadership to Japanese competitors in the late 1970s and 1980s. The industry recovered its competitiveness by the early 1990s, in part through effective policies like SEMATECH and research and development (R&D) tax credits, but once again allowed that position to erode over the subsequent three decades, to such an extent that policymakers are now calling for a $50 billion investment in the form of the CHIPS Act to restore domestic semiconductor manufacturing capacity and innovation capability.

The tenuous nature of U.S. leadership in advanced-technology industries is also illustrated by America’s experience with biopharmaceuticals. Until the latter half of the 1970s, Europe led in this industry, creating more than twice as many new-to-the-world drugs as the United States. But by the 2000s, that had shifted, and the United States led the world. This shift was not due principally to differences in firm performance, but to a suite of policies that made Europe less competitive: Stringent regulations on biotechnology made Europe less attractive for biotech drug developers. European regulations also significantly limited drug company mergers, making it difficult for European firms to gain needed scale as the industry started to globalize after the 1970s. Finally, and most importantly, Europe began to impose stringent drug-price controls, which meant their U.S. competitors could earn and reinvest more in R&D.

It also helped that the United States adopted an array of favorable policies, including increased funding for the National Institutes of Health (NIH); tax incentives to encourage biomedical investment; and policies like the Bayh-Dole Act to encourage biopharma technology transfer from universities to companies. And unlike Europe, U.S. policymakers did not impose draconian price controls, so innovators could earn sufficient revenues to continue investing in future biomedical innovations.

By the 1990s, most experts in Europe were bemoaning the loss of EU biopharma competitiveness to the United States. But competitive advantage can be fleeting, and from 2003 to 2017 the United States lost at least 22 percent of its drug manufacturing capacity. The COVID-19 pandemic revealed increasing U.S. dependence on foreign suppliers, especially for many active pharmaceutical ingredients. And while the United States is still the global leader in biopharma innovation—as evidenced by the fact that the two most effective COVID-19 vaccines are American—other nations, especially China, are beginning to challenge that leadership. On top of this, many of the policies that enabled the United States to wrest leadership from the EU are now under serious threat—from stringent drug-price controls, weaker intellectual property rights, and fewer tax incentives for new drug development.

U.S. policymakers did not learn from a half-century of innovating and then losing a host of advanced industries to foreign nations. But the loss of competitive advantage in semiconductors can serve as a wake-up call. The lesson should be that policymakers can never take the health of America’s advanced-technology industries for granted, or even worse, impose policies that weaken that advantage. If they do, then the all-too-real risk is that they will find themselves a decade later having to contemplate a similar $50 billion package to restore the biopharmaceutical industry.

Summary of Policy Recommendations

  • For the semiconductor industry, pass the $52 billion CHIPS Act, which includes $39 billion in incentives for new fabs, $10 billion for R&D, and investment tax credits.
  • For the biopharmaceuticals sector, refrain from introducing drug price control schemes, such as those in H.R. 3, Build Back Better, or international reference-based price controls.
  • Increase NIH funding to at least $50 billion annually and expand R&D investments in biopharma process innovation through programs like Manufacturing USA and agencies such as NSF.
  • Restore biopharmaceutical manufacturing-stimulating tax credits like Section 936, expand the R&D tax credit, and establish an investment tax credit for new manufacturing plant and equipment, including for pharmaceuticals.

Introduction

The United States has a long history of being the first to develop innovative industries, but then losing production to other nations with more effective policies, and eventually, because of that loss, becoming an industry laggard. We have seen this in sectors such as consumer electronics, machine tools and robotics, nuclear reactors, telecommunications equipment, solar panels, and now potentially in semiconductors and biopharmaceuticals.[1] In every case, these losses were eminently preventable had there been effective federal policy.

The damage to the United States from this dynamic had been somewhat tolerable because the United States continued to develop new industries as it shed others. But given intense global competition, especially with China growing as a global adversary, the United States can no longer afford to take a hands-off posture. Too many other nations are now focused intensely on the innovation phase in foundational and emerging technologies, effectively limiting U.S. global market share. Many are subsidizing innovation-industry production, which in turn weakens both U.S. innovation and production.

This dynamic, coupled with the increasingly “asset-lite,” short-term orientation of many U.S. advanced-industry companies, diminishing U.S. government support for innovation industries, and increased U.S. government attacks on them (e.g., aggressive antitrust policies, tax increases, regulations, and price controls), will invariably mean the “UKization” of the United States economy—following the United Kingdom’s path, where the economy first lost production and then innovation capabilities in most industries, leaving the country struggling to compete globally on anything more than tourism and finance.[2]

The United States now faces a choice. Policymakers can continue to turn a blind eye to this increasingly damaging dynamic and be indifferent to U.S. industrial structure, believing “potato chips, computer chips—what’s the difference?” With a fundamentally weak U.S. industrial structure, each political camp has fallen back on shortcuts, hoping to artificially prop up stagnant living standards with tax cuts (if you are a Republican) or spending increases (if you are a Democrat). Failure to address the underlying problem of diminished U.S. competitiveness in key industries will mean lower real income and gross domestic product (GDP) growth, an even larger trade deficit (or a significant decline in the value of the dollar and an increase in the cost of imports), more foreign supply chain dependency, and a deterioration of the national defense technology base.

While it is too late to restore many advanced technology industries America has already lost, it’s by no means too late to retain U.S. biopharma innovation leadership and restore domestic production. 

Or policymakers can realize that U.S. leadership (including in both innovation and production) is by no means assured and that the United States has no natural “right” to these industries and the jobs they support because of some inherent U.S. advantages. Recognizing this means not only putting in place policies to ensure U.S. leadership in foundational and emerging technology industries, but at minimum, avoiding “shooting ourselves in the foot” and handing key industries to other nations by enacting harmful U.S. regulatory or tax policies.

The former scenario is increasingly confronting America’s biopharmaceutical industry today, where the United States still has enormous strengths, but could very well lose much of the industry in the next two decades if U.S. policy does not work to shore up America’s global competitiveness.

The good news is that it is now possible for policymakers to see the likely path if they don’t act; they just have to look at many U.S. advanced-technology industries in the past, like telecom equipment, where U.S. leadership has eroded or vanished. While it is too late to restore many advanced technology industries America has already lost, it’s by no means too late to retain U.S. biopharmaceutical innovation leadership and restore domestic production.

The bad news is that many policymakers pay almost no attention to the competitive position of the biopharmaceutical industry, focusing instead on policies that would accelerate its decline, such as weaker intellectual property (IP) protections, reduced tax incentives, and stringent price control measures. We have seen this scenario play out before: Europe followed this playbook in the 1980s and early 1990s, putting in place strict drug price controls and regulatory barriers to innovation, and the result was ceding half-century-long leadership to the United States. If the United States follows the EU’s path, as many in Congress appear to want to do, these measures will increase the odds of other nations, especially China, capturing market share in this critical innovation-based industry. The result will be fewer good jobs, a larger trade deficit, less drug innovation, higher overall health costs, and more foreign supply-chain dependency.[3]

This report starts by articulating why U.S. leadership in advanced-technology industries matters and why the U.S. position in such industries isn’t guaranteed. It then examines the factors that have contributed to the erosion of the U.S. semiconductor manufacturing base as a possible scenario for the future of the U.S. biopharmaceutical industry, and explores how similar trends have emerged in the latter sector. It contends that policymakers must be committed to maintaining a supportive policy environment for America’s innovation-based industries, closing with a set of recommendations for maintaining U.S. biopharmaceutical innovation and production leadership.

U.S. Advanced-Technology Industry Losses

The United States emerged from World War II (WWII) as the world’s leading industrial economy. This was not, despite the popular view, principally due to the destruction of foreign nations’ production capabilities during the war, which were largely “built back better” by the late 1950s. Rather, the United States had built up core technology strengths from the Civil War to WWII, and during the Cold War, massive federal investments accelerated those capabilities.

It’s easy to forget that in virtually every advanced industry, the United States dominated through the 1960s. But there is a litany of industries where the United States once held dominant market share in the post-WWII period, only to see those leads significantly erode, and in some cases evaporate entirely. Consider machine tools. In 1965, American machine tool manufacturers held 28 percent of the world market, a share that has since cratered to less than 5 percent, as machine tools transformed from a U.S. export industry to an import industry—one that in 2018 imported more than twice as much ($8.6 billion) as it exported ($4.2 billion).[4]

Similarly, Western Electric (which became Lucent) once commanded 59 percent of the global market for telecommunications equipment, but America had lost the industry entirely by the first decade of this century, in significant part because of a long legacy of policy failures.[5] In 1996, four of the top five global personal computer (PC) makers were headquartered in the United States.[6] Today, only three of the top six PC producers are U.S. headquartered, with the largest, Lenovo (which acquired IBM’s PC business), headquartered in China. The United States once led in consumer electronics, but other nations, especially Japan and South Korea, now dominate, with China potentially taking that position going forward.

There is a litany of industries where the United States once held dominant market share in the post-World War II period, only to see those leads significantly erode, and in some cases evaporate entirely.

Elsewhere, the United States went from accounting for over 70 percent of commercial jet aircraft exports in 1991 to just 39 percent by 2009.[7] And the United States is losing share even in many new technologies. For instance, from 2006 to 2013, the United States’ share of the global solar photovoltaics cell market fell by nearly 75 percent.[8]

Evidence of faltering U.S. advanced manufacturing competitiveness shows up clearly in the trade statistics: the United States consistently held a positive trade balance in advanced technology products (ATP) through the 1980s and 1990s, but the balance turned negative in the 2000s, and America’s annual ATP deficit is now nearing $200 billion. (See figure 1.) Using a broader definition, the National Science Foundation (NSF) estimated a U.S. trade deficit in advanced technology products of over $300 billion in 2020.[9]

Figure 1: U.S. trade balance in advanced technology products, 2000–2020 ($ millions)[10]


Why Did the Federal Government Let This Happen?

To be sure, the loss of U.S. leadership in various advanced-technology industries has many causes, including miscalculations made by businesses. But all too often the underlying cause has been foreign governments “buying industry share” while U.S. policymakers stood on the sidelines, ignoring the damage and believing that whatever happened was simply the workings of the free market.

There are a number of reasons for this somnambulism. One is economic pundits contending that America does not need an advanced industrial base. For instance, when asked how much manufacturing the United States could really lose and still be economically healthy, the head of one Washington, D.C.–based international economics think tank replied: “Really? Really we could lose it all and be fine.”[11] Likewise, former Obama economic policy head Larry Summers stated: “America’s role is to feed a global economy that’s increasingly based on knowledge and services rather than on making stuff.” It’s hard to blame policymakers for being inattentive to the state of U.S. advanced-industry manufacturing when so many economists and think tanks tell them it doesn’t matter.

In other cases, the dominant narrative of “we’re America so we’re always destined to lead” has meant that policy can focus on other matters, such as regulating drug prices. After all, in this view innovation takes care of itself. Related to this is an unwillingness to believe, or accept, that many nations are “buying” global market share with subsidies and, in China’s case, pursuing innovation-mercantilist, “power trade”-based economic and trade strategies specifically designed to wrest control of advanced industries from the United States and other nations.[12] As Larry Summers purportedly said, if the Chinese are dumb enough to subsidize key industries, we should thank them for cheaper imports. This, of course, ignores that most consumers are also workers, and that the United States cannot be dependent on China for many advanced products if it wants to maintain its own autonomy.

Related to this is the widespread view that it doesn’t matter if the United States loses global production and market share in advanced-technology industries—America will simply invent new ones, as it did with biotechnology or the Internet economy (where by 2015 U.S.-headquartered digital platform companies held an estimated two-thirds of global market capitalization).[13] If America loses leadership in these, the narrative goes, it’ll just build new ones in areas like artificial intelligence (AI), quantum computing, nanotechnology, hypersonic technologies, etc. Some even go so far as to say that it is good that the United States sheds these advanced industries, as it’s proof of some natural evolution to even more advanced technologies.

But that process of shedding somewhat mature advanced-technology industries and growing the next generation of new ones is no longer as straightforward as it once was. As recently as two decades ago, few nations outside of East Asia (i.e., Japan, Singapore, South Korea, and Taiwan) had robust advanced industry and technology strategies designed to capture market share in emerging, next-wave industries and technologies. Today, all advanced countries, and many emerging ones, do. And of course, China is the most aggressive and is gaining in emerging technologies like quantum computing, AI, hypersonics, and biotechnology, backed by aggressive government support policies.[14] For example, at least two dozen nations have national AI competitiveness strategies, and while a 2019 Center for Data Innovation report found the United States remains in the lead overall across six categories of AI metrics—talent, research, development, adoption, data, and hardware—it found China rapidly catching up with the United States.[15]

Moreover, in 2019 the Information Technology and Innovation Foundation (ITIF) examined 36 indicators of China’s scientific and technological progress vis-à-vis the United States a decade ago versus today, to get a sense of where China is making the most progress, and to what extent it is closing the innovation gap with the United States. This analysis found that China has made progress on all indicators, and in some areas it now leads the United States. In fact, averaging all the indicators, China has cut the gap with the United States by a factor of 1.5 from the base year to the most recent year.[16] In other words, in the span of about a decade, China has made dramatic progress in innovation relative to the United States. In short, there’s no guarantee the United States and its enterprises will lead the industries of the future.

Retaining and recapturing jobs in manufacturing and other advanced-technology industries must represent a central focus of any worker-centered U.S. trade and economic strategy.

Still others argue that if the United States for some reason needs to regain lost advanced-technology production, it should be able to do so relatively easily, especially if the dollar were to significantly decline in value, making U.S. exports more price competitive. But the reality is that advanced industries are not simple ones like call centers. Once leadership in advanced-technology industries is lost, it’s incredibly difficult and expensive to reconstitute and regain, if that’s even possible. As one Industry Week article notes, “Some of the industries [the U.S. is losing competitiveness in], such as textiles, apparel, furniture, hardware, magnetic media, computers, cutlery, hand tools, and electrical equipment, have been declining for many decades and are probably beyond recovery.”[17] For instance, if Boeing were ever to go out of business, the United States could not rely on market forces, including a steep drop in the value of the dollar, to later re-create a domestic civil aviation industry. To do so would require not only creating a new aircraft firm from scratch but also rebuilding the complex web of suppliers, professional associations, university programs in aviation engineering, and other knowledge-sharing organizations. With fewer aviation jobs, fewer students would become aeronautical engineers, making it difficult to rebuild capacity. If a country loses the intangible knowledge about how to build an airplane, it cannot reconstitute it without massive government subsidies and almost complete domestic purchase requirements.[18]

Finally, some argue that advanced industries don’t employ large numbers of workers and that other industries, especially low-paid service industries, are our future. But the U.S. challenge is not the number of jobs, but their quality. Manufacturing and other advanced-technology industries represent a source of high-skill, high-value-added, high-paying jobs. That’s why U.S. manufacturing jobs paid 19.2 percent more than the average U.S. job in 2020, and why advanced-technology industries paid 75 percent more.[19] Similar research has found that the earnings premium for jobs in export-intensive U.S. manufacturing industries averages 16.3 percent.[20] Retaining and recapturing jobs in advanced-technology industries must represent a central focus of any worker-centered U.S. trade and economic strategy.

U.S. Semiconductor Decline

Semiconductors, sometimes referred to as integrated circuits (ICs) or microchips, consist of transistors that amplify or switch electronic signals and electrical power, and thus constitute an essential component of electronic devices, powering everything from automobiles and airplanes to medical devices and home appliances.[21] Leading-edge semiconductors contain circuits measured in nanometers (“nm,” a unit of length equal to one billionth of a meter), with the newest fabrication facilities producing semiconductors at the 5 nm and 3 nm scales.[22] The global semiconductor industry, itself valued at $551 billion in 2021, helps create $7 trillion in global economic activity and is directly responsible for $2.7 trillion in total annual global GDP.[23]

The Rise of U.S. Semiconductor Leadership

America’s experience with the semiconductor industry shows that, as in any other advanced-technology industry, leadership is never assured: indeed, the United States created and led, lost, and regained global leadership in semiconductor innovation and production, only to see it, in some dimensions, increasingly slip away again.

The invention of the semiconductor was a uniquely American phenomenon.[24] In 1947, Bell Labs’ John Bardeen, Walter Brattain, and William Shockley invented the transistor, a semiconductor device capable of amplifying or switching electronic signals and electrical power, for which they would win the Nobel Prize in Physics in 1956. Bell Labs could support this fundamental yet groundbreaking research because it was part of the AT&T monopoly and had the luxury and resources to focus on long-term technical challenges.[25]

The United States created and led the semiconductor industry, then lost leadership and regained it, only to see it, in some dimensions, increasingly slip away again.

Because of his dissatisfaction at Bell Labs, Shockley moved to what is now Silicon Valley to start Shockley Semiconductors, which soon spun off talent that started other firms, including Fairchild Semiconductor. In the late 1950s, Jack Kilby at Texas Instruments and Robert Noyce and a team of researchers at Fairchild each independently pioneered the integrated circuit, placing multiple transistors on a single flat piece of semiconductor material and giving rise to the modern form of the “semiconductor chip.”[26] In 1968, Robert Noyce and Gordon Moore—who in leaving Shockley Semiconductor had been among the founders of Fairchild Semiconductor in 1957—founded Intel, with the help of venture capital (VC) provided by Arthur Rock, a seminal moment that helped give rise both to Silicon Valley and to the modern VC firm and its capitalization model.[27] But without the presence of a key buyer—in this case, the U.S. Air Force—the picture would have been much different. The Air Force needed high-performance semiconductors for missiles, jets, and early-detection systems like radar, and it was able to pay the higher prices involved. This core customer enabled firms like Fairchild to gain enough learning and scale to keep the technology progressing until prices and performance fit the commercial market. No other nation could match that combination of risk-taking (individuals leaving good jobs to start their own companies), venture capital, and lead customer (the Defense Department).

Throughout the 1960s and 1970s, U.S. semiconductor enterprises, led by Texas Instruments, Fairchild Semiconductor, National Semiconductor, and Intel, among others, “dominated worldwide production of semiconductors.”[28] By 1972, the United States accounted for 60 percent of global semiconductor production (and 57 percent of consumption).[29] The industry was very much one in which innovation and scale provided important leads that would be difficult for foreign firms to match.

The Japan Challenge

However, beginning in the latter half of the 1970s and into the 1980s, U.S. semiconductor industry competitiveness began to wane, particularly in the face of withering competition from Japanese players—notably Fujitsu, NEC, Hitachi, Mitsubishi Electric, and Toshiba—and especially in the dynamic random access memory (DRAM) chip sector. By the mid-1980s, Japanese players had captured the majority of the global DRAM market. By the late 1980s, across all memory, logic, and analog chips in the global semiconductor market, Japan’s global market share in terms of sales eclipsed 50 percent while the United States’ fell to less than 40 percent. (See figure 2.) Japan’s burgeoning competitiveness was the result both of astute technical engineering and of intense government support. The latter included robust research and development (R&D) investment in the sector, subsidized borrowing, and tax incentives for investment.[30] Japan also benefited from protectionist trade policies, including market-access restrictions that shielded Japanese producers from U.S. and other foreign DRAM competitors so Japanese players could reach scale at home and then export into global markets, abetted by below-cost pricing in foreign markets—a playbook aggressively copied by China today across a range of high-tech industries.[31]

Had Japan been a traditional free-market economy with firms focused on profit maximization, it is likely that it would not have been able to make inroads into the U.S. market share. The reason is simple: catching up to U.S. producers in scale and innovation was only possible through a combination of protection from competition, government subsidies (including keeping the value of the yen lower than it otherwise would have been), and a willingness by companies to suffer losses for a long time. As Charles Kaufman writes, “The Japanese chip makers could withstand continuing losses because all were units of keiretsu trading groups with deep pockets. They shared a determination to use their excess capacity to gain prized semiconductor market share no matter what the cost. It has been estimated that the Japanese semiconductor industry lost over $4 billion through memory chip dumping during the 1980s.”[32] In contrast, by the 1980s, the role of the U.S. government in the industry was minimal, the United States had no trade protection, and U.S. semiconductor companies were keenly focused on short-term profits. It was this divergence in practices that led the Japanese competitors to gain market share so quickly.

Figure 2: Global semiconductor market share, by revenues, 1982–2019[33]


However, while Japan’s innovation mercantilist practices were real, so too was the reality that U.S. semiconductor manufacturing practices had faltered: Japanese players were producing more-reliable, less-defective chips at a lower price point than their U.S. competitors.[34] By 1987, the Defense Science Board’s Task Force on Semiconductor Dependency found U.S. leadership in semiconductor manufacturing to be rapidly eroding and that not only was “the manufacturing capacity of the U.S. semiconductor industry … being lost to foreign competitors, principally Japan … but of even greater long-term concern, that technological leadership is also being lost.”[35]

In response, in 1987 the U.S. industry and government collaborated to establish SEMATECH, a public-private research consortium that sought to improve U.S. industry’s technological position by developing advanced manufacturing technology, with a particular focus on increasing the speed and quality of chip production systems.[36] Congress provided approximately $870 million, principally channeled through the Defense Advanced Research Projects Agency (DARPA), from FY 1986 to 1996, with those contributions matched by contributions from 14 industry participants.[37] SEMATECH focused on applied R&D, and its only product was generic manufacturing technology, not semiconductors themselves. Notable SEMATECH achievements included that by 1993 U.S. device makers could manufacture chips at 0.35 microns using all-American-made tools, and by 1994 the United States had recaptured semiconductor device market share leadership over Japan (48 percent to 36 percent).[38] SEMATECH also set a goal of reducing the time between generational advances in chip miniaturization from three years to two, a goal the industry has achieved consistently since the mid-1990s.[39] According to the National Academy of Sciences, “SEMATECH was widely perceived by industry to have had a significant impact on U.S. semiconductor manufacturing performance in the 1990s.”[40]

Even before SEMATECH, in 1982, the Semiconductor Research Corporation (SRC) formed as a cooperative for implementation of research activities responding to the generic needs of the integrated circuit industry.[41] As SRC CEO Ken Hansen explains, “SRC launched in 1982 with a mission to fund university research in the pre-competitive stage to leapfrog the technology disadvantage we felt at the time and to develop a workforce pipeline of well-educated Ph.D. students working on industry-relevant topics.”[42] SRC’s experience has shown that university research can provide substantial contributions to the advancement of semiconductor technology as well as provide additional workforce to enhance the industry, university, and government technical infrastructure of the United States.[43] SRC continues today, now running the Joint University Microelectronics Program (JUMP), which focuses on high-performance, energy-efficient microelectronics in partnership with DARPA and also the nano-electronic Computing Research program in partnership with NSF and the National Institute of Standards and Technology (NIST).[44]

The United States has lost over 70 percent of its share of global semiconductor manufacturing capacity over the past three decades.

The government took additional steps to bolster the competitiveness of the U.S. semiconductor industry, including the 1984 Cooperative Research and Development Act, the Federal Technology Transfer Act of 1986, the Technology Transfer Improvements and Advancement Act, the Technology Transfer Commercialization Act, and the Omnibus Trade and Competitiveness Act in 1988.[45] On the trade front, in 1986 the U.S. government negotiated the U.S.-Japan Semiconductor Agreement, which called for an end to Japanese dumping and (at least partial) opening of the Japanese market to foreign producers.[46]

At the same time, U.S. firms took needed action to restore their competitiveness. Perhaps the most important was Intel’s decision to specialize in logic chips to power the emerging PC revolution in the 1990s.

In short, the recovery of the U.S. semiconductor industry in the 1990s—which played a pivotal role in laying the groundwork for the Internet era and the advent of the modern digital economy—was the result of intentional and concerted public policies, effective public-private partnerships, and industry executives’ willingness to make long-term investments to restore the sector’s competitiveness.

The U.S. Semiconductor Industry Today

While the United States retains many strengths in the semiconductor industry, especially on the R&D and innovation side of the ledger, it has faltered considerably with regard to domestic semiconductor production.

Over the last four decades, U.S.-headquartered semiconductor firms have built many more fabs outside the United States than inside, in large part due to generous production subsidies offered by foreign governments seeking a share of this critical industry. No U.S. semiconductor chief executive officer could have kept the job had he or she not taken advantage of these subsidies. By 2021, the U.S. share of global semiconductor production had fallen from 37 percent in 1990 to 12 percent. (See figure 3.)

Figure 3: Global manufacturing capacity by location[47]


At current trends, with just 6 percent of new global semiconductor capacity development expected to be located in the United States over this decade, absent effective policy intervention, the U.S. share of global semiconductor manufacturing capacity is expected to fall to 10 percent by 2030. In summary, the United States has lost over 70 percent of its share of global semiconductor manufacturing capacity over the past three decades. Conversely, whereas China held barely 1 percent of global semiconductor manufacturing capacity in 2000, by 2010 this share had grown to 11 percent, and to 15 percent by year-end 2020, with that share forecast to increase to 24 percent by 2030.

While global production has grown offshore and declined in the United States, U.S. firms were still able to lead in innovation, at least until recently. U.S. semiconductor enterprises’ R&D intensity, at 18.6 percent, outpaces that of global peers: European-headquartered firms stand at 17.1 percent, Japanese firms at 12.9 percent, South Korean firms at 9.9 percent, and Chinese firms at 6.8 percent.[48] (See figure 4.)

Figure 4: Global investment by firms on semiconductor R&D as a share of sales, by country/region, 2020[49]


When it comes to patenting, the picture is less robust. While U.S.-headquartered enterprises (or other entities) received 44 percent of the semiconductor patents awarded by the U.S. Patent and Trademark Office (USPTO) in 1995, by 2018 this share had fallen to 29 percent. In contrast, over that period, Taiwanese-based applicants saw their share increase from 4 to 17 percent, South Korean ones from 4 to 14 percent, and Chinese ones from less than 1 to 6 percent. (See figure 5.)

Figure 5: Share of USPTO semiconductor patents granted by country/region, 1995 and 2018[50]


The erosion of U.S. innovation capabilities has been particularly apparent with regard to advanced chip production. While Intel remains the world’s leader in logic chip market share and America’s leading logic chip maker, it has slipped off the leading pace. TSMC is now producing 5 nm chips and expects to enter the volume production phase for 3 nm chips by the second half of 2022.[51] In contrast, in July 2020, Intel announced that it had fallen at least one year behind schedule in developing its next major advance in chip-manufacturing technology.[52] (That is, in moving from 10 nm to 7 nm technology; although Intel’s 7 nm architecture will be roughly equivalent in performance to TSMC’s 5 nm, and it’s important to remember that while process node size is indicative, it’s not necessarily reflective of the actual performance features of a given chipset.) Nevertheless, Intel has vowed to catch up, and in August 2021 it released an aggressive technology roadmap that promises significant improvements in technology performance, efficiency, and architecture in upcoming Intel chipsets through 2025. The company plans to ship Intel 4, its first 7 nm chipset, starting in early 2023 and to follow with Intel 3, which it expects to deliver an 18 percent performance increase over Intel 4, in the latter half of 2023. By 2024, Intel plans to release Intel 20A—with the “A” referring to an angstrom, a unit of length equal to 0.1 nanometers—based on significantly new chip architectures.[53]

But the reality is that the vast majority of the world’s most sophisticated semiconductor logic chips—those at sub-10 nm process nodes—are manufactured in Asia, where Taiwan (largely due to TSMC) held a 92 percent share in 2019 and South Korea the remaining 8 percent. (See figure 6.) In other words, in an industry it invented, the United States has clearly fallen off the leading edge of domestic semiconductor manufacturing.

Figure 6: Share of global semiconductor wafer manufacturing capacity by region (2019, %)[54]

One reason for this falloff is that the costs of advanced fab development are so great—it costs $20 billion or more to build the latest 5 or 3 nm fabs—that it’s hard for American semiconductor makers to justify these expenses, especially as U.S. financial markets reward asset-light companies that shed hard capital assets. (See figure 7.) That’s a large part of the reason why, whereas almost 30 companies manufactured integrated circuits at the leading edge of technology 20 years ago, only 5 do so today (Intel, Samsung, TSMC, Micron, and SK Hynix). But another reason is that the migration of production capabilities overseas has hurt innovation capabilities.

Figure 7: Average cost to build a new foundry/logic fab (US$, billions)[55]


Explaining the Decline in Leading-Edge U.S. Semiconductor Production

A number of factors explain faltering U.S. leadership at the leading edge of semiconductor production. To be sure, a great degree of it stems from ever-intensifying foreign competition, which has in many cases enjoyed considerable government support and investment, and from disruptive innovators like Taiwan’s TSMC, which pioneered the pure-play foundry business model (which in turn enabled fabless chip designers). However, faltering U.S. leadership has also resulted from failures, or at least errors or miscalculations, on both the public and private sides of the U.S. ledger.

Foreign Investment Incentives

Frankly, other countries are willing to subsidize the building of semiconductor fabs, whereas the United States is largely not. That explains much of the U.S. decline. Many countries help companies defray the high costs of building a fab, with incentives that reduce up-front capital expenditures on land, construction, and equipment and that can also extend to recurrent operating expenses such as utilities and labor. Foreign government incentives may offset from 15 to 40 percent of the gross total cost of ownership (pre-incentives) of a new fab, depending on the country.[56] The 10-year total cost of ownership (TCO) of U.S.-based semiconductor fabs is 25 to 50 percent higher than in other locations, with government incentives accounting for 40 to 70 percent of the U.S. TCO gap.[57] (See figure 8.)

Figure 8: Estimated 10-year TCO of reference fabs by location (U.S. indexed to 100)[58]

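To make these ranges concrete, consider a purely illustrative midpoint calculation (the midpoints are assumptions chosen for illustration, not figures from the underlying study). Index a reference foreign fab’s 10-year TCO to 100. A U.S. fab at the midpoint of the 25 to 50 percent range then costs 137.5, and incentives at the midpoint of the 40 to 70 percent range account for roughly 21 of those 37.5 index points:

\[
\mathrm{TCO_{US}} \approx 100 \times 1.375 = 137.5, \qquad 0.55 \times (137.5 - 100) \approx 20.6
\]

On these assumed midpoints, matching foreign incentive levels alone would close more than half of the U.S. cost disadvantage.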

China’s semiconductor industry has received over $170 billion worth of government subsidies, which China has used both to stand up entirely new companies from scratch and to finance the acquisition of foreign competitors.

Other countries are willing to subsidize the building of semiconductor fabs, whereas the United States largely is not. That explains much of the U.S. decline.

In many of these countries, such as Japan, South Korea, and Singapore, such incentive packages are offered at the national ministry-of-economy level to attract globally mobile semiconductor investment (in China, such packages are offered at the national, provincial, and regional levels). For example, South Korea recently announced a program of 40 to 50 percent tax credits for chip R&D and 10 to 20 percent tax credits for facility investments, as well as low-cost loans for such investments.[59]

China is even more generous. Its semiconductor industry has been the recipient of over $170 billion worth of government subsidies, which China has used both to stand up entirely new companies from scratch and to finance the acquisition of foreign competitors.[60] An Organization for Economic Cooperation and Development (OECD) study of 21 international semiconductor companies from 2014 to 2018 found that Chinese companies received 86 percent of the below-market equity provided by their nations’ governments.[61] Considering state subsidies at the firm level—that is, as a percentage of semiconductor manufacturers’ revenue from 2014 to 2018—Chinese enterprises clearly led their foreign competitors by an order of magnitude. State subsidies accounted for slightly over 40 percent of Semiconductor Manufacturing International Corporation’s (SMIC) revenues over this period, 30 percent for Tsinghua Unigroup, and 22 percent for Hua Hong. (See figure 9.) In contrast, this figure was minimal for TSMC, Intel, and Samsung, for each of which identifiable state subsidies accounted for 3 percent or less of revenues over this period. Of particular import, the OECD study found that there “notably appears to be a direct connection between equity injections by China’s government funds and the construction of new semiconductor fabs in the country.”[62]

Figure 9: State subsidies as a percentage of revenue for chip fabs, 2014–2018[63]


Another example pertains to China’s efforts to build leadership in memory technologies such as DRAM and NAND. For instance, Yangtze Memory Technologies (owned by the state-backed Tsinghua Unigroup) announced that by year-end 2020 it had tripled its production to 60,000 wafers per month (wpm), equivalent to 5 percent of global output, at its new, $24 billion plant in Wuhan.[64] Similarly, ChangXin Memory Technologies, also a state-funded company, announced that in 2020 it would quadruple production of DRAM chips to 40,000 wpm (or 3 percent of world DRAM output) at its $8 billion facility in Hefei.[65]

The 10-year TCO of U.S.-based semiconductor fabs is 25 to 50 percent higher than in most other countries, with government incentives accounting for 40 to 70 percent of the U.S. TCO gap.

The bottom line: Because of this intense competition, if a nation wants to maintain or expand its semiconductor production, it must pay for it. To date, the United States has not been willing to do that, and it has paid the price. To be sure, some U.S. states have put together elements of incentive packages, but because of state fiscal constraints they are quite modest in size. The historical inability to offer attractive incentive packages explains why two of the most significant elements in the CHIPS package are a $10 billion federal program that matches state and local incentives offered to a company to build a semiconductor foundry with advanced manufacturing capabilities, and a 40 percent investment tax credit for semiconductor equipment and facility expenditures.

Foreign Innovation Mercantilism

Some nations, especially China, complement subsidies with unfair trade and economic policies, such as forcing technology or IP transfer and local production as a condition of market access, as well as outright IP theft. Japan practiced some of these measures (e.g., closed markets, product dumping) in the 1980s. But China’s actions make Japan’s look like child’s play, as ITIF writes in “Moore’s Law Under Attack: The Impact of China’s Policies on Global Semiconductor Innovation.”[66]

For instance, the acquisition of foreign semiconductor technology through IP theft has been a key pillar of Chinese strategy. One assessment found that China’s SMIC alone has accounted for billions of dollars in semiconductor IP theft from Taiwan.[67] China also regularly coerces technology transfer in the semiconductor industry. As the OECD observed, “[T]here is also unease in the [semiconductor] industry regarding practices that may amount to forced technology transfers, whereby government interventions create the conditions where foreign firms may be required to transfer technology to local partners or to share information that can be accessed by competitors.”[68] A 2017 survey conducted within the semiconductor industry by the U.S. Department of Commerce’s Bureau of Industry and Security found that 25 U.S. companies—which accounted for more than $25 billion in annual sales—had been required to form joint ventures, transfer technology, or both as a condition of Chinese market access.[69]

U.S. Innovation System and R&D Weaknesses

Part of the reason the United States has fallen off the leading edge in semiconductor manufacturing and performance stems from its own missteps, including with regard to R&D and innovation policy. Here, perhaps the most fundamental lacuna has been faltering federal R&D investment. In 1978, U.S. federal investment in semiconductor R&D totaled 0.02 percent of GDP. While this was on par with private levels 40 years ago, by 2018 federal semiconductor R&D investment had risen by only one-hundredth of a percentage point, to about 0.03 percent of GDP. Meanwhile, U.S. private investment in semiconductor R&D has steadily grown over the last 40 years, totaling about 0.19 percent of GDP in 2018.[70]
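To put those GDP shares in rough dollar terms, assume (purely for illustration; the GDP figure is an approximation of mine, not from the cited source) 2018 U.S. GDP of about $20.5 trillion:

\[
0.0003 \times \$20.5\ \text{trillion} \approx \$6\ \text{billion (federal)}, \qquad 0.0019 \times \$20.5\ \text{trillion} \approx \$39\ \text{billion (private)}
\]

On that approximation, private semiconductor R&D now outweighs federal semiconductor R&D investment by roughly six to one.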

In industries like biotechnology and semiconductors, product and process innovations are increasingly joined at the hip, and if production leaves U.S. shores, it hampers both process and product innovation, leading to a “make there, innovate there” paradigm.

Alex Williams and Hassan Khan point to deeper structural problems in the organization of the U.S. science and innovation policy system. They contend that in the 1990s the United States essentially tried to conduct “science policy” on the cheap, where “policy privileged research, design, and ideas over implementation, production, and investment.”[71] Ultimately, Williams and Khan’s critique is quite similar to those of Bill Bonvillian and Suzanne Berger at MIT (and others like Greg Tassey, former NIST senior economist): over time, process and product innovations in industries like semiconductors or biotechnology become joined at the hip and inseparable from one another, so as manufacturing (i.e., process innovation) increasingly left American shores, America fell further behind on product innovation as well.[72]

The Biopharmaceutical Industry

The U.S. biopharmaceutical industry increasingly looks like it is on the same path the U.S. semiconductor industry has traveled: It is a laggard in production, and it faces growing threats to its innovation leadership. Just as U.S. semiconductor leadership can no longer be taken for granted, neither can continued U.S. leadership in the life sciences, especially if U.S. policymakers fail to respond to foreign governments’ promotion of the industry within their borders, weaken positive programs in the United States, and enact harmful policies (e.g., weaker IP protections and stringent drug price controls).

As this section will show, the United States turned itself from a global biopharmaceutical laggard into the leader, helped considerably by harmful European policies, which U.S. policymakers now appear to want to copy. Taking the industry for granted and believing that government can impose regulations with no harmful effect—common policy views in Washington—will almost certainly mean passing the torch of global leadership to other nations, especially China, within a decade or two. This section begins by examining how a series of poor policy choices from the 1980s through the early 2000s cost Europe its leadership in the global pharmaceuticals industry.

The United States turned itself from a global biopharmaceutical laggard into the innovation leader, helped considerably by harmful European policies, which U.S. policymakers now appear to want to copy.

Learning From Europe’s Loss of Pharmaceuticals Industry Leadership

Beyond the U.S. semiconductor industry, U.S. policymakers also can look to Europe’s experience to see what happens to an industry when a supportive policy environment for innovation isn’t maintained and harmful policies are put in place. Europe’s introduction of intensive drug price controls, heavy-handed drug price negotiation tactics, regulations limiting biotechnology research, and limitations on mergers all played roles in undermining the competitiveness of Europe’s biopharmaceuticals sector and helping set the table for the United States to wrest global leadership.

Europe was once the world’s pharmaceuticals industry leader. Between 1960 and 1965, European companies invented 65 percent of the world’s new drugs, and in the latter half of the 1970s, European-headquartered enterprises introduced more than twice as many new drugs to the world as did U.S.-headquartered enterprises (149 to 66).[73] In fact, throughout the 1980s, fewer than 10 percent of new drugs were introduced first in the United States.[74] (See figure 10.)

Figure 10: U.S. share of new active substances launched on the world market, 1982–2019[75]


And, as recently as 1990, the industry invested 50 percent more in Europe than in the United States.[76] As Shanker Singham of the Institute of Economic Affairs notes, “Europe was the unquestioned center of biopharmaceutical research and development for centuries, challenged only by Japan in the post-war period.”[77] As of 1990, European and U.S. companies each held about a one-third share of the global drug market.

But leadership began to shift in the 1990s. By 2004, Europe’s share had fallen to 18 percent, while the U.S. share jumped to an astounding 62 percent.[78] From 1990 to 2017, pharmaceutical R&D investment in the United States increased almost twice as fast as in Europe.[79] In fact, from the early 1970s to the mid-1990s, biopharma R&D investment by America’s top firms went from about one-half of European firms’ levels to over three times more.[80] As Nathalie Moll of the European Federation of Pharmaceutical Industries and Associations (EFPIA) wrote in January 2020:

The sobering reality is that Europe has lost its place as the world’s leading driver of medical innovation. Today, 47 percent of global new treatments are of U.S. origin compared to just 25 percent emanating from Europe (2014–2018). It represents a complete reversal of the situation just 25 years ago.[81]

By 2014, nearly 60 percent of new drugs launched in the world were first introduced in the United States, an indication both that more were being invented in the United States and that drug companies from Europe and elsewhere were introducing new drugs in America first because that’s where they could recoup their investments.

This dramatic shift away from Europe serving as the “world’s medicine cabinet” did not happen principally due to deficient corporate strategy or management. Instead, poor public policy in Europe and superior policy in the United States made the difference. This was particularly the case when it came to drug price controls. As one report explained in 2002, “the heart of pharma’s problem in Europe is the market’s inability to ‘liberate the value’ from its products.”[82] This was a reference to the “complex maze of government-enforced pricing and reimbursement controls” that “depressed pharma prices to the point where some companies now believe it is just not economical to launch new products in certain European countries.”[83] Starting in the 1980s, many European nations began to introduce drug price controls, including a combination of international (and even regional) reference-pricing regimes, global prescribing budgets (under which provider organizations are at risk for medical spending above a predetermined budget), profit controls (which set an upper limit on the amount an insurer could pay for groups of identical or equivalent drugs), and rules restricting the use of more-expensive drugs to hospitals, among many other types.[84] Today there are “fixed reimbursement prices in France; set reference prices in Germany; and profit limits in the United Kingdom.”[85] As one 2006 article noted, policymakers in many European countries supported such drug price controls to meet “stated pharmaceutical policy goals to keep pharmaceutical price increases at or below the general rate of consumer price inflation” (despite the fact that “economic efficiency could easily justify real pharmaceutical price increases because pharmaceutical demand rises more than proportionately with income”).[86]

European countries’ extensive use of drug price controls began in earnest in the early 1980s and accelerated in the 1990s. For instance, as one 2003 report explained, “For the aim of fiscal consolidation, price-freeze and price-cut measures have been frequently used [in European nations] in the 1980s and 1990s.”[87] As that report elaborated, “in the 1970s, most European countries financed medicines indiscriminately,” but “starting in the 1980s, positive or negative lists were introduced” (these being lists defining the drugs eligible for reimbursement).[88] By the late 1980s, manufacturers were free to set prices in only three European countries: Germany, Denmark, and the Netherlands.[89] By the 1990s, virtually all European countries had various drug price control schemes in place.[90] As Arthur Daemmrich wrote for the Harvard Business School, “Whereas [U.S.] safety and efficacy regulation were seen as causes for the industry’s decline in the 1970s, its subsequent turnaround has been attributed largely to price control policies in Europe and their absence in the United States.”[91]

This dramatic shift away from Europe serving as the “world’s medicine cabinet” did not happen principally due to deficient corporate strategy or management. Instead, poor public policy in Europe and superior policy in the United States made the difference.

By imposing such draconian drug price controls, European regulators severely disrupted the economics of innovation in the European life-sciences industry. As EFPIA explained in a 2000 report, “Many European countries have driven prices so low that many new drugs can no longer recoup their development costs before patents expire.”[92] As the report continues, “These policies, most of which seek only short-run gains, seriously disrupt the functioning of the market and sap the industry’s ability to compete in the long-run.”

Some European policymakers were aware that this could harm innovation and attempted to put in place provisions to limit the damage. At least one country, Germany, established its drug price control system in a way that was intended to avoid limiting the development and use of innovative drugs. But in reality, it did not work that way. As a 2006 commentary in Nature Biotechnology noted, “In theory, innovative drugs should be excluded from the mechanism, but in the past, more and more patent-protected drugs were included as they were dubbed ‘pseudo-innovative’ by the system’s oversight bodies.”[93]

As industry analyst Neil Turner wrote in 1999, those policies set “in motion a cycle of under-investment and loss of competitiveness that’s very difficult to break out of.”[94] Turner also observed that, of the new European pharmaceutical products with significant rollouts in 2001, relatively few achieved consistent price premiums across Europe, which disrupted the innovation process because “leading industry contenders need between two and four major new product launches a year to deliver the stock market's historic expectations of 10 percent annual sales growth.”[95] Importantly, Europe’s price controls weren’t applied just to innovative blockbuster drugs but also to follow-on drugs that provided subsequent improvements. As one European firm’s senior pricing and reimbursement executive explained in 2002:

Pharmaceutical innovation is an organic process. Progress doesn't come in big leaps; it comes from incremental improvements. As long as the authorities refuse to accept that an incremental improvement deserves some price advantage, Europe will not be at the forefront of promoting progress in the pharmaceutical business.[96]

European regulators also surreptitiously delayed the introduction of new drugs (as a way to control costs) through protracted price negotiations. One analyst suggested that “now that marketing authorization is largely harmonized across Europe, such negotiations are the new preferred delay tactic of national authorities” and are “a reflection of reimbursement authorities’ growing confidence in using their strong negotiating positions to drive prices down even further.”[97]

It certainly wouldn’t be surprising to see such tactics used in the United States following the introduction of drug price controls as envisioned in the pending Build Back Better legislation.

While Europe’s drug price controls led to lower drug prices and to charges that Europe “free rides” off U.S. biopharmaceutical innovation, one 2004 report noted that “Europe’s free ride is not free,” showing that Europe’s drug price controls imposed considerable “social and economic costs in Europe, in the form of delayed access to drugs, poorer health outcomes, decreased investment in research capabilities, and a drain placed on high-value pharmaceutical jobs.”[98]

Indeed, European drug price controls significantly reduced pharmaceutical companies’ R&D investments and, therefore, innovation. The health economists Joseph Golec and John Vernon found that European drug price controls contributed to EU pharmaceutical firms investing less in R&D.[99] While European-headquartered drug companies out-invested U.S.-headquartered ones by about 24 percent in 1986, by 2004 the U.S. companies were out-investing the European ones by about 15 percent. Overall, Golec and Vernon estimated that EU price controls from 1986 to 2004 shaved about 20 percent off European-headquartered pharmaceutical firms’ R&D levels, resulting in 46 fewer new medicines (and 32,000 fewer R&D jobs) than would otherwise have been the case over that period. They further estimated that, if European drug price-control policies continued beyond 2004, European companies would invent 526 fewer new medicines going forward.[100]

A similar study by Brouwers et al. found that drug price levels within OECD countries would have been 35 to 45 percent higher in the absence of price regulation, and that these higher prices would have triggered additional annual R&D investments of $17 to $22 billion, which in turn would have resulted in 10 to 13 new drug introductions per year.[101]

Moreover, beyond forestalling the development of new drugs, drug price controls also contributed to the delayed or limited introduction of innovative new drugs in European markets, with considerable health consequences. As Darius Lakdawalla et al. have elaborated, “if lower spending leads to less innovation for future Europeans, there may be downstream costs borne by Europeans themselves.”[102] Research his team conducted in 2008 found that:

European policies that impose further price-tightening, by lowering manufacturer prices by 20%, would cost about $30,000 in per capita value to American near-elderly cohorts alive in 2060, and $25,000 to similarly aged Europeans in that year.[103]

Their research found that “reductions in EU prices would lower life expectancy in the 55-59 year-old EU and U.S. cohort by about one-tenth of a year” and that, because revenue reductions have cumulative effects on forgone innovations, “the effects on longevity accumulate in a similar fashion,” such that “for the 2050 and 2060 cohorts, the reduction in longevity more than triples from the original effect, to range between 0.3 and 0.4 years of life.”[104] (See Figure 11.)

Figure 11: Effect of EU price regulation on longevity among 55- to 59-year-olds in the United States and Europe, by years of life[105]


Moreover, the loss of industry was real. As Golec and Vernon noted, “The growing gap between EU and U.S. pharmaceutical R&D, and the movement of R&D facilities to the U.S. by EU firms, should be a signal to EU policymakers that low pharmaceutical prices through regulation has costs.”[106] Indeed, Europe’s excessive price controls contributed to some European firms, such as Novartis, moving their entire R&D headquarters to the United States. Elsewhere, after Germany introduced new drug price controls in 2003, Merck cancelled plans to open a research center in Munich, while Pfizer moved much of its European research base to the United Kingdom. As Bain notes, this process compounds: once R&D starts to leave a region, the entire ecosystem departs, including “R&D suppliers and the equipment and technology suppliers that provide pharmaceutical companies with basic chemistry, diagnostic equipment, and tools.”[107] Explaining industry’s move out of Germany, Nikolaus Schweickart, CEO of the specialty chemical and pharmaceutical firm Altana, said: “Our system, which considers the pharmaceuticals industry and its innovations solely as a cost factor and not as a use factor … is the basic problem.”[108] As the United States has found with the semiconductor industry (and many other manufacturing industries), once the industrial commons supporting an industry leaves a nation’s shores, it’s very difficult to reconstitute it.

However, drug price controls weren’t the only factor contributing to Europe’s loss of biopharmaceutical leadership. Another was European restrictions on direct-to-consumer advertising (which is permitted in the United States), which hampered European pharmaceutical firms’ efforts to demonstrate the cost-effectiveness of their medicines to patients and regulators alike.[109]

The United States’ “innovation principle”-focused approach to regulation, compared with Europe’s “precautionary principle” approach, also played a role. As Turner argued, in the United States in the 1990s “the industry benefitted from a climate in which government and industry were pulling in the same direction,” with “unnecessary regulation being kept out of new drug discovery programs and, where it existed, legislation being designed to facilitate—rather than impede—technological progress.”[110]

In contrast, in Europe, “pharmaceutical companies are directly affected by the constraints that EU biotechnology legislation has imposed on the already highly regulated industry.”[111]

Restrictive merger policies in Europe also played a role, deterring needed industry consolidation; this especially mattered as the costs of developing new drugs continued to increase, making innovation more difficult for mid-sized firms that lacked scale. That’s ironic because, as ITIF’s Aurelien Portuese has noted, “the first modern pharmaceutical companies were European because they reached a sufficient size,” a lesson that European regulators appeared to forget.[112] For instance, even the 2006 merger of Schering and Bayer “was greeted with skepticism” by regulators, though analysts noted that merging the two mid-size German pharmaceutical firms would create only the world’s 12th-largest drug company.[113]

Finally, Daemmrich argued that “how countries resolve tensions between protecting patients and empowering consumers impacts the international competitive standing of their domestic pharmaceutical industries.”[114] In other words, differences in regulatory cultures—notably, responses to new diseases, boundaries on compassionate use, and attention to biomarkers and other aspects of consumer-oriented drug development—provide an important explanatory dimension to nations’ relative levels of life-sciences competitiveness. He suggested that U.S. regulatory and clinical-trial approaches, especially the establishment of strict boundaries between testing and marketing, “allowed for greater access to new medicines” than in European countries such as Germany, where “the medical profession exercised a near-monopoly over constructions of ‘the patient’ and drug laws codified existing power-sharing arrangements.” In essence, he contended that “the predictability of centralized regulation based on a tight regime of quantified clinical trials in the United States coupled to the emergence of a focus on consumers and their access to drugs ultimately benefited firms operating in that country over their German counterparts.”[115]

A forthcoming ITIF report will delve more comprehensively into the inferior policy choices Europe and Japan made from the 1980s to the 2000s that undermined the competitiveness of their pharmaceutical industries and set the table for U.S. biopharmaceutical leadership, which the United States secured through a much more effective suite of supportive policies. But even this brief overview of Europe’s loss of pharmaceutical leadership should serve as a cautionary tale for U.S. policymakers who are running headlong to adopt the very same policies that felled the European industry.

The Competitive State of the U.S. Biopharmaceutical Industry

One reason we know U.S. leadership in biopharmaceutical innovation is no sure thing is that, as the previous section explained, at least until the late 1980s the United States was at best a global “also-ran” in biopharmaceutical innovation behind Europe. But as Europe introduced a variety of policies that hamstrung its industry, it set the table for the United States to wrest leadership. The United States would do so with robust and complementary public and private investment in biomedical R&D; supportive incentives, including tax policies, to encourage biomedical investment; robust IP rights and effective policies to support biomedical technology transfer, development, and commercialization; an effective regulatory and drug-approval system that was also responsive to patients’ rights groups and focused more on patients than doctors; and, finally, a drug-pricing system that allows innovators to earn sufficient revenues for continued investment in future generations of biomedical innovation.

Those policies, and the demise of the European sector, set the stage for the United States to become the global leader on several measures of biopharmaceutical innovation, particularly biopharmaceutical R&D funding and performance and the introduction of new-to-the-world medicines. But as the following sections will show, the now-deteriorating U.S. policy environment for biopharmaceutical innovation is generating increasing signals of concern in the sector.

Biopharmaceutical R&D Funding and Performance

The United States has become the world’s largest funder of biomedical R&D, with its share of global investment running as high as 70 to 80 percent over the past two decades.[116] In 2019, the U.S. pharmaceutical industry invested $83 billion in R&D; adjusted for inflation, that amount is about 10 times what the industry invested per year in the 1980s. As the Congressional Budget Office (CBO) writes, “[U.S.] pharmaceutical companies have devoted a growing share of their net revenues to R&D activities, averaging about 19 percent over the past two decades,” with the industry’s R&D intensity exceeding 25 percent in 2018 and 2019.[117] Not only does the U.S. biopharmaceutical industry invest at more than double the R&D intensity of the average OECD nation’s biopharmaceutical industry (12 percent), it invests at about eight to ten times the level of the average U.S. industry, “with R&D intensity across all [U.S.] industries typically ranging between 2 percent and 3 percent.”[118] The sector accounts for almost 17 percent of U.S. business R&D performance, and nearly one-quarter of the industry’s workforce labors at the R&D bench.[119] Moreover, almost one-third of global biopharmaceutical R&D activity occurs within the United States.

New Drugs

While biopharmaceutical R&D, scientific publications, and patents represent starting points, the acid test of nations’ and enterprises’ investments is whether they translate into new-to-the-world drugs. On this score, the United States excels, and its lead over Europe and Japan is growing. From 2004 to 2018, U.S.-headquartered enterprises produced almost twice as many new chemical or biological entities (NCEs and NBEs) as European ones did, and three to four times as many as Japanese ones. (See Table 1.) However, at least in percentage terms, new drugs from other nations, such as China, have been growing even faster (albeit from a smaller base).

Table 1: Number of new chemical or biological entities[120]