
A New Frontier: Sustaining U.S. High-Performance Computing Leadership in an Exascale Era
September 12, 2022

Continued leadership in high-performance computing (HPC) as it enters the exascale era remains a key pillar of U.S. industrial competitiveness, economic power, and national security readiness. Policymakers need to sustain investments in HPC applications, infrastructure, and skills to keep America at the leading edge.

KEY TAKEAWAYS

HPC represents an essential strategic national capability, and global HPC leadership depends on staying at the cutting edge of both the development of HPC systems and their application and use.
The advent of exascale supercomputing opens doorways for researchers from a variety of fields to explore physical phenomena at a scale and level of resolution, detail, fidelity, and confidence that heretofore was scarcely imaginable.
Competence in HPC is increasingly important to industrial competitiveness, underpinning research and development (R&D) and innovation in a range of sectors from aerospace and biotechnology to consumer packaged goods and clean energy.
Given the critical importance of supercomputing to countries’ economic and national security, many nations and regions are competing fiercely for supercomputing leadership.
In 2015, the United States had nearly twice as many of the world’s top 500 supercomputers as China. But China has flipped the script, now reporting 173 (likely an undercount) versus 128 for the United States.
To keep America at the leading edge, policymakers must leverage HPC-related funding and programs in the CHIPS and Science Act, expand the STEM (science, technology, engineering, and math) pipeline, and democratize access to HPC resources.


Contents

Key Takeaways
Introduction
What Is High-Performance Computing?
Why Does High-Performance Computing Matter?
Unlocking New Pathways to Scientific and Technology Discovery
HPC Represents the “Tip of the Spear” in Advanced Computing
Maximizing the Potential of AI/ML/DL
Economic Impact of Supercomputing
Why Does National Leadership in HPC Matter?
International Supercomputing Leadership
Next-Generation Commercial Applications of HPC
HPC Enabling Aerospace Innovation
HPC Enabling Automotive Innovation and Mobility Solutions
HPC Enabling Consumer Packaged Goods Innovation
HPC Enabling U.S. Life Sciences Innovation
Clean Energy Innovation
Defense and Environment-Oriented Applications of HPC
Policy Recommendations
Conclusion
Endnotes

 


Introduction

High-performance computing (HPC) refers to supercomputers that, through a combination of processing capability and storage capacity, can rapidly solve difficult computational problems across a diverse range of scientific, engineering, and business fields.[1] HPC represents a strategic, game-changing technology with tremendous economic competitiveness, science leadership, and national security implications. Because HPC stands at the forefront of scientific discovery and commercial innovation, it is positioned at the frontier of competition—for nations and their enterprises alike—making U.S. strength in producing and adopting HPC instrumental to its industrial competitiveness and national security capability.

On May 30, 2022, the world entered the exascale computing era with the launch of Frontier, a supercomputer at the Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) capable of executing one quintillion floating point operations per second (FLOPS). The advent of exascale computing will unlock a wealth of heretofore scarcely imaginable research opportunities across a variety of scientific, technical, and engineering fields that scientists are only beginning to explore. However, while exascale computing certainly represents a game changer, the broader proliferation of ever-more-capable supercomputers is helping researchers in fields ranging from aerospace, astronomy, biology, and particle physics to seismology and weather forecasting achieve breakthroughs in the modeling and simulation (M&S) of complex biological, chemical, and physical systems, deepening scientific understanding and unleashing new innovations. For American industry, leadership in HPC application is integral to research and product development, time to market, cost avoidance, and energy efficiency in manufacturing processes, making facility with HPC a key mechanism for achieving comparative advantage and going to market with differentiated and unique value propositions. In short, HPC represents a key capability from both an economic and a national security perspective, and policymakers must continue to sustain investments and build ecosystems to ensure America leads the world in this critical technology.

This report begins by explaining what HPC is and examining why HPC itself and national leadership therein matters. It then assesses the state of global HPC leadership before turning to explore a range of cutting-edge HPC applications across industrial, national security, and mission-oriented domains. It concludes by providing policy recommendations to ensure America remains the world’s leader in developing HPC systems and applications.

What Is High-Performance Computing?

HPC refers to the application of supercomputers—the world’s fastest, largest, most-powerful computer systems—alongside sophisticated models and large datasets to study and solve complex scientific, engineering, and technological challenges, especially those requiring the understanding, modeling, and simulation of complex, multivariate physical systems.[2] HPC unites several technologies, including computer architecture, programs and electronics, algorithms, and application software, under a single platform to solve advanced, sometimes heretofore intractable problems quickly and effectively.

The world’s leading supercomputers are measured by the number of FLOPS they can calculate, and their capacity has increased enormously over the past 30 years. In 1993, the world’s fastest computer was capable of executing 124 billion FLOPS, or more than 10^9 FLOPS (i.e., “gigaflops”). The speed of the world’s fastest supercomputer steadily increased in the ensuing decades, crossing the teraflop, or 10^12 FLOPS, threshold in 1997 and the petaflop, or 10^15 FLOPS, barrier in 2008. (See figure 1.) Fourteen years later, with the launch of the Frontier supercomputer at ORNL on May 30, 2022, the United States became the first nation to publicly field an exascale supercomputer (China may have had two supercomputers cross this threshold in 2021, though these weren’t formally submitted to the top 500 list), one capable of executing 10^18, or one quintillion (that is, one million trillion), FLOPS (i.e., an “exaflop”).[3] (See figure 2.)

Figure 1: Speeds of world’s fastest supercomputers, 1993–2022[4]


Frontier, with 8.7 million cores and a rated speed of 1.102 exaflops, surpassed Japan’s then-world-leading supercomputer Fugaku (Japanese for Mount Fuji), which, with 7.6 million cores, was rated at 537 petaflops.[5] (A core generally refers to a single independent processing unit that can fetch and execute its own stream of instructions.) Frontier occupies a space of more than 4,000 square feet and includes 90 miles of cable and 74 cabinets, each weighing 8,000 pounds.[6] The United States plans to bring a second exascale-capable computer, Aurora, online at the Argonne National Laboratory later in 2022, with performance levels that may exceed two exaflops.[7] In 2023, the United States will bring online a third exascale-capable computer, El Capitan, expected to operate at 1.5 exaflops, at the Lawrence Livermore National Laboratory (LLNL), with a foremost mission of helping manage the nation’s nuclear arsenal.[8]

Figure 2: Frontier supercomputer, Oak Ridge National Laboratory[9]


It’s critical to understand that each step change in computer processing speeds—from gigaflops to teraflops to petaflops to exaflops—represents a 1,000-fold increase in peak computing speeds: that is, an increase in three “orders of magnitude” (an “order of magnitude” generally being understood as an increase in something by a factor of 10). Thus, an exascale supercomputer is 1,000 times faster and more powerful than a petascale computer. (See figure 3.) All told, the performance of the world’s fastest supercomputer has increased almost 70,000-fold over the past 20 years.[10]
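
To put these step changes in perspective, the short sketch below restates the arithmetic in Python. It is illustrative only: the 70,000-fold figure is the one cited above, and the implied doubling time is a back-of-the-envelope derivation rather than a figure from this report.

```python
import math

# Each prefix step (giga -> tera -> peta -> exa) is a 1,000-fold increase,
# i.e., three orders of magnitude.
PREFIXES = {"gigaflop": 1e9, "teraflop": 1e12, "petaflop": 1e15, "exaflop": 1e18}
print(PREFIXES["exaflop"] / PREFIXES["petaflop"])   # 1000.0

# The report cites a roughly 70,000-fold increase in the fastest system's
# performance over the past 20 years; that implies a doubling time of
# roughly 15 months.
doublings = math.log2(70_000)             # about 16.1 doublings
months_per_doubling = (20 * 12) / doublings
print(round(months_per_doubling, 1))      # about 14.9 months
```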

Figure 3: Conceptualizing the growth in supercomputer processing speeds[11]


Why Does High-Performance Computing Matter?

Conceptually, HPC matters greatly to countries’ economic and national security for several key reasons, including that it 1) unlocks new avenues and pathways toward scientific and technological discovery; 2) represents the “tip of the spear” for more-capable computing architectures and technologies that ultimately make one’s laptop, tablet, or smartphone more capable; 3) will be a vital enabler of unlocking the enormous promise of big data and artificial intelligence/machine learning/deep learning (AI/ML/DL) to facilitate discovery and innovation; and 4) produces significant economic benefits, both directly and indirectly. And while some of the following section is framed with regard to potential new opportunities realizable now that exascale-era computing has been achieved, it’s imperative to note that there remains, as yet, only one publicly fielded exascale supercomputer, so all the following discussed benefits apply broadly to HPC, whether supercomputers operate in the petaflop or exaflop range.

The performance of the world’s fastest supercomputer has increased almost 70,000-fold over the past 20 years.

Unlocking New Pathways to Scientific and Technology Discovery

As noted, an exascale-capable computer will be 1,000 times faster and more powerful than a petascale-capable computer. That matters, for as the Dutch computer scientist Edsger Dijkstra explained, “A quantitative difference is also a qualitative difference, if the quantitative difference is greater than an order of magnitude.”[12] Thus, one key reason why the achievement of exascale-capable supercomputers matters is that for every order of magnitude increase in computing capability, one enjoys a qualitative increase in what one can achieve with that computing power. In other words, the types of applications one can run on exascale platforms are fundamentally different from the types of applications one can run on petascale platforms.[13] At one quintillion operations per second, exascale computers will be able to “more realistically simulate the processes involved in scientific discovery and national security such as precision medicine, microclimates, additive manufacturing, the crystalline structure of atoms, the functions of human cells and organs, and even the fundamental forces of the universe.”[14]

As Rick Arthur, senior director for advanced computational methods research at GE Research, framed it:

A researcher can only model a universe that will fit within the size of the largest computer they can access. Therefore, top “leadership class” computers set the threshold for what phenomena can and cannot be perceived, studied, and understood. That is, if the data or model of the subject of interest surpass what can [be] stored or feasibly processed on your computer, insight is beyond your reach. Exascale computer systems greatly expand the universe of what can be modeled by scientists and engineers, so they can achieve never-before realism in physics-based and data-derived models; with much greater completeness, accuracy, and fidelity in scale and scope, and the ability to more confidently assess sensitivities from the inputs and confidence boundaries on the outputs.[15]

In other words, exascale-era computing will open new doorways to solving complex, multivariate scientific, technological, and engineering challenges, especially those requiring data-intensive, M&S-driven solutions, allowing such phenomena to be understood at levels of resolution, granularity, and detail never before possible. Supercomputers (in general, and exascale in particular) will enable the construction of higher-fidelity multiphysics models that can mathematically describe the real-time interplay of diverse physical phenomena and variables within a system, facilitating the simulation of complex interactions and the potential to isolate the relative effects of each variable affecting the system.[16] While it’s not an exact analogy, computing at 10^18 FLOPS will enable scientists to investigate complex physical systems at 10 times greater resolution, dimension, or time period (in each case, scaled either up or down), and at much faster speeds, whether it comes to modeling the behavior of nine billion individual atoms at the instant of an atomic explosion, the action of each cell in a beating human heart, the movement of electricity through a smart grid, or the crystalline structure of atoms or molecules.

As Arthur explained it, “Computational tools like HPC fundamentally represent a scientific instrument,” just like a microscope (which allows one to interrogate systems in extreme detail) or a macroscope (like a telescope, which allows researchers to perceive system-wide interactions and explore a vast dimensionality).[17] Scientists investigate physical phenomena and systems across a wide range of physical scales (i.e., different sizes or lengths), temporal scales, and spatial scales (e.g., the extent of an era or region over which a phenomenon occurs). This matters tremendously because physical phenomena can exhibit very different dynamics at small scales (e.g., the nanoscale) as compared with large scales (e.g., the macroscale). Supercomputers can model behavior in a system at tens of thousands to hundreds of thousands of times greater resolution (i.e., level of detail) than other computers. For instance, understanding nanoscale physics—such as the behavior of fluids moving across membrane pores—involves modeling interactions that occur at 100 nanometers (nm) or less in length. Conversely, to study the impacts of such interactions at the macroscale (visible to the naked eye) requires scaling the interactions by four orders of magnitude, or 10,000 times, to 1,000,000 nm, or about the size of a grain of sand.[18] (See figure 4.)

Figure 4: Conceptualizing scale lengths by orders of magnitude[19]


A particular reason why this matters for U.S. industrial competitiveness, as a report from DOE’s Office of Energy Efficiency and Renewable Energy (EERE) explains, is that, in manufacturing, “The more easily achievable progress has been made; the so-called ‘low hanging fruit’ have been picked.”[20] In other words, opportunities for game-changing or competitive-advantage-creating industrial innovation will increasingly require companies to reach the “high-hanging fruit,” which HPC is uniquely positioned to facilitate. As the report elaborates:

Problems and opportunities for improved productivity and performance remain, yet next-generation innovations tend to be more complex and carry higher risk. More sophisticated research approaches are needed to identify opportunities. Detailed analysis with HPC can discern cost-effective ways to improve productivity, increase sustainability, and save energy.[21]

Another way supercomputers can unlock innovation is by providing a new dimension to the scientific method. Heretofore, the fundamental steps in the scientific method were 1) research, 2) form a hypothesis, 3) conduct an experiment, and 4) analyze the data and draw a conclusion. But HPC enables the introduction of an entirely new step through its simulation and prediction capabilities. That is, the model of “theory/experiment/analysis” in the sciences or “theory/build a physical prototype/experiment/analyze” in product development is changing to one of “theory/predictive simulation/experiment/analyze.”[22] Thus, HPC-enabled computer simulation becomes a “third pillar” of scientific discovery, complementing traditional theory and experimentation.[23]

Supercomputers open new doorways to solving complex, multivariate scientific, technological, and engineering challenges, especially those requiring data-intensive, M&S-driven solutions, allowing such phenomena to be understood at levels of resolution, granularity, and detail not before possible.

As DOE EERE explained, “Historically, advancements in manufacturing have relied on repetitive-trial-and-error development and experimentation.”[24] But these research methods have inherent limitations, including that they’re too often expensive, slow, risky, infeasible, or incapable of revealing what’s occurring within complex systems. In contrast, “Computational experiments on a supercomputer can explore complex systems that are difficult to simulate using physical experiments and typical computers.”[25] While the DOE quote above refers to HPC’s use in the manufacturing context, it’s important to note that the principle applies to any domain of scientific inquiry—whether developing the optimal design of a nuclear reactor or a chemical catalyst, or modeling the movement of weather systems or galaxies. Put differently, HPC allows designers to develop designs for products from airplanes to wind turbines to nuclear reactors on computers and to fabricate initial prototypes that can be fielded with a more “confirmatory” than “exploratory” expectation regarding their features and performance attributes. This helps accelerate speed to market for a wide range of products designed with the benefit of supercomputers. Lastly, it should be noted that the faster speeds of exascale-era computers—that is, the capability to perform in hours computations that previously took days, or in days calculations that used to take weeks—will not only save time for a given application but also free up HPC resources for a wide variety of additional uses. (Incidentally, information scientists refer to the ability to compute at faster speeds as “strong scaling” and the ability to calculate at finer scales or resolutions as “weak scaling”; HPC enables both.)
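
For readers unfamiliar with the strong/weak scaling terminology above, the toy sketch below illustrates the distinction under an idealized assumption of perfect parallel efficiency (real HPC workloads lose efficiency to communication, load imbalance, and serial sections); the 48-hour baseline is a hypothetical figure chosen purely for illustration.

```python
# Toy illustration of strong vs. weak scaling (idealized; not a benchmark).

def strong_scaling_time(base_time_hours: float, processors: int) -> float:
    """Fixed problem size: ideal runtime shrinks in proportion to processor count."""
    return base_time_hours / processors

def weak_scaling_time(base_time_hours: float, processors: int) -> float:
    """Problem size grows with processor count: ideal runtime stays roughly constant,
    but a proportionally larger or finer-resolution problem gets solved."""
    return base_time_hours

base = 48.0  # hours for a reference simulation on one node (hypothetical)
for nodes in (1, 8, 64, 512):
    print(f"{nodes:>4} nodes | strong: {strong_scaling_time(base, nodes):8.3f} h | "
          f"weak: {weak_scaling_time(base, nodes):5.1f} h (problem {nodes}x larger)")
```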

HPC Represents the “Tip of the Spear” in Advanced Computing

The demand for ever-faster application-specific logic and memory semiconductors, including graphics processing units (GPUs), accelerators, interconnects, and sophisticated software code, positions HPC at the frontier of advanced computing. And just as supercomputers have moved from giga- to tera- to peta- to exascale capabilities, so too have downstream information technology (IT) devices—from servers to personal computers to smartphones—all of which have become ever faster and more capable. (See figure 5.) This is of course a manifestation of Moore’s Law—the observation that the number of transistors on a microchip doubles about every two years—effectively meaning a semiconductor’s capability in terms of speed and processing doubles even as its cost is halved (also called “process-node scaling”). But the point is that the pursuit of exascale computing has driven innovations in semiconductor and computer architecture design that ultimately propagate across downstream IT platforms all the way to the individual consumer.

Figure 5: The pursuit of exascale helps drive IT innovation[26]


Maximizing the Potential of AI/ML/DL

McKinsey analysts estimated that AI—a field of computer science devoted to creating computing systems that perform operations analogous to human learning and decision-making—may deliver additional global economic output reaching $13 trillion by 2030, increasing global gross domestic product (GDP) by about 1.2 percent annually.[27] AI is positioned to deliver such tremendous impact in part because it will help enterprises extract and apply actionable insight and intelligence from data in real time. Two subfields of AI will be particularly important in this regard: ML, a branch of AI focusing on designing algorithms that can automatically and iteratively build analytical models from new data without explicitly programming a solution, and DL, a subfield of ML that structures algorithms in layers to create “neural networks” that can learn.[28]

But if the spreadsheet was the so-called “killer app” for the personal computer, so too may be the marriage of big datasets and AI/ML/DL with supercomputers, with the latter substantially enabling the former. As John Sarrao, deputy director for science, technology, and engineering at the Los Alamos National Laboratory (LANL), explained, “The next generation of exascale computing will enable AI solutions we can barely imagine today.”[29] Indeed, the advent of AI/ML/DL has “created new demands for HPC with its own application-specific technical requirements that include mixed precision and integer math,” and an increasing share of HPC “loads” (i.e., usage) is for AI/ML/DL applications.[30] In particular, supercomputers are proving transformative in rapidly training algorithms on large datasets—the essence of ML—thus enabling the development of AI tools that can be served across a variety of platforms, from mobile phones to fitness monitors. Siri voice recognition software provides a good example: an end-user AI-based service now delivered to one’s smartphone via the cloud but originally developed with the help of supercomputers.

Moreover, not only can HPC systems handle large datasets, but they can do so for a wide variety of structured data (e.g., data in a spreadsheet) and unstructured data, such as images, video, audio, text, telemetry, temperature, and air pressure, much of it arriving from a variety of sensors, machines, and satellites. As one report explains, “High performance data analytics requires the special storage and interconnect capabilities of HPCs to effectively process large and diverse data sets that may include voice, text, image, and instrumentation outputs to generate new insight appropriate to a wide range of sectors including medicine, finance, transportation, and manufacturing.”[31] Going forward, supercomputers will be well positioned to consume a diverse variety of data from a wide range of sources and synthesize the information in real time to generate actionable intelligence and insights delivered to users in the field or “on the edge”; in other words, supercomputers will actually be complementary to unlocking the potential of edge computing.[32] Lastly, it should be noted that the AI-HPC relationship also runs in the other direction. That is, complex models and simulations running on HPC machines generate enormous amounts of data that are often difficult to sort through to find the meaningful “needle in the haystack”; researchers often run smart AI/ML algorithms against those massive new datasets generated by HPC to help unearth novel insights.

If the spreadsheet was the so-called “killer app” for the personal computer, so too may be the marriage of big datasets and AI/ML/DL with supercomputers.

Economic Impact of Supercomputing

Lastly, HPC matters because it produces manifold economic impacts, ranging from the economic value supercomputing creates for the users of HPC in the products they develop to the economic impact generated by the HPC industry’s sales of its products.

Economic Impact Generated Through the Use of Supercomputers

As a report by Earl Joseph et al. at Hyperion Research explains, “While it is difficult to fully measure the value that supercomputers have generated, even looking at just automotives, aircraft, and pharmaceuticals, supercomputers have contributed to products valued at more than $100 trillion over the last 25 years.”[33] Hyperion has estimated that the economic value created by the application of Linux system-based supercomputers (which account for virtually all of the world’s top 500 supercomputers) has exceeded $3 trillion over the past 25 years.[34]

In a study of 175 industrial firms, Hyperion found that, on average, the companies realized $452 for every $1 they invested in HPC. (See table 1.) Narrowing the study to enterprises in the finance, life-sciences, manufacturing, and transportation industries, firms realized $504 in “sales revenue” and $38 in “profits or cost savings” for every $1 invested in HPC.[35] Hyperion estimated that those 175 HPC-supported projects created 2,335 new jobs in the companies studied. Such significant impacts from HPC shouldn’t be surprising, as “some large industrial firms have cited savings of $50 billion or more from HPC usage.”[36]

Table 1: Financial return on investment from HPC, by select industry[37]

Industry | Average Revenue per HPC Dollar Invested | Average Profit or Cost Savings per HPC Dollar Invested
Defense | $75.00 | $18.80
Financial | $641.70 | $47.40
Insurance | $175.70 | $280.00
Life Sciences | $205.60 | $40.90
Manufacturing | $216.50 | $28.40
Oil and Gas | $416.00 | $53.70
Telecommunications | $210.70 | $30.40
Transportation | $1,804.30 | $15.60
TOTAL | $452.10 | $37.60


A 2018 report examined the return on investment (ROI) of three research cyberinfrastructure assets: Indiana University's Big Red II supercomputer, the National Science Foundation (NSF)-funded Jetstream cloud system, and the federally funded eXtreme Science and Engineering Discovery Environment (XSEDE).[38] Based on the cost of the cyberinfrastructure, the computing core hours each system provided, and the comparable price of those core hours using Amazon Web Services instead, the authors estimated the ROI of Big Red II from 2013 to 2017 to be 2.5–3.7; the one-year ROI of Jetstream to be 1.9–3.1; and the ROI of XSEDE in its most recent project year to be 1.34 (up from 1.17 in the prior project year). In other words, the supercomputer generated the most significant economic returns of the assets evaluated in the study.
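
The replacement-cost methodology that study describes boils down to a simple ratio: the commercial-cloud price of the core hours delivered, divided by the cost of building and operating the system. The sketch below illustrates that arithmetic with made-up numbers; none of the hour counts or prices are taken from the study.

```python
# Hypothetical replacement-cost ROI, in the spirit of the 2018 study's method:
# value = what the delivered core hours would cost on a commercial cloud,
# ROI   = value / cost of building and operating the system.
# All numbers below are illustrative placeholders, not figures from the study.

def replacement_cost_roi(core_hours_delivered: float,
                         cloud_price_per_core_hour: float,
                         system_cost: float) -> float:
    cloud_equivalent_value = core_hours_delivered * cloud_price_per_core_hour
    return cloud_equivalent_value / system_cost

roi = replacement_cost_roi(
    core_hours_delivered=500_000_000,   # hypothetical core hours over system life
    cloud_price_per_core_hour=0.05,     # hypothetical cloud price per core hour
    system_cost=10_000_000,             # hypothetical acquisition + operations cost
)
print(f"Estimated ROI: {roi:.2f}")       # 2.50, i.e., $2.50 of value per $1 spent
```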

A 2017 report estimated the regional economic impacts of the National Center for Supercomputing Applications’ (NCSA’s) Blue Waters supercomputer at the University of Illinois Urbana-Champaign, finding that Blue Waters-related projects added $1.08 billion to the state's economy.[39] The report also estimated that over the project's life (October 2007 to June 2019), it will have created 5,772 full-time-equivalent jobs, and that it supported 1,892 direct and indirect jobs between April 2013 and June 2016. The estimated output multiplier from project expenditures was 1.86, and the estimated employment multiplier was 2.04.[40]

Economic Impact Generated By the HPC Industry

In terms of the industry itself, Hyperion Research has estimated that over $300 billion has been generated from the sales of Linux-based supercomputers. Going forward, Hyperion estimates global sales of Linux-based supercomputers from 2022 to 2026 will generate $90 billion in machine sales and $90 billion in supporting infrastructure.[41] Hyperion estimated the global HPC market at $34.8 billion in 2021 and projects that sales of on-premises HPC servers will increase by 7.9 percent over the next five years, while cloud-based HPC usage will grow by 17.6 percent over that timeframe.[42]

Why Does National Leadership in HPC Matter?

Broadly, the United States remains the leader in both developing HPC systems and deploying them, although that lead has shrunk. Some might ask why it matters that the United States should lead in HPC. Likewise, others might argue that so long as HPC users in the United States—whether enterprises, academic researchers, or government agencies—can get access to the HPC systems they need, it does not matter which enterprises in the world manufacture those machines, so policymakers should be agnostic on the issue. However, such contentions are misguided for a number of reasons.

First, supercomputers represent a vital enabler of U.S. defense capabilities, and especially its nuclear defense posture (as a subsequent section of this report elaborates). In fact, one could substitute nuclear weapons themselves for high-performance computers and ask whether it would be troubling if the United States depended on China or the European Union for its nuclear weapons systems. And if the United States’ relying on other nations to supply its nuclear arsenal sounds like an untenable proposition, then so is the notion of it relying on other nations for the most-sophisticated HPC systems. From a national security perspective, the United States needs assurance of access to the best high-performance computers in the world simply because it gives U.S. defense planners a competitive edge and allows the U.S. defense industrial system to design leading-edge weapons systems and national defense applications faster than anyone else.

Second, the notion that U.S. enterprises would certainly enjoy ready access to the most sophisticated HPC systems for commercial purposes should they be predominantly produced by foreign vendors constitutes an uncertain assumption. If Chinese vendors, for example, dominated globally in the production of next-generation HPC systems, it’s conceivable that the Chinese government could exert pressure on its enterprises to supply those systems first to their own country’s aerospace, automotive, or life-sciences enterprises and industries in order to assist them in gaining competitive advantage in global markets. The notion that U.S. enterprises can rely risk-free on access to the world’s leading HPC systems if they are no longer being developed in the United States amounts to a tenuous expectation that could place broad swaths of downstream HPC-consuming industries in the United States at risk. America’s dependence on Chinese suppliers for personal protective equipment in the opening stages of the COVID-19 pandemic provides a salient warning of the risks of depending on foreign (especially Chinese) suppliers in exigent situations.

Third, and perhaps the most compelling reason why U.S. leadership in HPC matters, is that HPC systems are not developed in a vacuum: HPC vendors don’t go off into a room and draw up designs and prototypes for new HPC systems by themselves, hoping someone will purchase them later. Rather, HPC vendors often have strong relationships with their customers, who co-design next-generation HPC systems in partnership with them. So-called “lighthouse” (or “lead”) users—which, in fact, are government agencies such as DOE or the Department of Defense as often as they are leading-edge corporate users—define the types of complex problems they want to leverage HPC systems to solve, and then the architecture of the system (e.g., how the cores will be designed to handle the threads calculating the solutions) is co-created. This ecosystem exists between the HPC vendors and some of the more advanced users in both the commercial and government sectors, and this symbiotic relationship pushes the frontier of HPC systems forward. So when a country holds a leadership position in HPC, its vendors can collaborate closely with the end users who buy the machines, creating a supply-and-demand dynamic that yields systems best suited to U.S. domestic competitiveness.

International Supercomputing Leadership

Given the critical importance of supercomputing to countries’ economic and national security, it’s no surprise that many nations and regions are competing fiercely for supercomputing leadership, whether with regard to producing the world’s fastest supercomputers, the most supercomputers, the most aggregate supercomputing capacity, or the most effective ways to leverage this computational power. Indeed, “supercomputers have long been a flash point in international competition.”[43] As one report pithily observes, “To out-compute is to out-compete.”[44] However, while each of those factors matters, so too does the accessibility, usability, and usefulness of those supercomputing assets. To offer a Cold War analogy, the USSR may have usually fielded the fastest fighter jets, but if they were often in the hangar due to design flaws or missing spare parts, they were of limited value. In other words, it’s not always about the shiniest, fastest, or greatest number of objects, but how functional and useful they truly are.

While the performance capabilities of a nation’s supercomputers certainly matters, what matters even more is the ability of researchers to effectively leverage HPC resources to meaningfully solve real-world scientific, technical, and engineering problems.

As of November 2015, the United States fielded 199 of the world’s top 500 supercomputers, compared with China’s 109; however, by June 2022, China had flipped the script, fielding 173 of the 500 fastest supercomputers to the United States’ 128. (See figure 6.) This represented a nearly 59 percent increase in the number of Chinese supercomputers in the top 500, while the U.S. count slipped by just over one-third. (Japan’s and Germany’s numbers of top 500 supercomputers remained roughly steady, with Japan falling from 37 to 33 systems and Germany rising from 31 to 32 over that period, meaning the big shift came in the relative positions of China and the United States.) It should also be noted that China appears to have stopped submitting its newest supercomputers to the top 500 list (with as many as 7 of the world’s 10 fastest supercomputers now actually being Chinese), potentially in part because China fears the United States might impose further export control restrictions on U.S. chips and chip technologies sold to China. In fact, two Chinese supercomputers are reported to have each reached exaflop status in 2021 in terms of both theoretical and realized performance—although these supercomputers haven’t been submitted to the top 500 list.[45]
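
A quick check of the percentage shifts implied by those counts (a minimal sketch, using only the figures in the preceding paragraph):

```python
# Percentage changes implied by the TOP500 counts cited above.
china_2015, china_2022 = 109, 173
us_2015, us_2022 = 199, 128

print(f"China: {(china_2022 - china_2015) / china_2015:+.1%}")  # +58.7%
print(f"U.S.:  {(us_2022 - us_2015) / us_2015:+.1%}")           # -35.7%, just over one-third
```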

Figure 6: Number of supercomputers in the top 500, by select country[46]


But, again, a count of which country has the most supercomputers is somewhat simplistic. When considering aggregate Rmax, the sum of each nation’s measured LINPACK performance across its listed systems (a gauge of cumulative supercomputing power), China stands at about 80 percent of the U.S. level. (See figure 7.) China had surpassed the United States on this measure in 2017, but the United States’ introduction of several new supercomputers in the late 2010s, even before reaching exascale with Frontier, shifted this indicator back in the United States’ favor.

Figure 7: China’s cumulative supercomputing capacity as a share of U.S. total, 2010–2020[47]


Moreover, China still struggles to maximize the potential impact of its supercomputers. While the country has shown it can build massively parallel, fast supercomputers, it lags behind at developing innovative software applications that can leverage these supercomputers to generate new insights and discoveries across a wide range of fields. As HPCWire’s Tiffany Trader put it, “China’s challenge has been a dearth of application software experience.”[48] For example, China’s Tianhe-2 supercomputer “is reportedly difficult to use due to anemic software and high operating costs [including] electricity consumption that runs up to $100,000 per day.”[49] In short, China’s HPC approach thus far appears to have emphasized performance speeds over practical applications, meaning the functionality of its machines lags behind that of machines in Europe and the United States.[50]

In contrast, the United States has concentrated heavily on, and indeed has invested billions in, developing software and applications that can run effectively on the nation’s supercomputing infrastructure, in addition to investing in training researchers who can take advantage of these tools. For instance, the Extreme-scale Scientific Software Stack (E4S) seeks to demystify complicated HPC hardware and software, lowering the barrier to entry for others to use HPC. It represents a community effort to provide open source software packages for developing, deploying, and running scientific applications on HPC platforms, providing from-source builds and containers for a broad collection of HPC software packages.[51] Similarly, the NCSA at the University of Illinois Urbana-Champaign represents a center of advanced cyberinfrastructure and expertise that provides a hub for transdisciplinary research, uniting academic institutions and global companies in search of answers to the world’s most challenging problems.[52] In other words, its focus is on democratizing HPC access for U.S. academic researchers and businesses large and small alike. According to Bob Sorensen, senior vice president of research for Hyperion Research, “Where the United States has clearly excelled [compared with other nations] in HPC is concentrating on access to and the usability of its national HPC infrastructure so that HPC can be effectively deployed to solve academic and industrial research challenges.”[53]

Figure 8: On-premises revenues of HPC server vendors, by country of headquarters, 2021[54]


In terms of which nations’ enterprises lead in selling on-premises HPC server systems, the United States clearly leads. In 2021, U.S.-headquartered enterprises commanded 61.6 percent of the on-premises HPC market, followed by Chinese companies with an 18.3 percent share, and French and Japanese ones with 3.7 and 2.4 percent, respectively.[55] (See figure 8.) In 2021, North America accounted for 42 percent of global HPC server consumption, Europe 28 percent, Asia outside Japan (though largely China) 22 percent, and Japan 6 percent.[56]

Next-Generation Commercial Applications of HPC

Exascale-era HPC is unlocking breakthrough innovation across numerous U.S. industries, from aerospace and automotive to consumer packaged goods (CPG), energy, and life-sciences sectors (among many others). For U.S. industry, HPC applications accelerate R&D activities, make entirely new product designs or structures possible, speed time to market, decrease costs, enhance energy efficiency, and transform go-to-market business models. This section highlights some of the latest applications of HPC driving U.S. industrial competitiveness forward.

HPC Enabling Aerospace Innovation

HPC is dramatically transforming both aircraft and jet engine design and innovation.

Aircraft Design and Manufacturing

HPC has fundamentally transformed how companies such as Airbus and Boeing design and manufacture aircraft, with HPC being applied to determine the aerodynamic performance of entire airplanes, including virtually every surface on an aircraft; the optimum structural design of every aircraft component, from bulkheads to wheels, in order to minimize weight; and, in the military domain, even the radar cross section of stealthy platforms. For Boeing, HPC enables faster solutions to more complex problems, more accurate results with improved performance, enhanced safety and environmental acceptability of products, quicker development timelines and thus swifter time to market, and lower overall development costs.[57]

As Jim Glidewell, a senior HPC analyst at Boeing, explained, Boeing deploys HPC for two principal reasons: achieving cost savings and generating positive ROI.[58] With regard to the first, high-fidelity simulation can allow a significant reduction in the number of wind tunnel tests required in aircraft development, which matters when each such test can cost $10 million or more. For instance, Boeing physically tested 77 prototype wing designs for its 767 aircraft (which was designed in the 1980s), but for its 787 Dreamliner, only 11 wing designs were physically tested in a high-speed environment (a sevenfold reduction), primarily because over 800,000 hours of supercomputer simulation had drastically reduced the need for physical prototyping.[59] For the next generation of commercial airplanes, completing a great deal of simulation work virtually could mean as few as three to four wind tunnel tests will be needed. On the ROI side of the equation, even very small improvements in fuel economy can result in tremendous operational efficiencies. For instance, reducing a plane’s drag by even a single-digit percentage can result in fuel savings of millions of dollars over a plane’s service life, with one airline estimating that reducing an aircraft’s weight by one pound can save 53,000 liters of fuel annually, adding up to tens of thousands of dollars in savings, in addition to delivering significant environmental benefits.[60] HPC is also deployed so that more configuration deviations can be assessed earlier in the design process, improving overall safety.
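
The cost-avoidance arithmetic behind these figures is straightforward; the sketch below restates it, treating the report’s “$10 million or more” per wind tunnel test as a flat lower-bound estimate and using a hypothetical 100-pound weight saving purely for illustration.

```python
# Illustrative cost-avoidance arithmetic using figures cited in this report.
# Treats "$10 million or more" per wind tunnel test as a flat $10M lower bound.

COST_PER_WIND_TUNNEL_TEST = 10_000_000   # dollars (report's lower-bound figure)

tests_767 = 77   # physical wing designs tested for the 767 (designed in the 1980s)
tests_787 = 11   # physical wing designs tested for the 787 Dreamliner

avoided = tests_767 - tests_787
print(f"Wind tunnel campaigns avoided: {avoided}")                          # 66
print(f"Implied cost avoidance: ${avoided * COST_PER_WIND_TUNNEL_TEST:,}")  # $660,000,000

# Fuel-side ROI, per the airline estimate cited above:
liters_per_pound_per_year = 53_000     # fuel saved annually per pound of weight removed
hypothetical_weight_savings_lb = 100   # hypothetical design improvement, for illustration
print(f"Annual fuel saved: {hypothetical_weight_savings_lb * liters_per_pound_per_year:,} liters")
```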

Perhaps the most intricate component of an aircraft is its wing—which is fundamentally what gives an aircraft lift and thus enables flight—and supercomputing has exerted a tremendous impact on wing design. Consider the Boeing 787 Dreamliner, the first large commercial transport aircraft with a fully composite wing—that is, one built from carbon fiber as opposed to aluminum. Joris Poort (then a Boeing engineer, now CEO of enterprise software firm Rescale) has explained that with aluminum, “there is a simple choice for the thickness of a panel.… with carbon fibre every single layer will have over 50 different angles layered on top of each other, which will be cooked in an oven or autoclave.… the number of variables [we had to consider] went from thousands to tens of millions.”[61] To design the lightest, lift-maximizing wing possible for the 787 (with the appropriate structural requirements), Boeing evaluated over 50 million variables using supercomputers. As Poort noted, HPC helped Boeing save “over 115 pounds on the wing design [on the Dreamliner, which was worth] about $180m [at the time].”[62]

Mastering the intricacies of computational fluid dynamics (CFD), a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows, represents one of the most significant challenges to realizing the next generation of aircraft design. As Joerg Gablonsky, a Boeing technical fellow and chair of the HPC Enterprise Council, explained, “Exascale technology is critical for Boeing to design our current and future products; and the next generation of HPC will expand the areas of the flight envelope we can effectively and accurately simulate through CFD analysis.”[63]

HPC will enable more accurate representation of aircraft performance across the entire flight envelope, improve and speed up product design, and accelerate time to market while reducing costs.

To be sure, CFD has long been integral to aircraft design, impacting wing, wing tip, and vertical tail design; fuselage and cabin design; and engine inlet and exhaust system design, among other aircraft attributes. But as a NASA report, “CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences,” explains, “In spite of considerable successes, reliable use of CFD has remained confined to a small but important region of the operating design space due to the inability of current methods to reliably predict turbulent-separated flows.”[64] What this essentially means is that CFD has been calibrated only in relatively small regions of a commercial aircraft’s operating envelope where the external air flow is well modeled by current methods (e.g., in cruise conditions with stable flight at altitude), and there is opportunity to expand CFD to the edges of the flight envelope, such as in flaps-down situations where there exists unsteady, turbulent air flow. As the NASA report explains, one grand challenge is therefore leveraging HPC for CFD “to simulate the flow about a complete aircraft geometry at the critical corners of the flight envelope including low-speed approach and takeoff conditions, transonic buffet, and possibly undergoing dynamic maneuvers, where aerodynamic performance is highly dependent on the prediction of turbulent flow phenomena.”[65] Further, going forward, as exascale-era HPC-powered M&S becomes more capable, CFD analysis will expand to additional applications including reducing noise (both in the cabin and externally from the aircraft), control failure analysis, and designing wing and edge controls.[66] Moreover, as Gablonsky noted, CFD simulations that once took a month can now be performed in days or even hours on exascale-capable machines. In summary, HPC helps enable a more accurate representation of aircraft performance across the entire flight envelope, improves and speeds up product design, and accelerates time to market while reducing costs.

One other notable area in which supercomputing is impacting aircraft design is noise abatement. Aircraft and engine noise propagates from a small space to miles in all directions, at varying frequencies, making proper modeling of the acoustical properties of sound waves a daunting challenge. Heretofore, governments required laboratory or flight tests to validate the acceptability of airplane configurations to satisfy community noise requirements. But exascale-powered M&S holds the promise that this can be accomplished by computer.[67] However, as Gablonsky explained:

To simulate noise requires much larger meshes (meshing refers to defining continuous geometric shapes [e.g., 3D models] using simplified 1D, 2D, and 3D shapes) than we currently use, and which must be run in a time accurate manner. Right now it takes weeks to get a few seconds of time data for simplified geometries, making it impractical for design. Exascale technologies will enable us to do these types of simulations efficiently, and in timeframes where we can incorporate design decisions earlier into the aircraft design cycle.[68]

HPC is also playing a key role in military aircraft design. In September 2020, the United States Air Force (USAF) introduced an “e”-series aircraft designation, with USAF secretary Barbara Barrett explaining that USAF created the nomenclature to “inspire companies to embrace the possibilities presented by digital engineering.” USAF also noted, “An eSeries digital acquisition programme will be a fully-connected, end-to-end virtual environment that will produce an almost perfect replica of what the physical weapon system will be.”[69] The first aircraft to receive the designation was the Boeing eT-7A Red Hawk, the service’s next-generation jet trainer, which was designed completely virtually using model-based engineering and 3D design tools. In other words, it’s the first aircraft to be fully designed, modeled, and tested using (super)computers.[70] USAF noted that digital engineering allowed Boeing to “seamlessly transform its schematic into a metal aircraft” with few time-consuming design errors, that the aircraft required 80 percent fewer assembly hours than a conventional development approach, and that the jet “moved from computer screen to first flight in just 36 months.”[71]

Lastly, the episode highlights one other dynamic about why HPC matters immensely to U.S. industrial competitiveness. In procurement, aircraft (and engine) manufacturers, in both the civilian and military domains, often guarantee a set of performance attributes for their customers four or five years before the first plane (or engine) ever even flies. HPC-powered computational tools are indispensable to ensuring accurate modeling with a high degree of confidence that the end product will reliably meet the customers’ requirements, at a profitable price point for the manufacturer. In other words, the effective application of HPC is integral to the competitiveness of America’s aerospace industry.

Jet Engine Design and Manufacturing

HPC is also playing an integral role in facilitating the next generation of jet engine design. CFM, a joint venture between General Electric (GE) and Safran S.A., is currently working to develop a next-generation, open-fan engine design via the Revolutionary Innovation for Sustainable Engines (RISE) jet engine program. (See figure 9.) RISE has a goal of achieving a 20 percent reduction in fuel burn along with a corresponding reduction in carbon emissions (with near elimination possible as hydrogen becomes a feasible fuel source), further building upon the 15 percent increase in fuel efficiency CFM achieved with its current-generation LEAP (Leading Edge Aviation Propulsion) engine.[72]

Figure 9: Evolution of GE/CFM jet engines[73]


Like Boeing, GE needs to master CFD, so it has partnered with national labs such as LLNL and ORNL to develop high-resolution turbulence models to help design RISE with improved aerodynamic performance, durability, and fuel efficiency. The goal is to accurately simulate air flows and their turbulence under realistic operating conditions for the engine. As Arthur explained:

When we started simulations years ago, we could only simulate airflows between one pair of blades of a turbine at a time, which was of limited use because many crucial machine dynamics encompass more than that small area. As we have scaled up over time, first to multiple blades, then to multiple rows of blades, and then to the multiple stages of the engine, to the full annulus [i.e., the entire circumference], we have surpassed the needed thresholds to gain insight into these dynamics and perhaps ultimately one day will be able to perform simulations of the entire engine.[74]

While GE possesses in-house HPC resources, conducting end-to-end, full-annulus M&S of new engines will require massive computational power, in part because of the variety of flight conditions that must be accounted for—different atmospheric, altitude, and weather conditions; different flight and lift conditions (e.g., takeoff vs. cruise vs. landing), etc.—so GE has partnered with ORNL in the past to use its 200-petaflop Summit supercomputer, and going forward will run M&S on Frontier as well. Thus, HPC is instrumental in informing engine design, helping designers navigate the vast variety of design options and operating conditions and giving them the greatest chance of arriving at an optimal engine before one actually gets fabricated. Arthur noted, “One can’t test a jet engine [let alone a jet aircraft or wind turbine] in a wind tunnel, since those facilities are smaller than the actual product. So companies build a scale-miniaturized version of the product and a ‘rig test’ is performed that provides initial performance data, although the product is not tested at scale until the flight test.”[75] And just as in the aircraft example, jet engine makers are selling new engines based on modeled performance specifications before the first production unit is ever manufactured. Here, Arthur has observed what a difference maker exascale can be: “To run the takeoff test simulation at the rig-test scale, that simulation would be about 70,000 node hours on Summit … that same simulation at product scale would be over 6 million node hours on Summit. So we love Frontier,” because running at exascale on Frontier versus 200 petaflops on Summit will allow GE to run such simulations roughly five times faster, and potentially more.[76] As GE pushes toward the next generation of hydrogen-powered, open-fan jet engines as envisioned in RISE, HPC will play an indispensable role.
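
To make Arthur’s node-hour comparison concrete, the sketch below restates the figures cited above; the derived ratios are back-of-the-envelope illustrations, not GE or ORNL benchmarks.

```python
# Back-of-the-envelope restatement of the node-hour figures cited above.
rig_scale_node_hours = 70_000          # takeoff simulation at rig-test scale on Summit
product_scale_node_hours = 6_000_000   # same simulation at full product scale on Summit

print(f"Product scale vs. rig scale: ~{product_scale_node_hours / rig_scale_node_hours:.0f}x "
      "more node hours")                                        # roughly 86x

# The report cites a roughly fivefold speedup moving such runs from Summit to Frontier.
frontier_speedup = 5
print(f"Rough product-scale cost on Frontier: ~{product_scale_node_hours / frontier_speedup:,.0f} "
      "node hours")                                             # about 1,200,000
```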

As of November 2015, the United States fielded 199 of the world’s top 500 supercomputers compared with China’s 109; however, by June 2022, China had flipped the script, fielding 173 of the 500 fastest supercomputers to the United States’ 128.

And indeed, more fuel-efficient (and cleaner-burning) engines are a differentiator in the marketplace. As Hyperion Research’s Earl Joseph, Steve Conway, and Bob Sorensen have explained, each year, about $200 billion worth of fuel is consumed globally in GE’s gas turbine products, including aircraft engines and land-based gas turbines used for the production of electricity.[77] Every 1 percent reduction in fuel consumption therefore saves the users of these products $2 billion combined per year, and “any company that can achieve even 1% improvement in efficiency can potentially cause market disruption, as the resultant efficiencies would overtime [sic] add up to enormous cost savings to customers, thus providing market advantage.”[78]

Elsewhere, LIFT—one of America’s 16 Manufacturing USA Institutes, focused on developing and deploying advanced lightweight materials manufacturing technologies—has leveraged HPC to develop advanced materials supporting the development of lighter and thus more energy-efficient jet engines. Specifically, LIFT has partnered with LLNL to “[e]valuate stress/strain behavior of aluminum and Al-Li alloy for different lithium content, shapes, and volumes” and to perform “model validation of dislocation mobility, a property of alloys under stress.”[79] LIFT estimated that over 13 million gallons of jet fuel can be saved per year industry-wide by using Al-Li alloys.[80]

HPC Enabling Automotive Innovation and Mobility Solutions

As Automotive World’s Alyssa Altman wrote, “The future of the automotive industry relies on the ability to leverage HPC.”[81] Of course, HPC has long been used to design more fuel-efficient vehicles. For instance, South Carolina-based BMI Corp. has developed SmartTruck technology using supercomputer resources from ORNL that could save 1.5 billion gallons of diesel fuel and $5 billion in fuel costs per year.[82] As BMI CEO Mike Henderson explained, “We were able to run simulations based on the most complex tractor and trailer models instead of simplified models, and we were able to run them faster.”[83] Specifically, BMI used HPC to improve the aerodynamics of 18-wheel (Class 8) long-haul trucks, with the typical big rigs achieving fuel savings of between 7 and 12 percent.[84] Moreover, as in aerospace, access to HPC “shortened the computing turnaround time for BMI’s complex models from days to a few hours and eliminated the need for costly and time-consuming physical prototypes” allowing BMI to go from concept to a design that could be turned over to a manufacturer in 18 months instead of the 3.5 years it had originally anticipated.[85]

As auto manufacturers now look to design a new generation of connected and autonomous vehicles (CAVs), “HPC fosters the ability to render a model quickly, create prototypes remotely and design virtual crash tests.”[86] Indeed, developing and training CAVs and simulating more efficient traffic flows at scale requires the power and performance only HPC can provide.[87] As Altman wrote, “Without HPC, there is no way to accelerate the data-intensive process of the vehicles’ response systems.”[88] In other words, HPC will play a key role in helping design CAVs that reduce accidents caused by human error and help reduce congestion (by being able to communicate with other vehicles and transportation infrastructure). HPC will also play a role in facilitating the deployment of intelligent transportation systems more broadly. For instance, the University of Michigan has invested in a supercomputer that supports ML applications to enable researchers in its MCity program, which develops intelligent transportation systems, to perform more complex simulations and better train DL models to recognize signs, pedestrians, and hazards.[89]

HPC Enabling Consumer Packaged Goods Innovation

CPG companies such as Procter & Gamble (P&G) leverage supercomputing to understand formulations down to the molecular level across a wide range of products such as cosmetics, shampoos, soaps, and diapers, thereby improving product quality and performance, in part because HPC helps companies identify molecular characteristics not observable experimentally.[90] At P&G, HPC exerts significant impact on process design and optimization, material selection, product and package design, and supply chain optimization. As Alison Main, senior director of R&D at P&G, explained, HPC-powered M&S “is now the way we do work … [from] fragrance optimization and formula design to assessing fit and fluid absorption for feminine hygiene products.”[91] P&G maintains its own supercomputing hardware but also partners with U.S. national labs such as LANL and LLNL when it requires additional computing power. As Main explained, “Over the last decade, HPC has helped P&G save over $1 billion through replacing physical experiments, optimizing equipment design, increasing production capacity, and qualifying more efficient materials.”[92] In one case, P&G’s use of simulation and modeling allowed it to reduce the number of steps involved in a process design by over 50 percent.[93]

HPC-powered M&S helps P&G design, formulate, and fabricate a range of products from fragrances and shampoos to hygiene products, while also helping P&G save over $1 billion in costs over the past decade.

One celebrated application of HPC-powered M&S at P&G was understanding the optimal way paper fibers should contact one another (e.g., in paper towels or toilet paper) so as to maximize papers’ texture, absorption, softness, and fit within packaging. P&G partnered with LLNL to develop a large, multiscale model of paper products that simulated thousands of fibers with a resolution to the micron scale, with a project goal of reducing the paper pulp in P&G products by up to 20 percent.[94] To do so, LLNL developed a parallel computing program called “p-fiber,” which could quickly prepare the fiber geometry and meshing input data needed to simulate thousands of fibers. In the model, each individual paper fiber was represented by as many as 3,000 “bricks” or finite elements and the model generated up to 20 million finite elements and modeled 15,000 paper fibers.[95] P&G partnered with LLNL on the project not just for the computational power of the labs’ supercomputers, but also for their speed. As LLNL researcher Will Elmer explained, “We found that you can save on design cycle time. Instead of having to wait almost a day (19 hours), you can do the mesh generation step in five minutes. You can then run through many different designs quicker.”[96] In total, LLNL was able to run design simulations up to 225 times faster than meshing the fibers sequentially on P&G’s computer.[97] The effort yielded important insights into the structure of fibers and papers, including into their texture, durability, and tearability.
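
The speedup and model-size figures from that project reduce to simple arithmetic; the sketch below only restates the numbers cited above and is not the LLNL p-fiber code itself.

```python
# Restating the p-fiber meshing figures cited above as simple arithmetic.
# This is illustrative only, not the LLNL p-fiber code.

sequential_mesh_time_min = 19 * 60   # roughly 19 hours meshing fibers sequentially
parallel_mesh_time_min = 5           # about 5 minutes with the parallel p-fiber workflow
print(f"Mesh-generation speedup: ~{sequential_mesh_time_min / parallel_mesh_time_min:.0f}x")
# ~228x, consistent with the "up to 225 times faster" figure cited above

fibers = 15_000
max_elements_per_fiber = 3_000
print(f"Upper bound on finite elements: {fibers * max_elements_per_fiber:,}")  # 45,000,000
# The model actually generated up to ~20 million elements, i.e., an average of
# roughly 1,300 elements per fiber rather than the 3,000-element maximum.
```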

Main further explained, “In addition to product design and downstream product and process validation, like drop testing and package conveying, P&G also uses HPC to build our foundational understanding of chemical and substrate behavior which informs our innovations of the future.”[98] Main noted that P&G is partnering with Sandia National Laboratories to optimize its process to manufacture porous materials to guide design and reduce the energy required to make sustainable substrates for paper, feminine hygiene, and other absorbent products. P&G has also worked with LLNL to develop methods that leverage HPC to couple disparate time and length scales in molecular simulations.[99] These methods have enabled the creation of models of small molecules interacting with bacterial membranes to inform the development of new antimicrobial chemistries to enhance the performance of P&G’s antibacterial products.[100] As Main concluded, “Going forward, HPC will help P&G innovate with increasingly complex natural and recycled materials, bring new products to market faster, and enable us to explore more possibilities than we could dream of imagining with physical experimentation alone.”[101]

HPC Enabling U.S. Life Sciences Innovation

Exascale-era HPC promises to unleash a wide range of new biomedical discoveries and innovations and is already making tremendous contributions in oncology research and drug discovery. HPC also played a pivotal role in helping the global biomedical community tackle the COVID-19 pandemic. HPC has long been used in computational drug discovery and design wherein techniques such as molecular simulation can help model a biological target associated with a disease and identify drugs that might effectively bind to those targets, while also achieving a desired therapeutic outcome. But when the range of possible drug compounds is large, this process can take a very long time, and the cost of running many simulations is high, slowing the creation of life-saving drugs. HPC-enabled ML can complement this process by initially screening the known range of drug candidates to focus testing and simulation only on those with the right features to be successful, with a 2019 GAO study estimating ML can produce R&D cost savings of $300 million to $400 million per successful drug by accelerating drug discovery.[102] The following section examines how HPC has facilitated COVID-19 vaccine and therapeutic development, progressed oncology and Alzheimer’s drug research and innovation, and helped make gene sequencing possible.
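Before turning to those examples, the screen-then-simulate pattern described above can be illustrated with a minimal, hypothetical sketch (synthetic data and made-up descriptors, not any company’s actual pipeline): a model trained on compounds that have already been simulated ranks untested compounds so that scarce HPC cycles are reserved for the most promising candidates.

```python
# Illustrative sketch of ML-based virtual screening ahead of expensive HPC simulation.
# Features and labels are synthetic; real pipelines would use molecular descriptors
# or fingerprints and experimentally or computationally derived binding labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_compounds, n_features = 5_000, 64
X = rng.normal(size=(n_compounds, n_features))          # stand-in molecular descriptors
y = (X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=n_compounds)) > 1.0  # "binds" label

# Pretend 20 percent of compounds have already been simulated; the rest await screening.
X_known, X_screen, y_known, _ = train_test_split(X, y, test_size=0.8, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_known, y_known)                              # learn from simulated compounds

scores = model.predict_proba(X_screen)[:, 1]             # predicted probability of binding
shortlist = np.argsort(scores)[::-1][:100]               # top candidates for full simulation
print(f"Shortlisted {len(shortlist)} of {len(X_screen)} compounds for detailed HPC simulation")
```

The point is not the particular model but the workflow: cheap predictions triage a vast chemical space so that costly, physics-based simulations run only where they are most likely to pay off.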

COVID-19

During the COVID-19 pandemic, “nearly every public research supercomputer pivoted to some form of COVID research.”[103] For instance, weeks into the pandemic, an ORNL supercomputer, Summit, was tapped to run simulations of over 8,000 drug compounds to identify those most likely to prevent the virus from infecting host cells.[104] The ORNL team identified 77 compounds that represented promising candidates for testing by medical researchers. As Jeremy Smith, director of the University of Tennessee/ORNL Center for Biomolecular Physics and principal researcher for the study, explained, “Summit was needed to rapidly get the simulation results we needed. It took us a day or two whereas it would have taken months on a normal computer.”[105] The overall U.S. effort to leverage HPC in the pandemic fight was spearheaded by the COVID-19 HPC consortium, a unique public-private partnership between government, industry, and academia that provided a single point of access to the nation’s HPC and cloud-computing systems as well as a review mechanism to evaluate research proposals and ensure the most-promising ones were prioritized given both the high volume of research proposals submitted and limited supercomputing capacity.[106]

The consortium quickly reviewed over 200 research proposals and got 114 up and running on the nation’s supercomputing infrastructure.[107] Notable outcomes included the following:

Experimental testing of predicted compounds proposed by University of Tennessee researchers led to the discovery of several new inhibitors of viral proteins and the identification of three already-approved drug compounds that inhibit the infectivity of live coronavirus, with two of the identified compounds subsequently entering clinical trials.[108]

Michigan State researchers used an IBM supercomputer to perform analysis and predictive modeling of potential SARS-CoV-2 mutations and their impact on diagnostic testing, vaccines, and therapeutics (leading to several journal publications).[109]

Applying a new transcription-based drug-pair synergy approach, Mt. Sinai researchers used supercomputers to generate 35 billion predictions and identify 10 drug pairs predicted to target COVID-19 protein interactions.[110]

While scores more examples exist, suffice it to say that supercomputing represented an indispensable part of America’s COVID-19 response and undoubtedly contributed to one of the most rapid developments of vaccines and therapeutics in human history.

HPC and ML can work in tandem to identify promising molecular compounds to treat diseases, with one study finding the two can produce R&D cost savings of $300 million to $400 million per successful drug by accelerating drug discovery.

For Pfizer, supercomputing proved instrumental in the development of both its COVID-19 vaccine and its COVID-19 therapeutic. As Lidia Fonseca, executive vice president and chief digital and technology officer for Pfizer, explained:

Supercomputing helped us to fast-track the progression from discovery to development for Paxlovid, our oral treatment. Using sophisticated computational modeling and simulation techniques, we can now test molecular compounds in a virtual rather than physical lab environment. In the case of Paxlovid, this enabled us to test a fraction of the millions of known compounds that might have worked to treat COVID-19 so that we could quickly narrow down to just those compounds that had the highest chance of becoming medicines.[111]

As Vassilios Pantazopoulos, head of scientific computing and HPC at Pfizer, elaborates, “HPC-powered, large-scale modeling and simulation can impact biomedical innovation all the way from the earliest-stage research through to clinical trials, and one reason exascale computing matters is that it can enable an exponential speed-up in the ability to tackle research problems that were heretofore intractable due to the extent of computational power needed to develop models (i.e., of molecules or proteins) of the scale and size required for them to be accurate and useful for research scientists.”[112] Pantazopoulos notes that Pfizer ran millions of simulations in its drug discovery efforts over the past year, the vast majority of which were focused on the development of COVID-19 vaccines and therapeutics. He notes the company employs a large number of computational scientists who leverage supercomputing capabilities to accelerate the drug discovery process on a daily basis.

Fonseca further elaborated on how supercomputing and advanced analytics helped facilitate development of the COVID-19 vaccine: “Many of the allergic reactions that clinical trial participants reported while testing our vaccine resulted from certain lipid nanoparticles in the vaccine itself. Using supercomputing, we ran molecular dynamics simulations to find the right combination of lipid nanoparticle properties that reduce allergic reactions, thereby creating as safe and effective a vaccine as possible.”[113]

In addition to simulating lipid nanoparticles to come up with the best set of properties to reduce allergic reactions, molecular simulations also proved useful in making the mRNA vaccine more resilient to temperature changes, which enhanced the storability, transportability, and accessibility of the vaccines. Beyond the design of the vaccine, large-scale fluid dynamic simulations also made important contributions in helping Pfizer optimize and scale up its vaccine manufacturing process. While Pfizer’s development of COVID-19 vaccines and therapeutics offers just one salient and compelling example, HPC-powered M&S plays a role in supporting the development of many of the nearly 100 other innovative drugs in Pfizer’s pipeline.[114]

Cancer Research and Detection

From private enterprises to universities to U.S. government agencies such as DOE and the National Institutes of Health (NIH), HPC is facilitating cancer research and drug discovery. For instance, DOE and NIH’s National Cancer Institute (NCI) have launched a joint effort to develop new therapies and improve the ability to detect cancers at an earlier stage.[115] The DOE-NCI collaboration seeks to bring the power of HPC to bear on three specific areas of cancer research:

1. Cellular-level: advance the capabilities of patient-derived pre-clinical models to identify new treatments

2. Molecular-level: further understand the basic biology of undruggable targets

3. Population-level: gain critical insights on the drivers of population-level cancer outcomes.[116]

Their collaboration particularly targets one protein family—RAS—whose mutations cause an estimated 30 percent of cancers and are particularly prevalent in cancers of the lung, colon, and pancreas. While RAS mutations have been studied for decades, no RAS inhibitors exist, in large part because scientists have lacked a detailed molecular-level understanding of how RAS proteins engage and activate proximal signaling proteins. (See figure 10.) That’s in part because “RAS signaling takes place at and is dependent on cellular membranes, a complex cellular environment” which is hard to model using conventional techniques.[117]

Figure 10: Simulation capturing the molecular details of RAS genes in complex lipid membranes[118]

HPC-powered ML applications are now being deployed to generate multiscale physical simulations that provide a more realistic view of RAS cancer biology, offering a clear example of the modeling complexity at play and the value of HPC.[119] As Bhattacharya et al. described in “AI Meets Exascale Computing: Advancing Cancer Research With Large-Scale HPC”:

The principal challenge in modeling this system is the diverse length and timescales involved. Lipid membranes evolve over a macroscopic scale (micrometers and milliseconds (ms)). Capturing this evolution is critical, as changes in lipid concentration define the local environment in which RAS operates. The RAS protein itself, however, binds over time and length scales which are microscopic (nanometers and microseconds). In order to elucidate the behavior of RAS proteins in the context of a realistic membrane, our modeling effort must span the multiple orders of magnitude between microscopic and macroscopic behavior.[120]

Leveraging supercomputing assets from DOE and the National Nuclear Security Administration (NNSA) and working with experimentalists at the Frederick National Laboratory, the research team developed a macroscopic model that captures the evolution of the lipid environment and is consistent with an optimized microscopic model that captures protein-protein and protein-lipid interactions at the molecular scale.[121] The researchers were able to simulate at the macroscopic level a 1 x 1 μm (a micrometer, or one-millionth of a meter), 14-lipid membrane with 300 RAS proteins, generating over 100,000 microscopic simulations capturing over 200 ms of protein behavior. As the report notes, “This unprecedented achievement represents an almost two orders of magnitude improvement on the previous state of the art.”[122] But as the researchers noted, with exascale machines “we can substantially increase the dimensionality of the input space and its coverage,” and going forward they will use supercomputers “to include fully atomistic resolution, creating a three-level (macro/micro/atomistic) multiscale model” and “incorporate membrane curvature into the dynamics of the membrane.”[123]
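At a purely conceptual level, the macro-to-micro handoff the researchers describe resembles the following toy workflow (all fields, thresholds, and dynamics here are invented for illustration; the actual campaign used ML-driven selection and molecular dynamics codes on DOE/NNSA systems):

```python
# Toy sketch of a multiscale workflow: a coarse "macro" model evolves a lipid field,
# and cells containing proteins are handed off to stand-in "micro" simulations.
import numpy as np

rng = np.random.default_rng(1)
GRID = 64                                      # coarse macro grid over a membrane patch
lipid = rng.random((GRID, GRID))               # stand-in lipid composition field
ras_count = rng.poisson(0.1, (GRID, GRID))     # stand-in RAS protein counts per cell

def macro_step(field: np.ndarray) -> np.ndarray:
    """One crude diffusion-like update of the macro-scale field."""
    return 0.6 * field + 0.1 * (np.roll(field, 1, 0) + np.roll(field, -1, 0)
                                + np.roll(field, 1, 1) + np.roll(field, -1, 1))

def micro_simulation(local_lipid: float, n_ras: int) -> float:
    """Stand-in for a fine-grained protein-lipid simulation; returns a fake observable."""
    return local_lipid * n_ras + rng.normal(scale=0.01)

spawned = []
for step in range(10):
    lipid = macro_step(lipid)
    for i, j in zip(*np.nonzero(ras_count > 0)):   # hand off "interesting" cells
        spawned.append(micro_simulation(lipid[i, j], int(ras_count[i, j])))

print(f"Macro steps: 10; micro simulations spawned: {len(spawned)}")
```

The pattern, if not the physics, is the same: a cheap coarse model decides where expensive fine-grained simulations are worth running, and exascale capacity raises how many of those handoffs can be afforded.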

HPC and AI are also working in concert to facilitate real-time cancer surveillance in the United States. NCI started the Surveillance, Epidemiology, and End Results (SEER) program in 1973 to collect and publish cancer incidence and survival data from population-based cancer registries (covering 35 percent of the U.S. population), with the goal of facilitating data-driven discovery to understand the drivers of cancer outcomes in the real world.[124] Using AI-driven natural language processing (NLP) tools running on HPC, researchers have been able to accurately classify all five key cancer data elements—cancer site, laterality, behavior, histology, and grade—for 42.5 percent of cancer cases.[125] It’s a significant step toward the goal of achieving real-time cancer surveillance and facilitating data-driven M&S of patient-specific health trajectories to support precision oncology research at the population level.
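As a loose illustration of this kind of text-classification task (a tiny, invented dataset and a simple linear model, far simpler than the deep learning models run against SEER pathology reports at HPC scale), consider the following sketch, which assigns a hypothetical “cancer site” label to free-text report snippets:

```python
# Illustrative sketch of classifying pathology-report text into a cancer data element.
# The dataset and labels are invented; real systems train on millions of reports.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "infiltrating ductal carcinoma identified in left breast biopsy",
    "adenocarcinoma of the upper lobe of the right lung",
    "malignant melanoma excised from skin of the back",
    "lobular carcinoma in situ, right breast specimen",
    "non-small cell carcinoma involving the left lung",
    "pigmented lesion of skin consistent with melanoma",
]
sites = ["breast", "lung", "skin", "breast", "lung", "skin"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(reports, sites)

print(clf.predict(["biopsy of right lung shows adenocarcinoma"]))   # likely: ['lung']
```

Multiply this by five data elements and millions of registry documents and the need for HPC-scale training and inference becomes clear.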

Alzheimer’s Disease

Mental and neurological disorders and diseases cost the U.S. economy more than $1.5 trillion per year—as much as 8.8 percent of U.S. GDP.[126] The financial impact of Alzheimer’s disease alone is expected to soar to $1 trillion per year by 2050, with much of the cost borne by the federal government, according to the Alzheimer’s Association report “Changing the Trajectory of Alzheimer’s Disease.”[127] However, the United States could save $220 billion within the first five years and a projected $367 billion in the year 2050 alone if a cure or effective treatment for Alzheimer’s disease were found.

Supercomputing is stepping in to help meet the challenge. Researchers at the San Diego, California-based Salk Institute are using supercomputers to investigate how synapses in the brain work, their research focusing on the actions that occur when neurons send chemical messages along synapse pathways to other neurons. Essentially, they’re studying the release of neurotransmitters, their diffusion across synapses, and how they bind to receptors. Using supercomputers, researchers achieved a 150-fold speedup of simulations of ever-increasing complexity.[128]

Elsewhere, in 2021, UCLA and Johns Hopkins University scientists released research that examined 47,000 images of human brains to study how the brain thins during the early stages of Alzheimer’s and how that thinning relates to mild cognitive impairment. As Daniel Tward, an assistant professor of computational medicine and neurology at UCLA, explained, “Until now, we haven’t been able to measure these changes in living people. By using supercomputers like Comet at the San Diego Supercomputer Center at UC San Diego and Stampede2 at the Texas Advanced Computing Center, we were able to study a large cohort of patient images over time.”[129] The researchers used supercomputers to observe and quantify thinning in the transentorhinal cortex, which is located in the temporal lobe of the brain and is believed to be the first area impacted by Alzheimer’s disease (although until now this could not be diagnosed until autopsy results were available). The research could represent a critical breakthrough toward providing early diagnosis of Alzheimer’s. As Tward noted, using supercomputers “[reduced] computation time from months to days [and] allowed this complex neuroimaging project to be feasible.”[130]

Genome Sequencing

The role of supercomputing in unlocking the secrets of the human genome has been profound. In 2010, an epochal case unfolded involving Nicholas Volker, a four-year-old boy suffering from a mysterious, unknown illness that repeatedly attacked his intestines and proved untreatable with existing approaches to known ailments such as Crohn’s disease (a type of inflammatory bowel disease).[131] In a near last-ditch effort to save Nicholas’s life, doctors sequenced his DNA in the hopes of identifying the heretofore unknown genetic mutation, making Nicholas one of the first humans to have his genome sequenced for the express purpose of identifying a disease and one of the earliest examples of personalized genomic medicine. (Only 1 percent of Nicholas’s genome was actually sequenced, at a cost of $75,000, as doctors focused on exons, the part of each gene that contains the recipe for making proteins. Sequencing his whole genome would have cost $2 million at the time and taken months.) The DNA sequencing identified 16,142 variations, sections in which Nicholas’s pattern of DNA base pairs differed from the norm.[132] With the help of supercomputers and a novel software tool, doctors homed in on eight leading (gene mutation) suspects, “and examined them in detail by searching medical literature and gene functions.”[133] Doctors finally narrowed down the culprit to a mutation of the gene XIAP, found on the X chromosome, whose role is to block a process that makes cells die and helps prevent the immune system from attacking the intestines. At this position, the typical human sequence is thymine-guanine-thymine, which produces the amino acid cysteine; in Nicholas’s case, a single base-pair mutation yielded thymine-adenine-thymine, which produces a wholly different amino acid, tyrosine, and prevented the protein encoded by Nicholas’s XIAP gene from performing its job of protecting the intestines from immune-system attack. Armed with this knowledge, doctors successfully treated Nicholas with high-dose chemotherapy and an umbilical cord blood infusion.[134] It represented one of the first cases in history where advanced computational tools such as HPC and gene sequencing were used to help unearth a heretofore unknown disease and identify an effective treatment.
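The effect of that single base-pair change can be shown in a few lines of code using the standard genetic code (only a partial codon table is included here for illustration):

```python
# Minimal illustration of the mutation described above, using the standard genetic code:
# the DNA codon TGT encodes cysteine, while TAT encodes tyrosine.
CODON_TABLE = {"TGT": "Cysteine", "TGC": "Cysteine", "TAT": "Tyrosine", "TAC": "Tyrosine"}

normal_codon = "TGT"                                      # typical sequence at this position
mutated_codon = normal_codon[0] + "A" + normal_codon[2]   # single base-pair substitution

print(f"{normal_codon} -> {CODON_TABLE[normal_codon]}")    # TGT -> Cysteine
print(f"{mutated_codon} -> {CODON_TABLE[mutated_codon]}")  # TAT -> Tyrosine
```

Finding which one of 16,142 candidate variations mattered, by contrast, is the search problem that required supercomputing and novel software to solve.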

As Hans Hofmann, director of the Center for Computational Biology and Bioinformatics at the University of Texas, summarized, “In the life sciences, none of the recent technological advances would have been possible without supercomputers.”[135] He noted that sequencing the human genome in the first instance took eight years, thousands of researchers, and about $1 billion; today, researchers can sequence a person’s entire genetic code, about three billion base pairs, for $600 in a matter of hours (with the $100 genome not far behind).[136] And while a person’s genes can now be sequenced cheaply without a supercomputer (just as an AI-based voice recognition system on a smartphone doesn’t need a supercomputer to operate), this doesn’t discount the instrumental role HPC has played in developing the algorithms and knowledge base that made this possible.

Clean Energy Innovation

Exascale-era HPC will empower advancements in numerous areas of clean-energy innovation, from the optimization of both wind turbine and wind farm design and construction to management of smart electric grids.

Wind Energy

The ExaWind initiative seeks to leverage HPC technologies in support of the ambition of having renewable, inexhaustible wind energy resources account for as much as 20 percent of U.S. energy needs over the next 10 years.[137] But achieving wide-scale wind energy deployment will depend upon understanding, predicting, and reducing plant-level energy losses from a variety of physical flow phenomena and therefore “requires the ability to predict the fundamental flow physics and coupled structural dynamics that govern whole wind plant performance, including wake formation, complex-terrain impacts, and turbine-turbine interactions through wakes.”[138] In other words, it will require understanding how to design the most energy-producing wind turbines—even turbines capable of adapting in real time to changes in wind speed and direction, much as a plant tracks the movement of the sun—as well as how to design entire wind farms in the most effective manner (i.e., such that the wake disturbance from turbines in one wind farm doesn’t undermine the energy-producing power of others). That’s a significant challenge because downstream wake effects can spread over an area of multiple kilometers and decrease the efficiency of a downstream wind turbine by up to 40 percent, while increasing the “out-of-plane” load (meaning the external pressure exerted on turbine blades) by up to 40 percent as well.[139] Success of the ExaWind initiative will require development of an “M&S capability that resolves turbine geometry and uses adequate grid resolution (down to the micrometre scale) … [while also modeling the effect of] atmospheric turbulent eddies and generation of near-blade vorticity and propagation and breakdown of this vorticity, within the turbine wake” to a significant distance downstream.[140]
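For a sense of why wake losses matter, and of how much simpler today’s engineering approximations are than the blade-resolving simulations ExaWind targets, the following sketch applies the classic Jensen (Park) wake model with illustrative parameter values (the rotor diameter, thrust coefficient, and wake decay constant are typical textbook choices, not ExaWind inputs, and a single fully overlapping wake overstates losses relative to real farms):

```python
# Jensen (Park) single-wake model: estimated wind-speed deficit downstream of a turbine.
# Parameters are illustrative defaults, not values from any specific wind plant study.
import math

def jensen_deficit(x_downstream_m: float, rotor_diameter_m: float = 120.0,
                   ct: float = 0.8, k: float = 0.05) -> float:
    """Fractional wind-speed deficit a distance x downstream of a turbine."""
    return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + 2.0 * k * x_downstream_m / rotor_diameter_m) ** 2

for spacing_in_diameters in (3, 5, 7, 10):
    x = spacing_in_diameters * 120.0
    deficit = jensen_deficit(x)
    power_loss = 1.0 - (1.0 - deficit) ** 3      # power scales with wind speed cubed
    print(f"{spacing_in_diameters}D spacing: wind deficit {deficit:.1%}, power loss {power_loss:.1%}")
```

Simple algebraic models like this capture the broad trend that losses shrink with turbine spacing, but only simulations that resolve terrain, turbulence, and blade-scale vorticity, of the kind ExaWind is building toward, can predict actual plant-level performance.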

Wake effects from a wind turbine can decrease the efficiency of downstream turbines by as much as 40 percent, meaning HPC can play a pivotal role in informing not just the design of wind turbine blades but also the positioning of wind turbines, and indeed entire wind farms, relative to one another.

To this end, GE is partnering with ORNL to leverage Summit’s supercomputer-driven simulations to improve efficiencies in offshore wind energy production.[141] As GE research aerodynamics engineer Jing Li explained, “The Summit supercomputer will allow GE’s team to run computations that would otherwise be impossible … [thus] supporting research [that] could dramatically accelerate offshore wind power.”[142] The GE-ORNL collaboration focuses in particular on “study[ing] coastal low-level jets, which produce a distinct wind velocity profile of potential importance to the design and operation of future wind turbines.”[143] As Li explained, “We’re now able to study wind patterns that span hundreds of meters in height across tens of kilometers of territory down to the resolution of airflow over individual turbine blades … [which allows us] to understand poorly understood phenomena like coastal low-level jets in ways previously not possible.”[144]

GE Research has also partnered with ORNL to leverage Summit to model how complex flow characteristics affect the performance of gas turbines in order to improve their design. That matters because each 1 percent increase in gas turbine efficiency translates to an emissions reduction of 17,000 metric tons of carbon dioxide, equivalent to taking 3,500 automobiles off the road. Just as with jet engines, GE leverages HPC to construct full-annulus, end-to-end M&S of the functioning of its gas turbines. As Arthur explained, “Due to computational limitations on what we could simulate, we used to examine one combustor at a time, but with leadership-class HPC we were able to model multi-combustor dynamic interactions.”[145] One significant result of the M&S analyses was insight into a thermoacoustic instability in a next-generation turbine design, which led to a redesign that improved the turbine’s energy-producing efficiency.

As Michal Osusky, a project leader in GE Research’s Thermosciences group, concluded, “We’re able to conduct experiments at unprecedented levels of speed, depth and specificity that allow us to perceive previously unobservable phenomena in how complex industrial systems operate.”[146] Like P&G, GE is quick to emphasize the importance of its partnerships with the national labs, noting, “Prior collaborations [with ORNL] … [have led to] significant improvements in combined cycle power plant efficiency, wind energy output and jet engine performance.”[147] In fact, by 2017, GE had already used 1 billion core hours on the national lab machines across a number of the labs.[148]

Smart Grid Optimization

Supercomputers will be used in conjunction with AI/ML to optimize the design and operation of America’s smart electric grids going forward. Power grids operate by maintaining balance between electricity supply—arriving from a wide variety of fossil, nuclear, or renewable-based sources—and demand from households, businesses, and factories. Demand-supply imbalances “can result in large-scale blackouts and/or permanently damage very large and expensive components.”[149] Exascale capabilities may empower grid operators “to optimize power grid response in a near-term timeframe (e.g., 30 minutes) to a variety of underfrequency hazards via physical and control threat scenarios using comprehensive modelling that includes generation, transmission, load and cyber/control elements.”[150] As Mihai Anitescu, a senior computational mathematician in Argonne National Laboratory’s Mathematics and Computer Science Division, explained, power grid managers will need to bring HPC to bear on “a whole new set of mathematical challenges [that] arise—from predicting and mitigating damaging events to modeling the uncertainties associated with dynamic power system behavior.”[151]
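At its simplest, the balancing problem grid operators solve can be posed as an economic dispatch linear program. The following sketch (with invented generator costs and capacities) meets a demand forecast at least cost; it is a tiny stand-in for the network-constrained, uncertainty-aware optimizations that require HPC:

```python
# Minimal economic dispatch sketch: meet demand at least cost within generator limits.
# Costs, capacities, and demand are invented for illustration; real grid optimization
# adds transmission networks, contingencies, and forecast uncertainty.
from scipy.optimize import linprog

demand_mw = 950.0
cost_per_mwh = [20.0, 35.0, 60.0, 90.0]                  # four generators, cheapest first
capacity_mw = [(0, 400), (0, 300), (0, 350), (0, 200)]   # (min, max) output per generator

result = linprog(c=cost_per_mwh,
                 A_eq=[[1.0, 1.0, 1.0, 1.0]], b_eq=[demand_mw],  # generation must equal demand
                 bounds=capacity_mw, method="highs")

for i, mw in enumerate(result.x):
    print(f"Generator {i}: {mw:.0f} MW")
print(f"Total cost: ${result.fun:,.0f}/hour")
```

Scaling this idea to thousands of generators and buses, security constraints, and probabilistic scenarios over rolling 30-minute horizons is precisely the kind of workload exascale systems are expected to make tractable.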

Defense and Environment-Oriented Applications of HPC

Supercomputers play an indispensable role in helping meet a wide range of defense-, mission-, and social-oriented priorities across a number of domains from national security to weather, climate, earthquake, and hurricane forecasting. The following section explores the use of supercomputing in some of these environments, though even these examples barely scratch the surface of how HPC is being used in other fields from astrophysics to particle physics.

Nuclear Stockpile Stewardship

Supercomputers play a vital role in supporting U.S. national security, and particularly so when it comes to America’s nuclear security posture. In fact, one reason America needs assurance of access to the world’s best supercomputers is that they give U.S. defense planners a competitive edge and allow the U.S. defense industrial system to design leading-edge weapons systems and national defense applications faster than anyone else can. HPC contributes to everything from helping to ensure the safety and operational capacity (e.g., nuclear stockpile stewardship) of the arsenal to helping evaluate, test, and certify new nuclear weapons. Put simply, supercomputers help the DOE’s NNSA “maintain confidence in the nation’s nuclear weapons.”[152]

Since the United States ceased underground nuclear testing in 1992, American scientists have developed advanced application codes and software “that can simulate how the nuclear stockpile would work without physical testing” by enabling modeling of nuclear explosions at the requisite “scale, density, and temperatures.”[153] Supercomputers allow scientists “to attempt to create a realistic model of what happens inside a nuclear explosion,” with one study modeling the behavior of 9 billion individual atoms in an atomic explosion in an analysis that took over a week and used 212,000 microprocessors.[154] In 2011, supercomputers at LLNL revealed a weakness in America’s process for storing and maintaining nuclear weapons that could have led many of them to “fail catastrophically” if ever needed for use.[155]

LLNL will take delivery of America’s third exascale-capable supercomputer, El Capitan, in 2023, and it will play an important role in validating new weapons platforms such as the Sentinel, which will replace the Minuteman III intercontinental ballistic missile, without requiring live testing of the weapon. Researchers will use El Capitan to “run tests such as multiphysics and multidomain codes, which model the effects on the weapons under different conditions such as heat and cold and scenarios such as detonation and explosion.”[156] In short, HPC enables the United States to model and understand the effectiveness, yield, and explosive capability of nuclear weapons without violating the nuclear test ban treaty.

Weather and Climate Forecasting

The advent and evolution of HPC have transformed weather and climate forecasting over the past decade, contributing to the development of higher-resolution, more-accurate, more-timely, and longer-term forecasting. Weather models predict conditions for the next 10 days or couple of weeks, whereas climate models predict trends over much longer periods of time. As Phil Webster, director of NASA’s Center for Climate Simulation, explained, “Rapid increases in computing power makes the models ever-more powerful and sophisticated, allowing us to simulate our complex environment in greater detail.”[157]

Exascale-era supercomputing has the potential to bring weather and climate forecasting into a new era. Specifically, the role played by clouds is incredibly important in understanding weather and climate and needs to be modeled accurately and intricately. Yet, until now, atmospheric models have run at relatively coarse resolutions: the spacing between the grid points at which the models resolve the atmosphere has been so wide that clouds fall in between them, so an approximate parameterization has been used to represent clouds’ effects. Exascale systems will allow researchers to run climate models at “cloud-resolving resolutions” in production runs that simulate decades or centuries at a time. The proliferation of global climate sensors further contributes to ever-larger datasets against which ever-more sophisticated, computationally intense models can be run to understand both the essential ocean, air, land, ice, and cloud interactions that drive global weather and climate and the unique patterns driving weather and climate from tropical to polar regions.

With regard to weather specifically, forecasters have historically been able to deliver effective limited-area model forecasts, meaning a reliable forecast for a small country (e.g., the United Kingdom or New Zealand) or a small state. The next step, which exascale HPC will facilitate, will be creating an Earth model that accurately represents and reliably predicts weather as a global system at a very high level of resolution (and a better picture of what’s happening globally will in turn lead to better local predictions). Already, America’s introduction of its High-Resolution Rapid Refresh (HRRR) model—a real-time, three-kilometer-resolution, hourly updated, cloud-resolving, convection-allowing atmospheric model—has paid dividends in providing more accurate, real-time weather forecasts. (It recently helped save lives by predicting the unusually high winds driving California wildfires in the summer of 2022.)[158] Related to this will be extending the length of reliable weather forecasts: forecasters currently do a fairly good job predicting the weather over the next week, but they’re constrained in providing accurate forecasts six to eight weeks out (as this timeframe starts to hit the weather/climate interface).

Exascale systems will allow researchers to improve their climate and weather models by including “cloud-resolving resolutions” at much greater levels of granularity and fidelity.

Expanding the availability of supercomputing assets to facilitate weather and climate understanding has long been a priority for researchers. A December 2021 report on “Priorities for Weather Research” by the National Oceanic and Atmospheric Administration’s (NOAA’s) Science Advisory Board calls for the United States to “[e]xpand high performance computing capacity by two orders of magnitude (over ten years) to support operational forecasts and data dissemination and provide critically lacking capacity in U.S. weather research.”[159] It notes that “HPC shortfalls and requirements have been highlighted in many of this report’s recommendations.” It adds, “Without sufficient HPC investments, the loss of potential advancements is tremendous and cannot be overstated.” The report concludes, “From an operational NWP [numerical weather prediction] perspective, a four-fold increase in model resolution in the next ten years (sufficient for convection-permitting global NWP and kilometer-scale regional NWP) requires on the order of 100 times the current operational computing capacity.”[160]
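The 100-fold figure is consistent with a back-of-envelope scaling argument (an illustration, not NOAA’s own derivation): refining the horizontal grid multiplies the number of grid columns, and numerical stability ties the time step to the grid spacing, multiplying the number of time steps as well, so

```latex
\text{Cost} \;\propto\; N_x \, N_y \, N_t
\;\propto\; \left(\frac{1}{\Delta x}\right)^{2} \cdot \frac{1}{\Delta x}
\;=\; \left(\frac{1}{\Delta x}\right)^{3}
\quad\Longrightarrow\quad
\frac{\text{Cost at } \Delta x/4}{\text{Cost at } \Delta x} \;=\; 4^{3} \;=\; 64.
```

Added vertical resolution, more sophisticated model physics, larger ensembles, and data assimilation push that multiplier from roughly 64 toward the order-of-100 increase the report cites.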

The United States made considerable progress on this front in June 2022, when NOAA launched two new supercomputers (Dogwood and Cactus, at 12.1 petaflops, then-rated the world’s 49th and 50th fastest supercomputers, respectively). As NOAA administrator Rich Spinrad explained, “More computing power will enable NOAA to provide the public with more detailed weather forecasts further in advance.”[161] In particular, NOAA explained, “Enhanced computing and storage capacity will allow NOAA to deploy higher-resolution models to better capture small-scale features like severe thunderstorms, more realistic model physics to better capture the formation of clouds and precipitation, and a larger number of individual model simulations to better quantify model certainty.”[162] The twin new supercomputers will support upgrades to the U.S. Global Forecast System (GFS) as well as a new hurricane forecast model called the Hurricane Analysis and Forecast System (HAFS).

These investments matter, for as U.S. Secretary of Commerce Gina Raimondo explained, “Accurate weather and climate predictions are critical to informing public safety, supporting local economies, and addressing the threat of climate change.”[163] Indeed, the United States is currently experiencing approximately six times as many billion-dollar weather and climate disasters per year as it did in the 1980s.[164] Extreme rainfalls are becoming more frequent as global temperatures rise, with a 1-in-100-year storm in Washington, D.C., expected to become a 1-in-25-year event by mid-century—four times more likely—and a 1-in-15-year event by 2080.[165] More accurate weather and climate predictions will be important not only to alerting citizens and getting them to safety during storms, but also to helping cities understand the need to upgrade drainage and sewer infrastructure (one study finds that only one-half of large U.S. cities have made the infrastructure improvements needed to adapt to heavier rainfall).[166]

Better weather and climate forecasting will also play an important role in facilitating the advent of smart agriculture, the application of IT such as AI, Internet of Things, big data, and smart drones and tractors to increase agricultural yield, productivity, and sustainability. That matters especially given the Food and Agriculture Organization’s estimate that by 2050 global society will need to produce 60 percent more food than it currently does to feed a world population of 9.3 billion.[167] To meet that challenge, the Global Agricultural Productivity Index estimates global agricultural productivity needs to increase at an average annual rate of 1.73 percent, but it finds that agricultural productivity growth in low-income countries is rising at an average annual rate of just 1 percent.[168] Accurate weather forecasts are indispensable to increasing agricultural productivity, informing decisions about what crops to plant, how and when to fertilize and irrigate, and when to harvest, meaning HPC will play an important role in helping boost global agricultural productivity and meet the world’s food needs.[169]

Earthquake Forecasting

Because earthquakes occur infrequently and as a result of complex geological factors operating deep underground, forecasting earthquakes effectively relies on massive computer models and multifaceted simulations that recreate the rock physics and regional geology and therefore require big supercomputers to execute.[170] As Paul Johnson, a seismologist at LANL, explained, “Forecasting the instantaneous behavior of faults requires massive amounts of geophysical data to create machine learning models [using the help of supercomputers].”[171]

In 2017, researchers from the U.S. Geological Survey and the Southern California Earthquake Center at USC Dornsife College of Letters, Arts and Sciences (SCEC) released a flagship paper in Seismological Research Letters explaining how they’d used supercomputers (notably the Stampede2 supercomputer) to develop what was at the time “the most advanced earthquake forecast model in the world.”[172]

The team’s forecast model became the first fault-based model to provide self-consistent rupture probabilities from the very short term (over a period of less than an hour) to the very long term (up to more than a century). It is also the first model capable of evaluating the short-term hazards that result from multi-event sequences of complex faulting. To create the model, the researchers ran 250,000 rupture scenarios of the state of California—vastly more than the roughly 8,000 ruptures simulated in the previous model.[173] The research also broke new ground in predicting the likelihood of follow-on earthquakes, finding that in the week following a magnitude 7.0 earthquake, the probability of another of similar magnitude would be as much as 300 times greater than normal.

As Yehuda Ben-Zion of the SCEC explained, “[HPC is helping to dramatically improve] our understanding of many earthquake phenomena. It’s clarifying how often earthquakes of varying magnitudes are expected in different regions, and how factors such as the direction that the fault ruptures and wave resonance in sedimentary basins, where loose rocks and soil settle over millions of years, all combine to increase ground motion at certain locations.”[174] Importantly, more-accurate seismic simulations can be used to generate advanced, regionally focused earthquake hazard maps, with the potential to save many lives and much property. For instance, California is working to complete a statewide seismic hazard map, and ultimately state- and national-level seismic hazard maps will help set building codes and insurance rates while advancing public safety.[175]

Policy Recommendations

This report offers the following policy recommendations to keep the United States at the leading edge of HPC production and application.

Fully appropriate authorized HPC-related investments and programs in the CHIPS and Science Act of 2022. The CHIPS and Science Act authorizes $280 billion to advance U.S. scientific research and industrial competitiveness.[176] Unfortunately, it only fully appropriates the roughly $80 billion in CHIPS legislation to support the U.S. semiconductor industry (roughly $39 billion in incentives, $13 billion in R&D, and $25 billion in R&D tax credits), meaning the remaining roughly $200 billion authorized for science and research funding will need to be appropriated in FY 2023 budgets and beyond. In particular, the legislation calls for a roughly 40 percent increase in funding for Advanced Scientific Computing Research (ASCR) program activities, increasing annual ASCR funding from $1.03 billion in FY 2021 to $1.42 billion by FY 2027.[177] Congress should fully appropriate these sums as envisioned over the next five years.

Further, the CHIPS and Science Act directs NSF to collect information and regularly publish a report articulating the computational needs of NSF-funded projects. It further directs NSF to develop and regularly update an advanced computing roadmap and initiate a secure computing enclave pilot program to assist universities in ensuring the security of data resulting from federally supported research.[178] Congress should ensure funding is provided for these initiatives at NSF, and NSF should enact them with alacrity. Similarly, NSF has established a Regional Innovation Engines (RIE) program whose mission is to harness the nation’s science and technology research and catalyze and foster innovation ecosystems across the United States.[179] Here, the RIE program should have access to regional supercomputing resources. It should also prioritize partnerships with industry as much as with academia and national laboratories, and be attuned to industry concerns such as protecting intellectual property rights, clarifying data rights, and ensuring industries’ ability to participate in research programs.

Increase funding for DOE NNSA’s Advanced Simulation and Computing (ASC) program at a similar level to the ASCR increases. ASC and ASCR jointly operate the Exascale Computing Initiative. However, because ASC is authorized through the National Defense Authorization Act (NDAA), it was not part of the CHIPS and Science Act. Therefore, in the next NDAA, Congress should further support advanced computing by increasing ASC funding 40 percent over the ensuing five years to match the investment increases in ASCR.

Ensure that newly created regional technology and innovation hubs connect to and invest in supercomputing resources in pursuit of their mission. The CHIPS and Science Act authorizes $10 billion over five years to create 20 geographically distributed “regional technology and innovation hubs” in areas that aren’t currently leading technology centers, a proposal originally initiated by the Information Technology and Innovation Foundation (ITIF).[180] The program will use a merit-based competitive process to bring together consortia of local and state governments, universities, industry, labor organizations, and other stakeholders to promote innovation capacity within selected regions.[181]

Making the necessary investments in HPC systems would represent an incubation opportunity to spur deeper partnerships and innovation between industry, academia, and the public sector. In particular, Congress should appropriate monies that were authorized to the National Science Foundation and its new Technology, Innovation, and Partnerships Directorate (TIP) to fund regional supercomputing centers and use HPC to underpin this effort across all industry sectors. Currently a disconnect exists between academia and industry. The purpose of the TIP Directorate is to jumpstart stronger public-private partnerships by connecting government, academia, and industry through investment in critical technologies such as advanced computing, AI, 5G, and beyond. Proposed investments could transform academic-led research into successful commercial products through strong, effective government-academic-industry partnerships, which could result in enhanced U.S. scientific leadership, economic competitiveness, and improvements to Americans’ overall health and social and economic wellbeing.

Any HPC-focused export controls should consider foreign availability and be aligned with controls introduced by allied nations to the greatest extent possible. In general, U.S. export controls should be regularly updated to reflect the global state of play in semiconductor and HPC industries, such that controls do not preclude U.S. enterprises’ ability to sell goods that are on a technical par with commercially available goods and services from foreign competitors. Further, any emerging technologies that are ultimately deemed to meet the statutory standards for export controls should be designated as such only in cases of exclusive development and availability within the U.S. market—and the controls should be removed if and when that exclusivity no longer exists. Lastly, the United States should eschew the application of unilateral export controls and seek to develop a more ambitious and effective plurilateral approach to promulgate export controls in advanced-technology industries like HPC and semiconductors among like-minded nations.[182]

Graduate more computer scientists and electrical engineering students and bolster America’s STEM pipeline. As Berardino Baratta, CEO of MxD, the Digital Manufacturing & Cybersecurity Institute, in Chicago, Illinois, explained, “America builds all the technology it needs, but without an adequately skilled workforce to utilize it, ultimately we may not succeed.”[183] Yet, the U.S. domestic STEM talent pipeline is significantly underdeveloped. For instance, one study estimates that, from 2014 to 2024, the roughly 60,000 computer/IT students graduating from U.S. colleges each year will fall 40,000 short of annual U.S. labor market needs.[184] In other words, it foresees a gap of 400,000 computer science/IT graduates over that decadal period. Meanwhile, 81 percent of full-time graduate students in U.S. electrical engineering programs, and 79 percent in computer science, are international students.[185] While it’s great that U.S. universities still attract the world’s best and brightest, it’s imperative the United States create pathways for these students to stay in America after they graduate, which is why ITIF has called for stapling a green card to the diplomas of foreign-born students graduating from U.S. universities in STEM fields. The United States should also double the number of STEM-focused high schools, ensure that computer science is taught in all U.S. high schools, and establish an incentive program for universities to expand their computer science offerings.[186]

Conclusion

HPC represents an essential strategic national capability for the United States, and it’s imperative the United States continue to stand at the leading edge of both HPC systems development and their meaningful use and application. As with other high-tech sectors of the U.S. economy, from biotechnology to semiconductors, such leadership requires continual stewardship and investment.[187]


Acknowledgments

This report was made possible in part by generous support from HPE. ITIF maintains complete editorial independence in all its work. All opinions, findings, and recommendations are ITIF’s and do not necessarily reflect the views of its supporters. The author would like to thank Robert Atkinson and Ian Clay for their editorial assistance with this report. Any errors or omissions are the author’s responsibility alone.

About the Author

Stephen Ezell is vice president for global innovation policy at ITIF and director of ITIF’s Center for Life Sciences Innovation. He also leads the Global Trade and Innovation Policy Alliance. His areas of expertise include science and technology policy, international competitiveness, trade, and manufacturing. He is the coauthor of Innovating in a Service-Driven Economy: Insights, Application, and Practice (Palgrave Macmillan, 2015) and Innovation Economics: The Race for Global Advantage (Yale, 2012).

About ITIF

The Information Technology and Innovation Foundation (ITIF) is an independent, nonprofit, nonpartisan research and educational institute focusing on the intersection of technological innovation and public policy. Recognized by its peers in the think tank community as the global center of excellence for science and technology policy, ITIF’s mission is to formulate and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress.

For more information, visit us at www.itif.org.

Endnotes

[1].     Stephen Ezell and Robert D. Atkinson, “The Vital Importance of High-Performance Computing to U.S. Competitiveness” (ITIF, April 2016), https://itif.org/publications/2016/04/28/vital-importance-high-performance-computing-us-competitiveness/.

[2].     U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy (DOE EERE), “High Performance Computing for Manufacturing: Using Supercomputers to Improve Energy Efficiency and Performance” (DOE EERE, May 2021), https://hpc4energyinnovation.llnl.gov/sites/hpc4energyinnovation/files/2022-03/HPC_Manufacturing_Brochure_MAY_13_2021.pdf.

[3].     U.S. Department of Energy, “At the Frontier: DOE Supercomputing Launches the Exascale Era,” news release, June 7, 2022, https://www.energy.gov/science/articles/frontier-doe-supercomputing-launches-exascale-era. Note: China is believed to have achieved exascale-level supercomputing on two supercomputers as of 2021, but it did not submit these machines for official inclusion on the top 500 list. See: Nicole Hemsoth, “China Has Already Reached Exascale-On Two Separate Systems,” The Next Platform, October 26, 2021, https://www.nextplatform.com/2021/10/26/china-has-already-reached-exascale-on-two-separate-systems/.

[4].     Top500.org, “The List: June 2022,” https://top500.org/lists/top500/list/2022/06/.

[5].     Ibid.

[6].     Kristin Houser, “US’s Frontier is the world’s first exascale supercomputer,” FreeThink, June 4, 2022, https://www.freethink.com/technology/fastest-supercomputer.

[7].     Ryan Smith, “Intel’s Aurora Supercomputer Now Expected to Exceed 2 ExaFLOPS Performance,” AnandTech, October 27, 2021, https://www.anandtech.com/show/17037/aurora-supercomputer-now-expected-to-exceed-2-exaflops-performance.

[8].     Kristin Houser, “This $600 Million Supercomputer Will Manage the U.S.’s Nukes,” The Byte, August 13, 2019, https://futurism.com/the-byte/nuclear-stockpile-supercomputer-el-capitan.

[9].     Photo courtesy of Oak Ridge National Laboratory. Oak Ridge National Laboratory, “Frontier supercomputer debuts as world’s fastest, breaking exascale barrier,” news release, May 30, 2022, https://www.ornl.gov/news/frontier-supercomputer-debuts-worlds-fastest-breaking-exascale-barrier.

[11].   Ezell and Atkinson, “The Vital Importance of High-Performance Computing to U.S. Competitiveness,” 7.

[12].   Dr. Mark Seager, “Innovation of HPC and as a Driver for Economic Benefit” (presentation, Washington, D.C., February 2010).

[13].   Dr. Mark Seager, phone interview by Stephen Ezell, ITIF, March 2, 2016.

[14].   U.S. Department of Energy and National Nuclear Security Administration (NNSA), “Overview of the Exascale Computing Project,” https://www.exascaleproject.org/about/.

[15].   Stephen Ezell, conversation with Rick Arthur, senior director for advanced computational methods research at GE Research, August 25, 2022.

[16].   DOE EERE, “High Performance Computing for Manufacturing: Using Supercomputers to Improve Energy Efficiency and Performance,” 22.

[17].   Stephen Ezell, conversation with Rick Arthur, senior director for advanced computational methods research at GE Research, August 25, 2022.

[18].   DOE EERE, “High Performance Computing for Manufacturing: Using Supercomputers to Improve Energy Efficiency and Performance,” 21.

[19].   Ibid.

[20].   Ibid., 3.

[21].   Ibid., 3.

[22].   Ezell and Atkinson, “The Vital Importance of High-Performance Computing to U.S. Competitiveness,” 9.

[23].   Nossokoff, Sorensen, and Joseph, “To Out-compute is to Out-compete,” 3.

[24].   DOE EERE, “High Performance Computing for Manufacturing,” 13.

[25].   Ibid.

[26].   Image courtesy of Rick Arthur, GE Research.

[27].   Jacques Bughin, “Notes From the AI Frontier: Modeling the Impact of AI on the World Economy” (McKinsey & Company, December 2018), https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy.

[28].   Daniel Castro and Joshua New, “The Promise of Artificial Intelligence” (Center for Data Innovation, October 2016), 2, http://www2.datainnovation.org/2016-promise-of-ai.pdf.

[29].   Los Alamos National Laboratory, “High-performance computing makes national security possible,” January 20, 2022, https://www.facebook.com/watch/?v=3132555510289643.

[30].   Nossokoff, Sorensen, and Joseph, “To Out-compute is to Out-compete,” 9.

[31].   Ibid., 9.

[32].   Stephen Ezell conversation with Bob Sorensen, Hyperion Research, August 16, 2022; Stephen McBride, “Edge Computing Is Leading the Next Great Tech Revolution,” Forbes, November 16, 2020, https://www.forbes.com/sites/stephenmcbride1/2020/11/16/edge-computing-is-leading-the-next-great-tech-revolution.

[33].   Earl Joseph et al., “The Economic and Societal Benefits of Linux Supercomputers” (Hyperion Research, April 2022), 1, https://davidbader.net/publication/2022-hyperionresearch/2022-HyperionResearch.pdf.

[34].   Ibid., 7.

[35].   Nossokoff, Sorensen, and Joseph, “To Out-compute is to Out-compete,” 3.

[36].   Joseph et al., “The Economic and Societal Benefits of Linux Supercomputers,” 3.

[37].   Ibid., 8–9.

[38].   Craig A. Stewart et al., “Return on Investment for Three Cyberinfrastructure Facilities: A Local Campus Supercomputer, the NSF-Funded Jetstream Cloud System, and XSEDE (the eXtreme Science and Engineering Discovery Environment),” 2018 IEEE/ACM 11th International Conference on Utility and Cloud Computing (UCC) (2018): 223–236, https://ieeexplore.ieee.org/document/8603169.

[39].   National Center for Supercomputing Applications, University of Illinois Urbana-Champaign, “NCSA’s Blue Waters project provides $1.08 billion direct return to Illinois’ economy,” news release, May 10, 2017, https://www.ncsa.illinois.edu/archive/ncsas-blue-waters-project-provides-1-08-billion-direct-return-to-illinois-e/.

[40].   Ibid.

[41].   Joseph et al., “The Economic and Societal Benefits of Linux Supercomputers.”

[42].   Earl Joseph et al., “Hyperion Research ISC22 Market Update” (Hyperion Research, May 2022), https://hyperionresearch.com/wp-content/uploads/2022/06/Hyperion-Research_ISC22-Market-Update_May-30-2022_Combined.pdf.

[43].   Don Clark, “U.S. Retakes Top Spot in Supercomputer Race,” The New York Times, May 31, 2022, https://www.nytimes.com/2022/05/30/business/us-supercomputer-frontier.html.

[44].   Nossokoff, Sorensen, and Joseph, “To Out-compute is to Out-compete,” 3.

[45].   Clark, “U.S. Retakes Top Spot in Supercomputer Race.”

[46].   Top500.org, “The List,” https://top500.org/lists/top500/list/2022/06/.

[47].   Top500.org, “List Statistics” (Countries/Regions, November lists 2010 through 2020), accessed August 9, 2022, https://top500.org/statistics/list/.

[48].   Tiffany Trader, “China’s Supercomputing Strategy Called Out,” HPC Wire, July 17, 2014, http://www.hpcwire.com/2014/07/17/dd/.

[49].   Thesigers, “High Performance Computing in US-China Relations” Sovereign Data 1, no. 3 (September 2015): 1, https://pure.royalholloway.ac.uk/portal/files/25502946/SD_2015_09_01_PUBLIC_.pdf.

[50].   Ezell, “The Vital Importance of High-Performance Computing to U.S. Competitiveness,” 38.

[51].   E4S, “The Extreme-scale Scientific Software Stack,” https://e4s-project.github.io/.

[52].   National Center for Supercomputing Applications, “About Us,” https://www.ncsa.illinois.edu/about/.

[53].   Stephen Ezell, phone interview with Bob Sorensen, senior vice president of research, Hyperion Research, August 16, 2022.

[54].   Joseph et al., “Hyperion Research ISC22 Market Update,” 10.

[55].   Joseph et al., “Hyperion Research ISC22 Market Update,” 10.

[56].   “Global Supercomputers Market to Reach US$14 Billion by the Year 2026,” Global Newswire, January 20, 2022, https://www.globenewswire.com/news-release/2022/01/20/2370042/0/en/Global-Supercomputers-Market-to-Reach-US-14-Billion-by-the-Year-2026.html.

[57].   Michael Garrett, “Testimony of Michael Garrett, Director, Airplane Performance, Boeing Commercial Airplanes,” Testimony Before the United States Senate Committee on Commerce, Science, and Transportation, Subcommittee on Technology, Innovation, and Competitiveness, July 19, 2006, https://www.commerce.senate.gov/services/files/CDB03F1E-6BB3-4B23-87DA-AEDDFC2137F5.

[58].   Jim Glidewell, “Boeing, HPC, and PBS Pro -- Twelve Years and Counting,” October 2, 2012, https://www.youtube.com/watch?v=LHpmtEh-Elk.

[59].   The Council on Competitiveness, “Case Study: Boeing Catches a Lift with High Performance Computing” (The Council on Competitiveness, 2009), 3, http://science.energy.gov/~/media/ascr/pdf/benefits/Hpc_boeing_072809_a.pdf; Earl C. Joseph, Chirag Dekate, and Steve Conway, “Real-World Examples of Supercomputers Used for Economic and Societal Benefits: A Prelude to What the Exascale Era Can Provide” (IDC, May 2014), 20, http://casc.org/wp-content/uploads/2014/07/IDCReportRealWorldExamplesOfBenefitsOfSupercomputers.pdf.

[60].   Hugh Morris, “Airline weight reduction to save fuel: The crazy ways airlines save weight on planes,” Traveller, September 4, 2018, https://www.traveller.com.au/airline-weight-reduction-to-save-fuel-the-crazy-ways-airlines-save-weight-on-planes-h14vlh.

[62].   Ibid.

[63].   Exascale Computing Project, “Exascale Computing and the Impact to The Boeing Company,” October 11, 2020, https://www.youtube.com/watch?v=k7wiKvg7N4E.

[64].   J. Slotnick et al., “CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences” (NASA, March 1, 2014), 1, https://ntrs.nasa.gov/citations/20140003093.

[65].   Ibid., 20.

[66].   Exascale Computing Project, “Exascale Computing and the Impact to The Boeing Company.”

[67].   Garrett, “Testimony of Michael Garrett, Director, Airplane Performance, Boeing Commercial Airplanes,” 7.

[68].   Exascale Computing Project, “Exascale Computing and the Impact to The Boeing Company”; Dassault Systèmes, Spatial Corp., “What Is Mesh Generation,” https://www.spatial.com/resources/glossary/what-is-meshing.

[69].   Garrett Reim, “USAF’s digitally engineered aircraft to receive ‘e’ prefix, starting with Boeing eT-7A,” Flight Global, September 14, 2020, https://www.flightglobal.com/fixed-wing/usafs-digitally-engineered-aircraft-to-receive-e-prefix-starting-with-boeing-et-7a/140165.article.

[70].   Exascale Computing Project, “Exascale Computing and the Impact to The Boeing Company.”

[71].   Reim, “USAF’s digitally engineered aircraft to receive ‘e’ prefix, starting with Boeing eT-7A.”

[72].   Oliver Peckham, “GE Research Enters the Exascale Era,” HPC Wire, July 28, 2022, https://www.hpcwire.com/2022/07/28/ge-research-enters-the-exascale-era/.

[73].   Photo courtesy of GE; Richard Arthur, “GE Collaborations With DOE at Exascale” (presentation at USDOE/Office of Science - Advanced Scientific Computing Research Advisory Committee - July 2022 Meeting - Day One), https://www.youtube.com/watch?v=tdMQ3VLcVno&t=7953s.

[74].   Peckham, “GE Research Enters the Exascale Era”; Arthur, “GE Collaborations With DOE at Exascale.”

[75].   Arthur, “GE Collaborations With DOE at Exascale.”

[76].   Ibid.

[77].   Earl Joseph, Steve Conway, and Bob Sorensen, “Real-World Examples of Supercomputers Used for Economic and Societal Benefits: A Prelude to What the Exascale Era Can Provide” (Hyperion Research, March 2017), https://www.hpcuserforum.com/wp-content/uploads/2022/02/Hyperion-Research-Benefits-of-Supercomputers_2017.pdf.

[78].   Ibid.

[79].   DOE EERE, “High Performance Computing for Manufacturing,” 42.

[80].   Ibid.

[81].   Alyssa Altman, “CASE mobility reliant on high performance computing,” Automotive World, March 29, 2021, https://www.automotiveworld.com/articles/case-mobility-reliant-on-high-performance-computing/.

[82].   Joseph et al., “The Economic and Societal Benefits of Linux Supercomputers,” 13.

[83].   Oak Ridge National Laboratory, “ORNL’s Jaguar Helps BMI Win Award, Nation Save Fuel,” news release, February 9, 2011, https://www.newswise.com/articles/ornl-s-jaguar-helps-bmi-win-award-nation-save-fuel.

[84].   Ibid.

[85].   Ibid.

[86].   Altman, “CASE mobility reliant on high performance computing.”

[87].   Colin Cunliff, Ashley Johnson, and Hodan Omaar, “How Congress and the Biden Administration Could Jumpstart Smart Cities With AI” (ITIF, March 2021), https://itif.org/publications/2021/03/01/how-congress-and-biden-administration-could-jumpstart-smart-cities-ai/.

[88].   Altman, “CASE mobility reliant on high performance computing.”

[89].   University of Michigan, Robotics, “Autonomous & Connected Vehicles,” https://robotics.umich.edu/research/focus-areas/autonomous-connected-vehicles/.

[90].   Ezell and Atkinson, “The Vital Importance of High-Performance Computing to U.S. Competitiveness,” 19–20.

[91].   Stephen Ezell, conversation with Alison Main, senior director R&D, P&G, August 12, 2022.

[92].   Ibid.

[93].   Oak Ridge National Laboratory, “Oak Ridge Supercomputer Turns the Tide for Consumer Products Research,” news release, August 15, 2014, https://www.olcf.ornl.gov/2014/08/25/oak-ridge-supercomputer-turns-the-tide-for-consumer-products-research/.

[94].   DOE EERE, “High Performance Computing for Manufacturing,” 34.

[95].   Ibid.

[96].   Michael Feldman, “Procter & Gamble Turns to Supercomputing for Paper Product Design,” Top500.org, October 9, 2017, https://www.top500.org/news/procter-and-gamble-turns-to-supercomputing-for-paper-product-design/.

[97].   DOE EERE, “High Performance Computing for Manufacturing,” 34.

[98].   Stephen Ezell, conversation with Alison Main, senior director R&D, P&G, August 12, 2022.

[99].   W.F. Drew Bennett et al., “Converting Coarse Grained Molecular Dynamics Structures to Atomistic Resolution for Multiscale Modelling” (forthcoming paper submitted to the Journal of Chemical Theory and Computation).

[100]. W.F. Drew Bennett et al., “Bacterial Membranes Are More Perturbed by the Asymmetric Versus Symmetric Loading of Amphiphilic Molecules,” Membranes Vol. 12, Issue 4 (2022), 350, https://doi.org/10.3390/membranes12040350.

[101]. Stephen Ezell, conversation with Alison Main, senior director R&D, P&G, August 12, 2022.

[102]. United States Government Accountability Office (GAO), “Artificial Intelligence in Healthcare: Benefits and Challenges of Machine Learning in Drug Development” (GAO, December 2019), 56, https://www.gao.gov/assets/gao-20-215sp.pdf.

[103]. Oliver Peckham, “Global Supercomputing Is Mobilizing Against COVID-19,” HPC Wire, March 12, 2020, https://www.hpcwire.com/2020/03/12/global-supercomputing-is-mobilizing-against-covid-19/.

[104]. Representative Chuck Fleischmann, “America must continue to advance high-performance computing,” The Hill, June 25, 2020, https://thehill.com/blogs/congress-blog/technology/504570-america-must-continue-to-advance-high-performance-computing/; Oak Ridge National Laboratory, “ORNL is in the fight against COVID-19,” news release, April 15, 2020, https://www.ornl.gov/news/ornl-fight-against-covid-19.

[105]. Ibid.

[106]. James Brase et al., “The COVID-19 High Performance Computing Consortium,” Computing in Science & Engineering Vol. 24, No. 1 (Jan.–Feb. 2022): 78–85, https://ieeexplore.ieee.org/document/9734778. See also version of report at: https://s3.us-south.cloud-object-storage.appdomain.cloud/covid-19-hpc-object-storage-production/Consortium_Overview_Paper_03_2022_1f72939a70.

[107]. “The COVID-19 High Performance Computing Consortium,” COVID-19 HPC Consortium, https://covid19-hpc-consortium.org/.

[108]. Brase et al., “The COVID-19 High Performance Computing Consortium,” 8.

[109]. Ibid.

[110]. Ibid.

[111]. Oliver Peckham, “Pfizer Discusses Use of Supercomputing and AI for Covid Drug Development,” HPC Wire, March 24, 2022, https://www.hpcwire.com/2022/03/24/pfizer-discusses-use-of-supercomputing-and-ai-for-covid-drug-development/.

[112]. Stephen Ezell, phone call with Vassilios Pantazopoulos, head of scientific computing and HPC, Pfizer, September 8, 2022.

[113]. Peckham, “Pfizer Discusses Use of Supercomputing and AI for Covid Drug Development.”

[114]. Davide Ravera, “Pfizer: New Potential Billion-Dollar Drugs Under Development,” Seeking Alpha, July 9, 2022, https://seekingalpha.com/article/4522538-pfizer-new-potential-billion-dollar-drugs-under-development.

[115]. Subcommittee on Future Advanced Computing Ecosystem, Committee on Technology of the National Science and Technology Council (NSTC), “Pioneering the Future Advanced Computing Ecosystem: A Strategic Plan” (NSTC, November 2020), 1, https://www.nitrd.gov/pubs/Future-Advanced-Computing-Ecosystem-Strategic-Plan-Nov-2020.pdf.

[116]. Tanmoy Bhattacharya et al., “AI Meets Exascale Computing: Advancing Cancer Research With Large-Scale High Performance Computing,” Frontiers in Oncology Vol. 9, No. 984 (October 2019), 2, https://www.frontiersin.org/articles/10.3389/fonc.2019.00984/full.

[117]. Ibid., 4.

[118]. Ibid.

[119]. NSTC, “Pioneering the Future Advanced Computing Ecosystem: A Strategic Plan,” 1.

[120]. Bhattacharya et al., “AI Meets Exascale Computing,” 4.

[121]. Ibid.

[122]. Ibid.

[123]. Ibid.

[124]. Ibid., 5.

[125]. Ibid.

[126]. Adams B. Nager and Robert D. Atkinson, “A Trillion-Dollar Opportunity: How Brain Research Can Drive Health and Prosperity” (ITIF, July 2016), https://www2.itif.org/2016-trillion-dollar-opportunity.pdf.

[127]. The Alzheimer’s Association, “Changing the Trajectory of Alzheimer’s Disease” (2015), http://www.alz.org/documents_custom/trajectory.pdf.

[128]. Joseph, Conway, and Sorensen, “Real-World Examples of Supercomputers Used for Economic and Societal Benefits,” 5–6.

[129]. Kimberly Mann Bruch, “Supercomputers Help Accelerate Alzheimer’s Research,” news release, UC San Diego, March 16, 2021, https://ucsdnews.ucsd.edu/pressrelease/supercomputers-help-accelerate-alzheimers-research.

[130]. Ibid.

[131]. Mark Johnson and Kathleen Gallagher, “One in a billion: A 4-year-old is plagued by a mysterious, relentless disease. His genome might hold clues,” Milwaukee Journal Sentinel, September 29, 2010, https://www.jsonline.com/story/archives/2020/09/29/one-billion-baffling-illness/1366924001/.

[132]. Mark Johnson and Kathleen Gallagher, “One in a billion: Researchers seek clues in Nicholas' DNA—and find more than they expected,” Milwaukee Journal Sentinel, September 29, 2010, https://www.jsonline.com/story/archives/2020/09/29/one-billion-sifting-through-dna-haystack/1367030001/.

[133]. Mark Johnson and Kathleen Gallagher, “Sifting through the DNA haystack,” Milwaukee Journal Sentinel, December 21, 2010.

[134]. Mark Johnson and Kathleen Gallagher, “One in a billion: Armed with a mysterious answer, Nicholas' doctors and parents weigh a risky treatment,” Milwaukee Journal Sentinel, September 29, 2010, https://www.jsonline.com/story/archives/2020/09/30/armed-with-a-mysterious-answer-nicholas-doctors-and-parents-weigh-a-risky-treatment/1367092001/.

[135]. The University of Texas at Austin, “Supercomputing Helps Deepen Understanding of Life,” news release, May 15, 2015, https://cns.utexas.edu/news/supercomputing-helps-deepen-understanding-of-life.

[136]. Ibid.; Illumina, “Innovation at Illumina: The road to the $600 human genome,” Nature, https://www.nature.com/articles/d42473-021-00030-9.

[137]. Exascale Computing Project, “About ExaWind,” https://www.exascaleproject.org/research-project/exawind/.

[138]. Ibid.

[139]. Stephen Ezell, conversation with Rick Arthur, senior director for advanced computational methods research at GE Research, August 25, 2022.

[140]. Exascale Computing Project, “About ExaWind.”

[141]. GE, “GE Research Uses Summit Supercomputer for Groundbreaking Study on Wind Power,” news release, August 5, 2020, https://www.ge.com/news/press-releases/ge-research-uses-summit-supercomputer-groundbreaking-study-wind-power.

[142]. Ibid.

[143]. Ibid.

[144]. Ibid.

[145]. Arthur, “GE Collaborations With DOE at Exascale.”

[146]. Tess Boissonneault, “GE Research gains access to ORNL supercomputer to optimize jet engines,” 3D Printing Media Network, February 20, 2020, https://www.3dprintingmedia.network/ge-research-ornl-supercomputer-jet-engine-efficiency.

[147]. Ibid.

[148]. Arthur, “GE Collaborations With DOE at Exascale.”

[149]. Exascale Computing Project, “About ExaSGD,” https://www.exascaleproject.org/research-project/exasgd/.

[150]. Francis Alexander et al., “Exascale applications: skin in the game,” Philosophical Transactions of the Royal Society A Vol. 378 (January 2020), 22, https://royalsocietypublishing.org/doi/10.1098/rsta.2019.0056.

[151]. Mihai Anitescu, “Making a Smart Electric Power Grid,” Argonne National Laboratory, May 2, 2016, https://www.anl.gov/mcs/article/making-a-smart-electric-power-grid.

[152]. Meredith Roaten, “Lab Powers Up to Plug In Next-Gen Supercomputers,” National Defense, July 26, 2022, https://www.nationaldefensemagazine.org/articles/2022/7/26/lab-powers-up-to-plug-in-next-gen-supercomputers.

[153]. Ibid.

[154]. David E. Hoffman, “Supercomputers Offer Tools for Nuclear Testing—and Solving Nuclear Mysteries,” The Washington Post, November 1, 2011, https://www.washingtonpost.com/national/national-security/supercomputers-offer-tools-for-nuclear-testing--and-solving-nuclear-mysteries/2011/10/03/gIQAjnngdM_story.html.

[155]. Ibid.

[156]. Roaten, “Lab Powers Up to Plug In Next-Gen Supercomputers.”

[157]. National Aeronautics and Space Administration (NASA), “Supercomputing the Climate,” June 2, 2010, https://svs.gsfc.nasa.gov/vis/a010000/a010500/a010563/index.html.

[158]. Stephen Ezell, conversation with Ilene Carpenter, HPE Earth sciences segment manager, August 12, 2022.

[159]. National Oceanic and Atmospheric Administration Science Advisory Board, “A Report on Priorities for Weather Research” (NOAA Science Advisory Board, December 2021), 6, https://sab.noaa.gov/wp-content/uploads/2021/12/PWR-Report_Final_12-9-21.pdf.

[160]. Ibid., 72.

[161]. National Oceanic and Atmospheric Administration (NOAA), “U.S. supercomputers for weather and climate forecasts get major bump,” news release, June 28, 2022, https://www.noaa.gov/news-release/us-supercomputers-for-weather-and-climate-forecasts-get-major-bump.

[162]. Ibid.

[163]. Ibid.

[164]. NOAA Science Advisory Board, “A Report on Priorities for Weather Research,” 5.

[165]. Linda Poon, “D.C.’s ‘Historic’ Flash Flood May Soon Be Normal,” Bloomberg, July 10, 2019, https://www.bloomberg.com/news/articles/2019-07-10/lessons-from-a-historic-storm-that-flooded-d-c.

[166]. Ibid.

[167]. José Graziano Da Silva, “Feeding the World Sustainably,” UN Chronicle No. 1 & 2, Vol. XLIX (June 2012), https://www.un.org/en/chronicle/article/feeding-world-sustainably.

[168]. “Accelerating global agricultural productivity growth is critical,” Science Daily, October 16, 2019, https://www.sciencedaily.com/releases/2019/10/191016074750.htm.

[169]. Mohammad Ayoub Khan, Rijwan Khan, and Mohammad Aslam Ansari, “Chapter 6: Intelligent farming system through weather forecast support and crop production” in Application of Machine Learning in Agriculture (Academic Press, 2022): 113–130, https://www.sciencedirect.com/science/article/pii/B9780323905503000096.

[170]. University of Texas at Austin (UT), Texas Advanced Computing Center, “Supercomputers help scientists improve seismic forecasts for California,” Science Daily, October 24, 2017, https://www.sciencedaily.com/releases/2017/10/171024133733.htm.

[171]. Los Alamos National Laboratory, “High-performance computing makes national security possible.”

[172]. UT Texas Advanced Computing Center, “Supercomputers help scientists improve seismic forecasts for California.”

[173]. Ibid.

[174]. Margaret Crable, “Computer wizardry gives earthquake researchers deeper insight into big quakes and the motion they generate,” USC Dornsife College of Letters, Arts and Sciences, January 10, 2022, https://dornsife.usc.edu/news/stories/3614/high-tech-earthquake-research/; SCEC, “3D Simulation of Hypothetical M7.6 Earthquake, Rupture Propagation and Wavefield at Surface,” https://www.youtube.com/watch?v=_belQwGNolY.

[175]. Joseph et al., “The Economic and Societal Benefits of Linux Supercomputers,” 13.

[176]. Stephen Ezell and Stefan Koester, “Three Cheers for the CHIPS and Science Act of 2022! Now, Let’s Get Back to Work,” Innovation Files, July 29, 2022, https://itif.org/publications/2022/07/29/three-cheers-for-the-chips-and-science-act-of-2022-now-lets-get-back-to-work/.

[178]. Ibid., 21.

[179]. National Science Foundation, “Regional Innovation Engines,” https://beta.nsf.gov/funding/initiatives/regional-innovation-engines.

[180]. Robert D. Atkinson, Mark Muro, and Jacob Whiton, “The Case for Growth Centers: How to Spread Tech Innovation Across America” (ITIF, December 2019), https://itif.org/publications/2019/12/09/case-growth-centers-how-spread-tech-innovation-across-america/.

[181]. Stephen Ezell and Stefan Koester, “Three Cheers for the CHIPS and Science Act of 2022! Now, Let’s Get Back to Work,” Innovation Files, July 29, 2022, https://itif.org/publications/2022/07/29/three-cheers-for-the-chips-and-science-act-of-2022-now-lets-get-back-to-work/.

[182]. Stephen Ezell, “An Allied Approach to Semiconductor Leadership” (ITIF, September 2020), https://itif.org/publications/2020/09/17/allied-approach-semiconductor-leadership/.

[183]. Stephen Ezell, interview with Berardino Baratta, CEO of MxD, the Digital Manufacturing & Cybersecurity Institute, August 29, 2022.

[184]. Mark Muro, “Get with the Program: Digitalizing America’s Advanced Manufacturing Sector” (PowerPoint presentation, Investing in Manufacturing Communities Partnership Summit, Washington, D.C., February 8, 2018), 8.

[185]. Elizabeth Redden, “Foreign Students and Graduate STEM Enrollment,” Inside Higher Ed, October 11, 2017, https://www.insidehighered.com/quicktakes/2017/10/11/foreign-students-and-graduate-stem-enrollment.

[186]. Stephen Ezell, “Assessing the State of Digital Skills in the U.S. Economy” (ITIF, November 2021), https://itif.org/publications/2021/11/29/assessing-state-digital-skills-us-economy/.

[187]. Stephen Ezell, “Going, Going, Gone? To Stay Competitive in Biopharmaceuticals, America Must Learn From Its Semiconductor Mistakes” (ITIF, November 2021), https://itif.org/publications/2021/11/22/going-going-gone-stay-competitive-biopharmaceuticals-america-must-learn-its/.
