Solving the Surveyor’s Dilemma: Estimating Future Returns From Innovation Program Investments

Robin Gaster May 4, 2020
As policymakers consider the value of small business innovation programs, they should understand that their returns on investment are considerably larger than conventional estimates indicate.

Introduction

Measuring ROI With Surveys Creates a “Snapshot” Problem

New Data Offers a Better View

What the New Data Implies

Endnotes

Introduction

In the United States and elsewhere, government programs have emerged to fill gaps left in the innovation ecosystem by market failures. In particular, small, innovative businesses may find that their projects do not fit the profile or the timelines required for venture capital funding. Their markets may be too small or too difficult to conquer, or the expected time to payback and success may be too long.

Support from programs like the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs plays an important role in filling these financing gaps.[1] Offering more than $3 billion annually in nondilutive funding for small businesses, these programs have funded some notable successes, including Qualcomm, iRobot, and Illumina. Thousands of less-known companies have benefited as well.

Understandably, Congress has tried to make sure that funding is being well spent. A long series of National Academies reports generally established that the programs are well-run and that they offer significant benefits to companies, as well as a return on the government’s investment in the form of new commercial revenues. Those revenues then turn into new jobs and further company growth—sometimes very substantial and long-lasting growth.

Measuring ROI With Surveys Creates a “Snapshot” Problem

Because companies do not report back to agency funders on a consistent basis and over a long enough period of time, the preferred methodology of assessing outcomes is through wide-scale surveys. This methodology was pioneered by the Government Accountability Office (GAO), adopted and enhanced by the National Academies, and most widely deployed by TechLink in a series of surveys for the Department of Defense and its agencies.

Surveys are undoubtedly the best mechanism for identifying program outcomes in detail, although they can also be supplemented by other approaches—for example, using administrative data from the Census Bureau. But surveys have a number of flaws, which are described in the National Academies studies.[2]

Surveys measuring outcomes for innovation programs like SBIR/STTR face one particularly challenging problem. Surveys are essentially a snapshot. They capture outcomes at the time of the survey. What they do not capture is outcomes that occur after the survey ends. For some kinds of programs, that’s not a problem. But it’s a major challenge for evaluating innovation programs, where the time to market is often long, and the lag even after that can be substantial before a product cycle peaks. In fact, it can be decades before a product is completely outmoded. As a result, innovation outcome surveys substantially under-report total commercialization, but to an unknown degree.

New Data Offers a Better View

Major studies have acknowledged this problem, but they have nonetheless persisted in using surveys to assess outcomes in the absence of better data. Thus the National Academies, for example, clearly identified this problem—before going on to use those results as the best available data.[3]

However, a set of survey data collected by TechLink for DOD in 2017–2018 offers a new opportunity to address this problem. The size and extent of the new data makes it possible to generate statistically sound estimates for future project outcomes, even though those outcomes have not yet arrived. As a result, we can generate at least a preliminary estimate of the extent to which surveys under-represent commercial outcomes for SBIR/STTR, and by extension for similar innovation programs as well. That research is presented more fully in a recent paper and is summarized here.[4]


The TechLink surveys used substantially enhanced resources from DOD and its agencies to build a data set that encompassed almost all Phase II SBIR/STTR projects funded by the U.S. Navy and Air Force between 2000 and 2013. TechLink acquired about 6,700 survey responses against a universe of 7,216 funded projects—a response rate of 93 percent.

This large sample made it possible to develop a picture of the revenue life cycle for projects. Of course, even such a large sample must still be treated with some caution—the results below may need adjustment as additional data becomes available. Nonetheless, developing a preliminary estimate for median project outcomes over the product life cycle is an important step forward, because it allows us to develop estimates for the future revenues to be generated by products that had not completed their life cycles at the date the survey was administered.

The missing revenues emerge from two distinct sources. First, there are projects that have not exhausted their life cycle at the time of the survey. Indeed, some may have only just reached the market, and are still in the growth phase typical for a recent market entrant. Others may be older but still generating substantial revenue growth. Second, there are projects that have not reached the market at all, but that will do so at some point after the date of the survey.

To estimate missing sales for projects that were funded more recently and have already reached the market, we developed an estimate of median sales for each elapsed year after the conclusion of the SBIR/STTR award. Unsurprisingly, we found that median returns from older projects were indeed much larger than those from more recent projects. In fact, at the median, Air Force and Navy SBIR/STTR contracts completed 16 years before the date of the survey generated more than five times as much commercial revenue as contracts ending three years before. The predicted median sales per project with sales, derived from our regression model, are shown in figure 1.

Figure 1: Predicted sales by elapsed year, estimated from regression model


Using this analysis as a baseline, we could then estimate the percentage of total revenues not captured by the survey, by elapsed year. For projects surveyed three years after the conclusion of their SBIR/STTR awards, 72.8 percent of revenues were not captured by the survey (see figure 2).[5]

Figure 2: Percentage of total estimated sales not captured by survey (estimated from regression model)

Applying these estimates to the data collected by TechLink, we can estimate how much of the total return from the program was not captured by the survey data. Overall, across 2000–2013, we estimated that about $13.1 billion in company revenues was not captured, amounting to 44 percent of estimated total revenues over the product life cycle.[6]
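The snapshot problem for projects already in the market can be illustrated with a toy revenue profile. The `revenue_profile` values and the `missing_share` helper below are hypothetical placeholders; the report's actual figures (such as the 72.8 percent for elapsed year three) come from the regression estimates, not this sketch.

```python
# Hypothetical per-year revenue path (in $ millions) for a single
# project over a 14-year product life cycle.
revenue_profile = [0.1, 0.3, 0.6, 1.0, 1.4, 1.7, 1.9, 2.0,
                   1.9, 1.7, 1.4, 1.0, 0.6, 0.3]

def missing_share(profile, elapsed_years):
    """Share of lifetime revenue a survey at `elapsed_years` misses."""
    lifetime = sum(profile)
    captured = sum(profile[:elapsed_years])
    return 1 - captured / lifetime

# A survey taken three years out misses most of the lifetime revenue;
# one taken at year 14 misses none of it (within this 14-year window).
print(missing_share(revenue_profile, 3))
print(missing_share(revenue_profile, 14))
```

The key property, visible even in this toy version, is that the missing share shrinks monotonically as the survey moves later in the life cycle.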

Turning now to the second source of missing data, we used a similar approach to estimate the share of projects that had not reached the market, but would eventually do so. For these projects, the returns captured by the survey were of course zero.

Again, we used the 14-year time period covered by the TechLink surveys to estimate the likelihood that projects would at some point reach the market. Using the oldest projects in the data set as a baseline, we estimated that approximately 35 percent of projects will eventually generate some sales (see figure 3). We used standard statistical techniques to predict the number of projects that will eventually achieve sales greater than $1,000, by elapsed year. This estimate is roughly in line with those produced for DOD as a whole by the National Academies.

Figure 3: Percent of projects reporting sales

Again, more recent projects were less likely to have reached the market, and hence to have any outcomes captured by the survey data. Only 17 percent of projects in elapsed year three had reached the market, compared with 35 percent in year 16.

To estimate the additional revenues from these projects, we performed a two-step calculation: First, we estimated the number of additional projects that would eventually reach the market, for each elapsed year, using statistical tools to smooth the pathway to the market over time (see predicted sales in figure 3 above). Then we applied those additional projects to the estimated total revenues to be generated from that elapsed year. The first analytic step indicated that an additional 14 percent of projects that were not reporting sales at the time of the survey would eventually reach the market. The second showed that these additional projects can be expected to generate an additional $1.4 billion in sales over their product life cycles.
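The two-step calculation can be sketched as a simple function. The function name and the input values in the usage line are hypothetical placeholders, not the report's actual parameters; the real inputs are the estimated 14 percent of additional projects expected to commercialize and the estimated lifetime revenue per commercializing project.

```python
def future_entrant_revenue(n_projects, reported_share, eventual_share,
                           lifetime_revenue_per_project):
    # Step 1: projects expected to reach the market after the survey.
    additional_projects = n_projects * (eventual_share - reported_share)
    # Step 2: apply estimated lifetime revenue to those projects.
    return additional_projects * lifetime_revenue_per_project

# Hypothetical usage: 7,216 projects, 21 percent reporting sales at
# survey time, 35 percent eventually commercializing, $1.4M lifetime
# revenue per additional project (all placeholder values).
estimate = future_entrant_revenue(7216, 0.21, 0.35, 1.4)
```

The structure, rather than the placeholder numbers, is the point: the revenue from future market entrants is the product of a project count and a per-project lifetime estimate.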


Together, the missing sales from projects already in the market and the sales from projects that will at some point reach the market provide a preliminary estimate of how much innovation surveys under-report commercial outcomes. Standard methodologies as used in the original TechLink reports captured $17 billion out of a total of $31.5 billion in estimated total project revenues; the remaining $14.5 billion, or 46 percent of the total value, was not captured by the surveys.
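The bottom-line arithmetic, using the figures quoted above:

```python
captured = 17.0   # $ billions captured by the survey methodology
total = 31.5      # $ billions, estimated lifetime total
missing = total - captured
share_missing = missing / total
print(missing)                     # 14.5
print(round(share_missing * 100))  # 46
```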

What the New Data Implies

This has obvious implications. To take only one example, the TechLink report used IMPLAN economic modeling software to develop a return-on-investment estimate for the federal government's SBIR/STTR investments. Plugging the revised commercial revenues from the methodology described above into the IMPLAN model showed that the already-positive impact of these programs was significantly greater than had been reported: The return on investment (ROI), defined as the ratio of eventual commercial returns to companies to the funding provided, was in fact 22:1, compared with the previous estimate of 15:1. The estimated employment effect was 50 percent greater as well. In other words, as Congress and the administration consider these small business innovation programs, they should understand that the actual impacts are even larger than conventional estimates indicate.

About the Author

Dr. Gaster is president of Incumetrics Inc. and a visiting scholar at George Washington University. He is currently working on a book about Amazon for publication early in 2021 and is editor of the Great Disruption blog.

Between 2004 and 2017, Dr. Gaster was lead researcher on the National Academies multi-volume study of Small Business Innovation Research awards. Dr. Gaster received a Ph.D. from UC Berkeley in 1985, an M.A. from the University of Kent (U.K.) in 1978, and a B.A. from Oxford University (U.K.) in 1976. His doctoral thesis won a national academic prize. He also won a congressional fellowship at the Office of Technology Assessment, and has been a fellow at the Economic Strategy Institute.

About ITIF

The Information Technology and Innovation Foundation (ITIF) is a nonprofit, nonpartisan research and educational institute focusing on the intersection of technological innovation and public policy. Recognized as the world’s leading science and technology think tank, ITIF’s mission is to formulate and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress.

For more information, visit us at www.itif.org.

Endnotes

[1]SBIR is the Small Business Innovation Research Program; STTR is the Small Business Technology Transfer program. While there are 12 funding agencies in the Federal government, more than 97 percent of funding comes from the five major research agencies: DOD, NIH, NASA, NSF, and DOE. Total funding in 2017 was $2.67 billion for SBIR and $367 million for STTR.

[2]See, for example, SBIR at the National Institutes of Health (Washington, DC: National Academies Press, 2015), 290–291.

[3]See, for example, SBIR at the National Institutes of Health (Washington, DC: National Academies Press, 2015), 291.

[4]Robin Gaster, Will Swearingen, Jeff Peterson, and Michael Wallner, “Estimating outcomes and impacts from innovation programs: the case of Navy and Air Force SBIR/STTR programs,” Technology and Innovation, vol. 12, 2019.

[5]The paper discusses in more detail the variations in outcomes by elapsed year, and the probabilities associated with the outcomes used in this analysis.

[6]Even this estimate understates the total by an unknown amount. Some projects will continue to generate revenues after 14 elapsed years, the limit of the analysis we could complete using available data. Those additional revenues are not included here.

KEY TAKEAWAYS

It is well-established that innovation-support programs like SBIR/STTR offer significant benefits to companies and a return on the government’s investment in the form of new commercial revenues. But conventional evaluations are imprecise.
Surveys face one particularly challenging problem: They are essentially snapshots, capturing outcomes at the time they are taken. They don’t capture outcomes afterward. But new data offers a new opportunity to address this issue.
The new data suggest that the positive impact of the SBIR/STTR programs is significantly greater than has been reported—and that the return on investment was in fact 22:1 compared with the previous estimate of 15:1.
These results can be applied to other evaluations of innovation programs where surveys are used to estimate outcomes.