Studies Show AI Triggers Delirium in Leading Experts

October 15, 2024

In the movie Dead Poets Society, Robin Williams, playing an earnestly idealistic English teacher, tells his eager class, “No matter what anybody tells you, words and ideas can change the world.”

If he had been playing a gruff history teacher, he might have added that bad ideas based on faulty logic also can change the world, for the worse. And if the movie had been set in the future, he might have pointed to our own time as a prime example, because Europe, the Commonwealth nations, and the United States are now enthralled with a big, bad idea that has been solemnly endorsed by a growing number of purported experts: Absent major policy intervention, they tell us, artificial intelligence (AI) will drive up income inequality to stratospheric levels.

This idea has now become an article of faith among many AI scientists, economists, pundits, and policymakers. Take, for example, an IMF article by Erik Brynjolfsson of Stanford’s ironically named Institute for Human-Centered AI. (What else would AI be but human-centered, given that it is developed and used by humans?) Brynjolfsson and coauthor Gabriel Unger argue:

in this higher-inequality future, as AI substitutes for high- or decently paying jobs, more workers are relegated to low-paying service jobs—such as hospital orderlies, nannies, and doormen—where some human presence is intrinsically valued and the pay is so low that businesses cannot justify the cost of a big technological investment to replace them.

This core idea is becoming de rigueur in Western intellectual circles, even though the underlying arguments supporting it are fallacious to the point of absurdity. Yet so anxious are people to believe this narrative that they don’t even question the premise. They should, though, because it is not just wrong, but dangerous in its implications.

One reason AI fearmongering is especially appealing (and therefore generates attention for those who fan the flames) is that it compounds the common but faulty narrative that income inequality for workers has been rising dramatically even without AI. Either way, the story is the product of lazy, first-order thinking. Of course, a company that adopts AI or any other technology to increase efficiency or reduce costs will often find that it can produce more output with the same number of workers or fewer. And that is as far as the superficial AI fearmongering goes: Automation leads to fewer jobs!

Except, it doesn’t. Yes, when technology boosts labor productivity in a particular occupation, fewer workers might be needed in those jobs. But because that product or service will now cost less, consumers will save money, which they will spend on other things. If making a car is cheaper because of robotics, people might spend the savings on home improvement or any number of other things. Likewise, if a law firm uses AI to boost its productivity and therefore employs fewer legal assistants, then legal services will cost less, and people will be able to spend those savings on things like going out to dinner. These shifts in demand create new job opportunities in a pattern that has repeated itself at least since the emergence of agriculture.
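
To make the re-spending mechanism concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers are hypothetical illustrations, not estimates; the point is simply that the money consumers save on a cheaper service does not vanish but reappears as demand, and therefore jobs, elsewhere.

```python
# Hypothetical numbers, purely to illustrate the re-spending argument above.
legal_spend_before = 1_000.0   # what a household spends on legal services per year
productivity_gain = 0.30       # assume AI lets the firm deliver the same service 30% cheaper

legal_spend_after = legal_spend_before * (1 - productivity_gain)
savings = legal_spend_before - legal_spend_after  # money freed up for other purchases

# The household still buys the same legal services, and now also spends the
# savings on other things (restaurant meals, home improvement, etc.).
total_demand = legal_spend_after + savings

print(f"Spending shifted to other goods and services: ${savings:.0f}")
print(f"Total demand: ${total_demand:.0f} (unchanged; it is reallocated, not destroyed)")
```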

With that in mind, there are a half-dozen or more logical fallacies responsible for stoking today’s fears about AI and the economy.

1. The Fallacy That Automating High-Wage Jobs Will Create a Lumpenproletariat

As Brookings analyst Sam Manning wrote in a recent commentary piece forecasting AI’s impact on inequality:

In sectors where AI automation significantly reduces production costs, businesses may choose to reduce their workforce if consumer demand for their products or services doesn’t increase enough to offset the productivity gains. This could lead to job losses and lower wages in affected industries.

This is a prime example of first-order thinking. Most products and services do not have very high elasticity of demand, meaning that if costs go down, demand does not go up enough to reemploy all the excess workers. But that doesn’t mean these workers are permanently out of work. They do something totally unexpected. They look at the help wanted ads and find other jobs. This has been happening for centuries. It will continue happening. Moreover, wages won’t go down for the economy overall because the workers who move to new jobs will be working in other industries that need them. In other words, there will be normal levels of employment and no downward pressure on wages.

2. The Fallacy That People in High-Wage Jobs Will Capture a Disproportionate Boost in Productivity and Earnings

As Manning writes: “In the near-term, AI-driven productivity boosts could be skewed towards high-income workers, leaving lower-wage workers behind.”

No. Wages are based principally on the supply of and demand for particular types of workers. This is why LeBron James makes more than a store clerk. It is also why workers in highly automated cigarette factories make about the same as workers in manufacturing industries with lower productivity: there are plenty of workers available to fill those jobs.

Moreover, if AI is mostly used by high-skilled workers, their productivity will increase, but not necessarily their wages. Why would a company pay these workers more than it did before just because they have become more productive? The workers were clearly willing to work there at their current wages. Rational employers pay their workers as little as they can while still paying enough to attract the workers they need. (Employers can’t pay workers 10 cents an hour, for example, because no one would work for so little.)

So where do the productivity gains go if they don’t go to high earners? The answer is either to higher profits or to lower prices. Let’s say a forward-looking insurance company adopts AI before its competitors and enjoys lower costs. It will probably enjoy higher profits temporarily. But as its competitors learn that they too can lower costs with AI, some firm will cut prices to compete better, and then all of them will have to respond in turn. And that, by definition, reduces income inequality, as low-wage workers will be better able to afford insurance, legal services, banking services, and the other outputs that higher-wage knowledge professionals produce.

So, employers don’t keep all the savings they derive from productivity as profits for the same reason they don’t pay workers 10 cents an hour: competition. As evidence, consider that domestic profits as a share of sales have changed very little in the last 50 years.

3. The Corollary That Automation Will Force People Into Dead-End Jobs Leading to No Growth

As Brynjolfsson and Unger write: “Displaced workers might disproportionately end up in even less productive and less dynamic jobs, further muting any aggregate benefit to the long-term productivity growth rate of the economy.”

No. Imagine that AI systems replace stock traders, and the stock traders switch to new jobs building a new kind of furniture. The outcome would be that we have the same amount of stock trading as before, but society also benefits from more furniture (or more of whatever else stock traders switch to: higher education, restaurant meals, etc.). It doesn’t matter whether displaced workers move into highly productive jobs, middle-productivity jobs, or low-productivity jobs. In every case, productivity (output per hour) and output both go up. This is also why the average unemployment rate in the United States has changed little in the last 100 years, even as productivity has grown almost tenfold.
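
A toy calculation makes the point. The figures below are hypothetical, but the logic holds for any positive numbers: as long as displaced workers produce anything at all, total output rises while total human hours stay the same, so economy-wide output per hour rises too.

```python
# Hypothetical illustration: 100 stock traders are displaced by AI and build furniture instead.
workers = 100
hours_per_worker = 2_000

trading_output = 50_000_000    # value of trading services, previously produced by the traders
furniture_per_worker = 80_000  # value each displaced worker adds in a new, lower-paid job

before_output = trading_output
before_hours = workers * hours_per_worker

# After: AI produces the same trading output with no human hours; the same
# workers now spend their hours producing furniture on top of that.
after_output = trading_output + workers * furniture_per_worker
after_hours = workers * hours_per_worker

print(f"Output per hour before: ${before_output / before_hours:,.0f}")
print(f"Output per hour after:  ${after_output / after_hours:,.0f}")  # higher, even though the new jobs pay less
```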

4. The Fallacy That AI Will Boost the Productivity of Low-Wage Jobs, Driving Down the Number of Low-Wage Workers

While some argue that AI will automate high-wage jobs, leading to income inequality, others argue that AI will largely automate low-wage jobs, leaving those people behind, and leading to greater income inequality. Here, the argument is a bit different and is based on a more fundamental fallacy. Brynjolfsson and Unger write, “Technologists and managers design and implement AI to substitute directly for many kinds of human labor, driving down the wages of many workers.”

There are at least two problems with this view. First, let’s assume it is true. The result would, by definition, be fewer low-wage jobs, because AI would make those jobs more productive, allowing the same output with fewer workers. That would reduce income inequality, because a higher share of jobs in the economy would be middle- and high-wage jobs. Consider an economy of 150 million jobs, one-third low-wage, one-third middle-income, and one-third high-wage. Eliminating half of the low-wage jobs would free those workers to move into other occupations, where new job creation would be driven by increased demand as consumers spend the savings they accrue from lower prices for goods and services produced with automation. In the end, there would be only 37.5 million low-wage jobs instead of 50 million, and roughly 56 million middle-income jobs and 56 million high-wage jobs. (The extra middle-income and high-wage jobs would come from the fact that spending would go up, and that spending would go toward items produced by low-wage, middle-wage, and high-wage workers.) Many low-wage U.S. workers are over-skilled and overeducated for their jobs and work in them only because there are not enough middle- and high-wage jobs, so it would be relatively easy for them to move into higher-wage jobs as the supply of those jobs increases. For workers without the skills, that is where government workforce policy comes in.
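
For readers who want to check the arithmetic, here is a minimal sketch that reproduces those figures. The one assumption added here (the text above does not spell it out) is how the 25 million displaced workers are reabsorbed: half into other low-wage occupations and a quarter each into middle- and high-wage occupations, the split consistent with the numbers above.

```python
# Reproduces the job-mix arithmetic above under one explicit reabsorption assumption.
total_jobs = 150_000_000
low = middle = high = total_jobs // 3      # 50 million jobs in each wage tier

displaced = low // 2                       # AI automates half of the low-wage jobs: 25 million workers

# Assumption: re-spent savings recreate jobs for all displaced workers --
# half in other low-wage occupations, a quarter each in middle- and high-wage ones.
low = low - displaced + displaced // 2
middle = middle + displaced // 4
high = high + displaced // 4

print(low / 1e6, middle / 1e6, high / 1e6)   # 37.5, 56.25, 56.25 (millions of jobs)
print(f"Low-wage share falls from 33% to {low / total_jobs:.0%}")
```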

But the AI gloomers will argue, no, these displaced workers will not find new work; they will be part of a new lumpenproletariat left permanently unemployable. There will be so much excess unemployment that employers will bid down wages, and the low-wage workers who remain will be even more impoverished. This is wrong on many counts, starting with the fact that it succumbs to the “lump of labor” fallacy. Job-loss predictions always seem to assume that there is only so much work to do. But this is obviously a false reading of the process of technological change, because it ignores the second-order effects whereby the savings from increased productivity are recycled into the economy as increased demand that, in turn, creates other jobs.

If the price of coffee drops at Starbucks, I don’t bury those savings in the sand; I buy something else with them. Consider an insurance firm that can use AI to handle many customer-service functions that until now were performed by humans. Let’s imagine that the technology is so good that the firm can do the same amount of work with 50 percent less labor. Some workers might take on new tasks, but others might be laid off or lost through attrition. Either way, the company’s insurance services now cost less to provide. If customers can spend less on insurance, they can spend more on other things, like vacations, restaurant meals, or gym memberships.

5. The Fallacy That Without Government Regulation AI Will Not Boost Productivity

This is actually an anti-innovation argument. As Brynjolfsson and Unger write:

The economics of AI may turn out to be of a very narrow labor-saving variety (what Daron Acemoglu and Simon Johnson call a “so-so technology,” such as an automated grocery checkout stand), instead of one that enables workers to do something novel or powerful.

Stephanie Bell, of the Partnership on AI, agrees, writing, “Sadly, there have been plenty of examples of the opposite: automation technologies that eliminate jobs without reducing consumer prices or improving the quality of goods and services.” No, there have not been any examples of this. (Acemoglu’s bugaboo example, self-checkout at grocery stores, has kept grocery prices from rising as fast as they otherwise would have, precisely because stores no longer have to pay for all the checkout workers they used to employ.)

Think about it for a moment. A company could use AI to completely automate a job, generating the same output as before while freeing the worker to do something else, thereby increasing total output even more. Think about it in the context of home appliances. With a washing machine, you not only have the benefit of clean clothes, but also more time available to cook meals. Without the washing machine, you could still have clean clothes (washed by hand), but you would have less time to cook. Alternatively, government could require companies to keep workers in their existing jobs and merely use AI to boost their output. But that second approach is always going to be less productive than the first, so prices would fall less. Imagine if we had just used technology to make it easier for telephone operators to do something novel and powerful, perhaps providing psychotherapy to callers. That would be far less productive than actually replacing them with electrical switches and letting them do something new, like become therapists.

The idea that, without government regulation (presumably to limit automation and self-service), the productivity benefits of AI will be stillborn is, in fact, ludicrous. The exact opposite is true. Automation and so-called “so-so technologies” generate vastly larger productivity gains than using technology merely to nudge up a worker’s output at the margin.

6. The Fallacy That All the Gains From Boosting Output With AI Will Go to Capital

This is one of the most pernicious and illogical ideas of the bunch. In a study published by the IMF, economist Anton Korinek creates a model (based on calculus, so we are led to presume it is correct) in which he suggests that artificial general intelligence (AGI) will quadruple productivity and output in 20 years, increasing U.S. GDP from $25 trillion to $100 trillion. Sounds good, right? Not so fast. Unfortunately, according to his model, total wages will remain stuck at about $18 trillion, because there will be massive unemployment: displaced workers will not be earning anything, and the massive supply of unemployed workers will bid down wages for everyone else.

Korinek assumes that in 20 years there will simply be no more jobs. AI can do everything. It can police the streets, fight fires, defend our nation in wars, educate children, provide massages and physical therapy, repair plumbing, take care of the elderly, and so on. I don’t think so. But let’s say for the sake of argument that AI can do half of all tasks (even this is highly unlikely in 20 years), so GDP doubles. This means that households will have a median income of around $120,000 per year. Does anyone think that Americans will run out of things to buy? Smaller class sizes? A massage every couple of weeks? A nicer, longer vacation, flying first class for the legroom? Higher taxes to pay for urban beautification? Human needs are virtually infinite. Perhaps most people would finally feel they had enough at $1 million a year and stop spending, but even that would require GDP to increase by 20 times, not four.

Okay, but even assuming there is an additional $75 trillion in output, where would it go? It would have to go somewhere. Companies would not dump their output into the ocean or produce services with no customers. It is a rule of economics that output, once produced, must be purchased by someone, leaving aside residual waste. The U.S. Bureau of Economic Analysis’s (BEA) national income accounts provide an answer. GDP can be measured by adding up all expenditures or all income (the two measures equal each other, apart from a small statistical discrepancy). So, let’s look at the income side.

According to BEA, the components of gross domestic income include compensation of employees, taxes on production and imports, net interest, proprietors’ income, rental income, corporate profits, and consumption of fixed capital.
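
Written as an accounting identity (simply restating the component list above, with GDI standing for gross domestic income), every dollar of output must show up in one of these income categories:

```latex
\text{GDP} \;\approx\; \text{GDI}
  \;=\; \text{compensation of employees}
  \;+\; \text{taxes on production and imports}
  \;+\; \text{net interest}
  \;+\; \text{proprietors' income}
  \;+\; \text{rental income}
  \;+\; \text{corporate profits}
  \;+\; \text{consumption of fixed capital}
```

(The approximation holds up to the statistical discrepancy between the expenditure and income measures.)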

Clearly, the additional $75 trillion can’t go to taxes on production and imports. It can’t go to net interest, because, if anything, the massive increase in capital that comes with the expansion of GDP will lead to an excess of savings, which will drive down interest rates. Proprietors’ income is not likely to be the recipient because, according to the author, proprietors are just as likely to be automated as regular workers. It can’t go to rental income, as this would imply more demand for housing, which is not possible if workers are not making any more money. It can’t be consumption of fixed capital, either, because by definition there will be a lot more capital in the economy, and even if AI depreciates quickly, the depreciation won’t happen entirely in year one.

So, all that is left is profits. Ah, that must be the culprit. Those selfish monopolistic capitalists! As Brookings analyst Sam Manning writes: “In the slightly longer term, AI-driven labor automation could increase the share of income going to capital at the expense of the labor share.” But again, a closer look reveals how ludicrous this statement is. Under Korinek’s scenario, profits, which were $3.4 trillion in 2023, would increase to $78 trillion in 2033. Wow. That would mean that companies’ profits as a share of GDP would rise from around 13 percent to nearly 80 percent. This is simply absurd.
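
The arithmetic is easy to check. The profit figures come from the discussion above; the roughly $27 trillion used here for 2023 GDP is an approximation, not a figure from the text.

```python
# Profit share of GDP implied by the Korinek-style scenario discussed above.
profits_2023 = 3.4e12   # corporate profits in 2023 (from the text)
gdp_2023 = 27e12        # approximate 2023 U.S. GDP (assumed denominator)

profits_2033 = 78e12    # profits if the extra $75 trillion of output all became profit
gdp_2033 = 100e12       # GDP under the quadrupling scenario (from the text)

print(f"2023 profit share of GDP: {profits_2023 / gdp_2023:.0%}")  # roughly 13%
print(f"2033 profit share of GDP: {profits_2033 / gdp_2033:.0%}")  # roughly 78%
```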

But there is another, more straightforward reason why this is nonsense: Korinek, Manning, and others assume AI ends competition. If companies could, they would raise prices as high as possible to maximize profits. But in a capitalist system they cannot, because competition prevents it. This is why domestic corporate profits as a share of GDP are about the same now as they were 50 years ago, even though productivity has risen significantly. The idea that AI would eliminate competition implies either that one company dominates all AI, or that one AI-powered company dominates each industry segment (one bank, one insurance company, one law firm, etc.) with absolutely no threat of further market entry. Under no scenario is this remotely realistic, if for no other reason than that AI is a tool many industries will use, and those industries will still compete.

There is a variant of this “capitalists gain all” tale. Brynjolfsson and Unger write that the “‘visible hand’ of top management managing resources inside the largest firms, now backed by AI, allows the firm to become even more efficient, challenging the Hayekian advantages of small firms’ local knowledge in a decentralized market.” Who cares? As long as we don’t have many sectors with monopoly pricing power (and we currently do not), this dynamic will simply mean higher economic growth (large firms are more productive) and higher wages (large firms pay their workers more than small firms do). Throwing in Hayek to signal to conservatives that you like markets does not change this reality.

7. The Fallacy That Only Government AI Regulation Will Save Us

All this handwringing and doom forecasting is a setup to justify massive government intervention in the economy. According to many advocates and prognosticators, without government intervention AI will not even be “humane” (a term that makes little sense). Exactly why would businesses and consumers buy inhumane AI, whatever that is? We don’t have a humane furniture policy. Or a humane banking policy. Or a humane bowling alley policy. Organizations compete in the marketplace, and if they provide inhumane products, including AI, those products don’t sell. (Some might argue that guns are inhumane, but not to the military, police, hunters, and people who use them for home defense.) Of course, there is tort law, as well as regulation in many industries, but that is very different from regulating specific technologies, like computing or AI.

As Brynjolfsson and Unger write:

For each of the forks in the road, the path that leads to a worse future is the one of least resistance and results in low productivity growth, higher income inequality, and higher industrial concentration. Getting to the good path of the fork will require hard work—smart policy interventions that help shape the future of technology and the economy.

No, it does not. And no, it won’t.

One policy that many regulatory advocates want is to pressure or tax companies into never using technology to lay off workers. Korinek has advocated for steering AI development in the direction of job creation rather than job displacement. What? We have full employment, and when we occasionally don’t, the Fed addresses the problem with monetary policy. What we do not have is strong productivity growth, and historically that has come largely from automation freeing up workers to do other things. Steering technology in the direction of job creation leads to preposterous policies like the bans on self-service elevators one often finds in India. Policies like that create lots of jobs. But they also reduce national income, because people are doing completely unneeded tasks. This kind of thinking is dangerously radical and completely counter to the historical U.S. vision of technology and innovation.

Another sweeping scheme many AI interventionists promote is universal basic income (UBI). According to its proponents, AI will create levels of unemployment that make the Great Depression look like full employment, and the only answer is for the government to give everyone free money. There is perhaps no more nefarious concept than that. It destroys the idea that citizens have a responsibility to contribute to society, whether by building houses, educating children, or providing financial services. And it would create a class of people relegated to welfare with little incentive to improve themselves.

I used to think that UBI was the ultimate in intervention, but I was wrong. Dario Amodei, CEO of the AI company Anthropic, recently said we will need more than UBI to solve income inequality: “I think in the long run, we’re really going to need to think about how do we organize the economy, and how humans think about their lives?” OpenAI CEO Sam Altman agrees, proposing the truly strange idea of a “universal basic compute,” in which people would own a share of the large language models. These ideas verge on, if they have not already arrived at, overthrowing capitalism and replacing it with communism: “from each according to his abilities, to each according to his needs.”

Imagine if we had today’s intellectuals a century ago. I can hear them now: Tractors are killing farm worker jobs! Instead of so-so tractor technology, we should attach motors to workers’ arms to help them scythe more efficiently… And of course: Tractors will only enrich “Big Farmer” and do nothing to reduce food prices! Thankfully, common sense prevailed then, and we can only hope it will again in our era.

So, what has changed? Why do most elites now oppose tech-based automation, especially from AI, and paint dystopian pictures of capitalist oppression of the proletariat? The answer, I believe, is that they have come to believe in the ultimate sacredness of the self: Anything that might in any way inconvenience an individual, including a worker, must be opposed, even if accepting that inconvenience is in the public interest. We have flipped from a society in which John F. Kennedy could implore us to “ask not what your country can do for you; ask what you can do for your country” to one with an ethos of “ask not what you can do for your country; ask what your country can do for you.” And in this ideology of radically self-centered individualism, shared by many on both the left and the right, what America can do for workers is never allow them to lose their jobs. Once you understand that this is their motivation, even if it is not conscious, all of this starts to make sense. We now live in a world where the individual’s rights are privileged above all else, including productivity growth that benefits society as a whole.

We can see this impulse play out clearly in a new Brookings report, “Generative AI, the American worker, and the future of work.” The authors ask and attempt to answer three questions:

“How do we ensure workers can proactively shape generative AI’s design and deployment?”

“What will it take to make sure workers benefit meaningfully from its gains?”

“And what guardrails are needed for workers to avoid harms as much as possible?”

The answers to those questions should be pretty clear. For the first two questions: We don’t need to. For the third question: We don’t need guardrails. These questions are all code for ensuring AI is not ever used to replace a worker. If that’s the goal, then yes, government policy is needed. But if the goal is to maximize total societal welfare, then we should reject these handwringing calls for the government to put handcuffs on AI.

None of this is meant to suggest that we don’t also need a world-class system of worker training and adjustment policies optimized for an era of technological change. We certainly do. But at the end of the day, AI does not repeal the logic of economics, from the lump of labor fallacy to the way competitive markets limit profits. So, can we stop with the calls for “human-centered AI” and all the fears of a looming economic dystopia? They just serve to gin up fear and distract from the real task at hand: speeding up development and adoption of AI to automate jobs and otherwise boost productivity so Americans of all incomes can live better and more prosperous lives.
