
The future of work in the era of AI

Recent discussions about the implications of advances in AI for employment have largely polarized. (Reuters)
15 Apr 2024 04:04:40 GMT9

Recent discussions about the implications of advances in artificial intelligence for employment have largely veered between the poles of apocalypse and utopia.

Under the former scenario, AI will disrupt a large share of all jobs, vastly exacerbating inequality as a small, capital-owning class acquires productive surpluses it previously shared with human laborers.

The latter scenario, curiously, is much the same, except that the very rich will be forced to share their gains with everyone else, through some form of universal basic income or a similar program for economic transfer. Everyone will enjoy a life of plenty and freedom, finally achieving Karl Marx’s vision of communism, in which it is “possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic.”

The assumption common to both of these scenarios is that AI will vastly increase productivity, potentially forcing even highly paid doctors, software programmers and airline pilots into unemployment, alongside truck drivers and cashiers. We are told that not only will AI be able to code better than an experienced human programmer, for example, it will also be better at performing any other tasks that this coder might be retrained to do.

If all this is true, AI will generate unheard-of wealth that even the most extraordinary sybarite would have trouble exhausting.

Both the dystopian and utopian outcomes reduce the rise of AI to a political problem: Can those left behind, who will have the advantage of numbers, compel the AI tycoons to share their wealth?

There is some reason for optimism about the answer to this question. Firstly, the gains from AI in this scenario are so extravagant that the super-rich might not mind giving up a few, marginal dollars, whether to appease their consciences or to buy social peace.

Secondly, the growing mass of the left-behind will include many highly educated, politically engaged people, who will join the others more traditionally left behind in agitating for redistribution of wealth.

But there is another, deeper question to ask: How will people respond, psychologically and politically, to the realization that they can no longer contribute to society by engaging in paid work?

Participation in the labor force has already declined significantly since the 1940s among men and, though women entered the workforce in large numbers only as recently as the 1970s and 1980s, their participation rate has also started to decline. This may well reflect a trend in which people at the bottom of society lose the capacity to convert their labor into compensable value as technology advances. AI could accelerate this trend, defenestrating people in the middle and at the top as well.

If the social surplus is shared widely, one might ask: “Who cares?” In the past, members of the upper class avoided taking jobs and viewed those who did so with disdain. They filled their time with hunting, literary pursuits, parties, political activities, hobbies and so on, and they seemed to be rather pleased with their situation (at least if you ignore the bored gentry idling in summer dachas in Anton Chekhov’s stories).

Modern economists tend to think of work in a similar way, as simply a cost that must be offset by a higher wage to induce people to provide their labor. Like Adam and Eve after the Fall, they implicitly treat work as a pure “bad.” Social welfare is maximized through consumption, not through the acquisition of “good jobs.” If this view is correct, we can compensate people who lose their jobs simply by giving them money.

Maybe human psychology is flexible enough that a world of plenty, and little or no work, could be regarded as a boon rather than an apocalypse. If the aristocrats of the past, retirees of today and children of all eras can fill their time with play, hobbies and parties, perhaps the rest of us can, too.

But research indicates that the psychological harms of unemployment are significant. Even after controlling for income, unemployment is associated with depression, alcoholism, anxiety, social withdrawal, disruption of family relations, worse outcomes for children and even early mortality. The recent literature on “deaths of despair” provides evidence that unemployment is associated with an elevated risk of overdose and suicide.

The mass unemployment linked to the “China shock” that began to affect some regions of the US around the turn of the millennium, for example, was associated with elevated mental health risks among those affected. Loss of self-esteem, and a sense of meaning and usefulness, is inevitable in a society that valorizes work and scorns the unemployed and unemployable.

As such, the long-term challenge posed by AI might be less about how to redistribute wealth and more about how to preserve jobs in a world in which human labor is no longer valued. One proposal is to tax AI at a higher rate relative to labor. Another, recently advanced by Massachusetts Institute of Technology economist David Autor, is to use government resources to shape the development of AI so that it complements, rather than substitutes for, human labor.

Neither idea is promising. If the most optimistic predictions about the future productivity benefits of AI are accurate, a tax would have to be tremendously high to have any impact. Moreover, AI applications are likely to both complement and substitute. After all, technological innovations in general enhance the productivity of some workers while eliminating tasks others traditionally carried out. If the government steps in to subsidize complementary AI — for example, algorithms that improve writing or coding — it could just as easily end up displacing jobs as preserving them.

Even if taxes or subsidies can preserve jobs that produce less value than the AI substitutes, they will merely be putting off an inevitable day of reckoning. People who derive self-esteem from their jobs do so in part because they believe that society values their work. Once it becomes clear that their work can be done better, and more cheaply, by a machine, they will no longer be able to maintain the illusion that their work matters.

If, for example, the US government had preserved the jobs of buggy whip manufacturers when automobiles displaced horse-drawn carriages, one doubts those positions would still confer much self-esteem on anyone who took them today.

Even if humans are able to adjust to a life of leisure in the long term, the most optimistic projections for AI productivity portend massive short-term disruptions to labor markets, akin to the effects of the China shock in recent decades. That means substantial, and for many people permanent, unemployment. There is no social safety net generous enough to protect people from the effects on mental health, or society from the political turmoil that would result from such widespread feelings of disappointment and alienation.

  • Eric Posner, a professor at the University of Chicago Law School, is the author of “How Antitrust Failed Workers” (Oxford University Press, 2021). ©Project Syndicate