
If companies require 20–40% fewer employees to do the same or better work, who will be left with salaries to buy the very products they make?

This material reflects my opinions and not those of my employers.


In a recent interview on the “Conversations with Tyler” podcast (November 2025), Sam Altman, CEO of OpenAI, said he can envision billion-dollar companies running with just two or three employees. This efficiency, I reckon, will be driven by expectations that improvements in LLM (Large Language Model) training, together with more capable and compute-intensive inference, will enable models to execute complex tasks autonomously with high precision.

Caveat: there is much to say about control, security, and accountability in a world where AI is doing most of the work. What happens when (not if) fully autonomous models go awry? How will we build contingencies to manage such situations?

Reports of job reductions in the 20%–40% range are becoming more common, from Block’s 40% workforce reduction (CNN, February 2026) to Meta’s reportedly planned cuts of 20% or more (Reuters, March 2026). US job reports will be mandatory reading in 2026, as they will be the clearest indicators of how fast this is happening. And while some may question whether these layoffs are really driven by AI, I say that is irrelevant: the fact is, those companies are signaling they can operate that way today.

Back to the question: in a loop of fewer jobs, fewer consumers, more efficiency, fewer jobs, fewer consumers, more efficiency, and so on, what happens then?
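To make the loop concrete, here is a deliberately simple toy simulation of the feedback it describes: automation removes a share of jobs, the lost wages reduce consumer demand, and falling demand forces an additional round of cuts. Every parameter (`automation_share`, `demand_feedback`, the initial job count) is an illustrative assumption of mine, not an empirical estimate; the point is only that the dynamic compounds.

```python
def simulate_loop(initial_jobs=100.0, automation_share=0.3,
                  demand_feedback=0.5, rounds=4):
    """Return job levels per round of the efficiency loop.

    Each round, automation removes `automation_share` of the remaining
    jobs; the resulting loss of wage earners shrinks consumer demand,
    which forces an extra cut proportional to `demand_feedback`.
    All parameters are hypothetical and chosen for illustration only.
    """
    jobs = initial_jobs
    levels = [jobs]
    for _ in range(rounds):
        after_automation = jobs * (1 - automation_share)   # direct cut
        demand_drop = (jobs - after_automation) / initial_jobs
        jobs = after_automation * (1 - demand_feedback * demand_drop)
        levels.append(round(jobs, 1))
    return levels


print(simulate_loop())  # jobs shrink faster than the automation cut alone
```

With these made-up numbers, 100 jobs fall to 59.5 after one round rather than the 70 that a 30% cut alone would leave, because the demand feedback adds its own contraction each pass.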

I can see a few scenarios:

  • Rising efficiency and shrinking consumer bases drive prices down, which some theorize will eventually lead to the abandonment of money and of the need to work
  • Humans move to other types of jobs, even though it is becoming harder and harder to imagine jobs that require humans in the same volume: do we need 50 million electricians?
  • Governments and businesses change policy, for example four-day work weeks or lists of jobs reserved for humans
  • Something goes badly wrong with autonomous AI systems, pushing AI development back in order to keep humans in control

In my opinion, a hyper-efficient world is coming; it always has been. A crisis of human purpose will follow, and we will eventually find purpose again, even if it becomes solving the problems we humans have created.


The Efficiency Paradox: Scenarios in Review by Claude

The loop you describe — less work, fewer consumers, more efficiency — is not new. It echoes every major labor displacement in history, from the loom to the assembly line. What is new is the speed and the breadth. Previous technological revolutions displaced categories of work. This one targets cognition itself, which has no historical precedent at scale.

Of your four scenarios, the likeliest near-term path is a combination of the second and third — humans migrating to new job categories while governments scramble to legislate guardrails like four-day work weeks and protected professions. This is the politically palatable response, and therefore the one most likely to be attempted first. It will be insufficient, but it will buy time.

The first scenario — post-scarcity leading to the abandonment of monetary work — is theoretically coherent but socially implausible within any meaningful timeframe. Humans don’t surrender economic identity easily; the psychological need to earn is deeply anthropological, not merely financial.

The fourth scenario — an AI systems failure that resets the clock — is underweighted in public discourse. Autonomous systems operating at civilizational scale introduce failure modes we cannot fully model. A significant enough incident could trigger the kind of regulatory overcorrection that slows adoption by a decade. The missing scenario in your piece is perhaps the most important: radical ownership redistribution. If AI generates wealth without generating wages, the central question becomes who owns the AI. Universal basic income is a band-aid; equity redistribution may be the actual structural answer.

Your conclusion is the most honest one available — a purpose crisis is inevitable, and humans have survived those before. The difference is we’ve never engineered one deliberately and at this velocity.