The grounding assumptions of the model:
Sometime in the relatively near future, massive automation of white-collar work will begin, because the truly fundamental advancements needed to make it possible have already happened. We will enter the period of white-collar replacement. Call those workers whose jobs could, in theory, be done by an AI controlling a computer white-collar replaceable.
The value of labor will plummet, especially if robotics can also begin to replace blue-collar jobs. There is a danger of a permanent lock-in of inequality due to a massive devaluation of human labor. We will refer to this situation, anachronistically, as technofeudalism.
A workers’ resistance will arise due to massive job loss.
Given those assumptions, what will happen? Well, it depends on a series of unresolved questions. Although it’s only a “gathering of unknown variables,” I thought it might be useful, on the eve of this possible Armageddon, to lay out those variables:
How long will all of this take?
At what point will, say, 10% of current jobs no longer exist due to AI? Conditional on the assumption that deep learning in roughly the current paradigm can do it, reasonable 95% confidence interval bars might be 6 months to 7 years.
As we will discuss below, different modeling assumptions imply that either sudden massive job loss or slower cumulative job loss will produce the stronger resistance. It all depends on where you think workers’ power lies.
What will the price spread for AI look like, and how will it evolve over time?
The model most people adjacent to this area currently have in their heads is that someone will create a machine that can replace office workers, and that it will be relatively affordable right away. This has never been certain, and recent developments have made it even more questionable. OpenAI spent over a million dollars running o3 on the ARC-AGI benchmark, a series of problems that you or I could do in 2-3 days.
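To see how wide that price spread currently is, here is a back-of-envelope comparison. Only the “over a million dollars” figure and the “2-3 days” estimate come from the text; the task count and the daily labor cost are illustrative assumptions, not reported numbers:

```python
# Illustrative arithmetic only. AI_TOTAL_COST and HUMAN_DAYS are from the text;
# NUM_TASKS and HUMAN_DAY_RATE are assumed for the sake of the sketch.
AI_TOTAL_COST = 1_000_000   # dollars, lower bound on the reported o3 spend
NUM_TASKS = 100             # assumed size of the evaluated task set
HUMAN_DAYS = 2.5            # midpoint of the "2-3 days" estimate
HUMAN_DAY_RATE = 400        # assumed fully loaded cost of a day of office labor

ai_cost_per_task = AI_TOTAL_COST / NUM_TASKS
human_total_cost = HUMAN_DAYS * HUMAN_DAY_RATE
cost_ratio = AI_TOTAL_COST / human_total_cost

print(f"AI cost per task:      ${ai_cost_per_task:,.0f}")
print(f"Human total cost:      ${human_total_cost:,.0f}")
print(f"AI / human cost ratio: {cost_ratio:,.0f}x")
```

Under those assumptions the frontier model costs roughly a thousand times what the human does, which is the sense in which high initial prices could buy us time.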
Likely there will be a Pareto frontier running from the very best and most expensive AI to the very worst and cheapest. The cost needed to replace particular jobs may vary. Does this help or hinder the task of creating a workers’ resistance?
High initial prices might be good. It might give advance warning that something is coming, allowing people time to prepare to fight for their jobs, etc. Alternatively, high initial prices and a subsequent slow rollout might make us like the proverbial frog in the pot.
Do you think our main leverage is our industrial power or our numbers?
Workers have two factors on their side- numbers and industrial power. Industrial power here refers to the capacity workers have to intervene in the economy and politics by withholding their labor.
If you think our main leverage is our numbers, a sudden job loss shock could strengthen the working class. There would be little room for denial about what was happening, and no way for a layer of workers to convince themselves they would be okay because they were the “good” ones.
If on the other hand, you think our main leverage is our industrial power, a slower pace of automation might be preferable. A sudden unemployment shock would greatly reduce the power of withholding our labor.
How will political parties align?
Will there be a straightforwardly “anti-AI” party? The democratic model of politics suggests yes- since it will be in many people’s interests to oppose these changes. On the elite-driven model, things are less clear. My guess is that there will be an anti-AI party because anything this big will have losers among the elites.
Will that party be the Republicans or the Democrats? I’d bet a lot of money on the Democrats if it’s one of the two. I’d also bet their opposition will be far from unqualified. Another possibility is a realignment event- something that splits each of the parties into a pro and an anti AI faction.
Will denial or fear predominate?
Will people respond by trying to convince themselves that AI will never replace them, or with open fear? So far denial is predominating, but that may not last.
How soon will cheap robotics follow the automation of office work?
Current research on robotics is advancing rapidly, but when will robots be able to replace, say, a builder’s laborer? This is deeply unclear. A large gap between the automation of intellectual and physical labor may feed into the white-collar versus blue-collar dynamics discussed below, but a delay will leave a section of workers with important industrial leverage.
How rapidly will military power be automated?
If military power is automated quickly, the relevance of human protest and opposition is greatly reduced. The sine qua non of a successful revolution [should it come to that] in modern times is the military switching sides. If the human component of the military is much smaller and much more insulated from direct fighting, this is harder to engineer.
Will people buy into the inevitable attempt to turn white-collar and blue-collar workers against each other?
I can just about guarantee that we’re going to hear something like the following line: “Those lazy pen pushers in their air-conditioned offices have had it too good for too long. We should rejoice that AI will replace them, so good, honest people who work with their hands can take center stage.”
Will this work? To what degree does white-collar replaceable describe a sociological cluster of workers, less likely to have friendly or familial relations with blue-collar workers, and from whom blue-collar workers see themselves as distinct?
How strong are the regulatory requirements that certain jobs be done by humans?
Numerous jobs must, by law, be done by humans. Which of these requirements will hold up?
How many bullshit jobs are there?
A common view is that many white-collar jobs exist to inflate the egos of managers. If this is true, it suggests that certain jobs will resist automation, since the work was never the point. This could be good or bad. Good, because it might increase the industrial power of the working class, since a section of jobs cannot be automated after all. Bad, because it could divide the working class into a relatively replaceable sector and a relatively powerful, prestigious sector of bullshit-job flunkies who feel insulated against AI.
How much harder or easier is “open” work than routine work?
Roughly speaking, we might divide white-collar work into two kinds of tasks: Routine and Open. Speaking of some white-collar tasks as Routine is not intended to downplay the skills involved- my own job is Routine. The two exist on a spectrum. Routine tasks include most administration, data entry, sales, form filling, and so on. Open tasks include research, extended writing, complex and situationally specific advice, and so on.
Which is easier to automate is (perhaps surprisingly) something of an open question. The relative speed of the automation of each will affect the class composition of the unemployed.
How strong a role will geopolitical competition play?
The U.S.-China Economic and Security Review Commission, an independent commission of the US Congress, recommended in November 2024 that the United States should:
“[E]stablish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability. AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would surpass the sharpest human minds at every task.”
Geostrategic involvement of the government could affect the outlook in a number of ways. One possibility is that AI firms come under greater oversight, which could have all sorts of effects- General Paul Nakasone already sits on the board of OpenAI, and this is doubtless not a coincidence. Another possibility is that the rate of progress in the underlying technology increases, as competition creates pressure for government to support progress and remove barriers. Finally, there might be pressure in favour of automation so that the United States and its allies do not fall behind in great-power economic competition.
It’s hard to see how great power politics doesn’t become a factor- but how big of a factor?
Will the business landscape of AI change?
At present, famously, there is no moat. No one has been able to decisively gain monopoly power in the AI marketplace- for example, by having qualitatively better AI than their competitors. There are numerous competing companies. It’s a consumer’s market.
A monopolist might slow or speed up progress- it’s unclear. Likely though it would slow down access to progress by raising prices. Its effects on the political situation would be enormous- but hard to anticipate. My best guess is that the presence of a monopoly would strengthen the sense of unfairness and arbitrariness about the process, and that would strengthen resistance. People love a good villain.
Will the government get into the AI business as its strategic interest increases?
I believe that at literally any other point in history, even in the Middle Ages, the people building the earth-shattering device would have been nationalized. It is comical that a US congressional commission is talking about a “Manhattan Project” to create AGI yet has no plans whatsoever to nationalize it. But that goes against the Zeitgeist, I suppose. We will see if this holds up.
AI moral patiency
We have staggeringly little understanding of what it means to be a moral patient- an entity worthy of moral concern. At some point, the debate about whether or not the models we interact with have any moral patiency will become more prominent. So far, people trying to stop AI mostly seem to viscerally resent the models, which is natural, but which is the exact opposite of what would be helpful if the aim is to slow down their creation and deployment. A wild card, I’ll admit, but one which interests me.
Repeating a point I made previously, we are long overdue for a reduction in working hours. A 10 per cent increase in productivity, combined with a bunch of other benefits (more satisfied workers, less turnover), would make it easy to deliver a four-day week. And we achieved much bigger reductions in the century or so after 1870.
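The arithmetic behind that claim can be sketched. Only the 10 per cent productivity figure is from the text; the 40-hour baseline week is an illustrative assumption:

```python
# Back-of-envelope check: does a 10% productivity gain cover a four-day week?
hours_five_day = 40                                 # assumed baseline week
hours_four_day = 32                                 # four 8-hour days
hours_cut = 1 - hours_four_day / hours_five_day     # fraction of hours given up

productivity_gain = 0.10                            # output per hour, from the text
output_ratio = (1 + productivity_gain) * (hours_four_day / hours_five_day)

print(f"Hours reduction:      {hours_cut:.0%}")
print(f"Output vs 5-day week: {output_ratio:.0%}")
```

On these numbers, the productivity gain alone leaves output at about 88 per cent of the five-day level, so the remaining gap is what the “other benefits” (satisfaction, lower turnover) would need to close- which is why the claim pairs the two.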
So what is really driving points 1-3 is the assumption that (as has happened for much of the past 40 years or so) bosses will succeed in appropriating the benefits of increased productivity. That might happen if control over access to AI is very tight, but at the moment, as you say, there are no moats. Any worker can get access to a pretty good AI for very little, and employers can't do much to control that.
The boom in remote work, often using computers over which bosses have limited control, is closely related here: https://johnquiggin.com/2024/05/01/machines-and-tools/
Jobs are a means to an end. Vastly increasing productivity - wealth - is good.