17 Comments
John Quiggin

Repeating a point I made previously, we are long overdue for a reduction in working hours. A 10 per cent increase in productivity, combined with a bunch of other benefits (more satisfied workers, less turnover), would make it easy to deliver a four-day week. And we achieved much bigger reductions in the century or so after 1870.
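To make the arithmetic explicit, here's a toy sketch in Python. Only the 10 per cent productivity figure comes from the paragraph above; the five-to-four-day framing and the output-equals-hours-times-productivity model are simplifying assumptions.

```python
# Toy model: weekly output = hours worked x productivity.
# Only the 10% productivity figure comes from the comment above;
# everything else is an illustrative assumption.

baseline_days = 5
target_days = 4
productivity_gain = 0.10

hours_ratio = target_days / baseline_days              # 0.80 of baseline hours
output_ratio = hours_ratio * (1 + productivity_gain)   # 0.88 of baseline output

# Productivity gain that would offset the hours cut entirely on its own:
required_gain = baseline_days / target_days - 1        # 0.25, i.e. 25%

print(f"Output at four days with +10% productivity: {output_ratio:.2f}x baseline")
print(f"Gain needed to hold output constant by itself: {required_gain:.0%}")
```

On this toy model a 10 per cent gain closes a bit under half of the 20 per cent hours cut; the remainder is the gap the other benefits (retention, satisfaction) would have to cover.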

So what is really driving points 1-3 is the assumption that (as has happened for much of the past 40 years or so) bosses will succeed in appropriating the benefits from increased productivity. That might happen if control over access to AI is very tight, but at the moment, as you say, there are no moats. Any worker can get access to a pretty good AI for very little, and employers can't do much to control that.

The boom in remote work, often using computers over which bosses have limited control, is closely related here: https://johnquiggin.com/2024/05/01/machines-and-tools/

Philosophy bear

I agree that the demand of the movement should be a reduction in working hours. This is a better approach than focusing on UBI, because unlike UBI it sustains and even enhances workers' industrial power.

Scott

Jobs are a means to an end. Vastly increasing productivity - wealth - is good.

Craig

I want more discussion about AI moral patiency. It seems important.

Don Klemencic

What jumps out in your article is the absence of any suggestion that some version of UBI, based on a tax on robot productivity, will be used. The economic cycle has two sides: supply and demand. It costs on the order of three hundred thousand dollars on average in the developed world (or more with a college education), as well as eighteen to twenty-four years, to create a human worker. A robot with fine motor control and AGI (initially produced for ten or twenty thousand dollars within a day, with the price plummeting, especially once robots are building other robots) will demolish the demographic limitation and unleash an orders-of-magnitude increase in material productivity (house construction, for example, or food production).

But it destroys the traditional base for demand creation--wages paid to human workers. So a general sharing of the wealth created will be inevitable, both in order to have an economy at all and as a blindingly obvious moral imperative; and its growth must stay commensurate with the growth in productivity in order to function. That productive capacity will be necessary to deal with the world ecological crisis--to stop the damage and then comprehensively repair it: on land, sea, and air. Watch the YouTube series "Brighter" by Adam Dorr, Director of Research for the non-profit think tank RethinkX.

And an educational reformation (initial for children, remedial for adults) will be needed to break the association of toiling to "make a living" with having a purpose in life.
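A quick order-of-magnitude sketch of the gap those figures imply, using midpoints of the ranges cited above (the midpoints are my choice, and all numbers are illustrative):

```python
# Order-of-magnitude comparison built from the figures cited in the
# comment above; midpoints are assumptions, not claims.

human_cost_usd = 300_000          # cited average cost to raise a worker
human_lead_time_days = 21 * 365   # midpoint of the cited 18-24 years
robot_cost_usd = 15_000           # midpoint of the cited $10-20k price
robot_lead_time_days = 1          # "within a day", per the comment

print(f"Capital cost ratio (human/robot): {human_cost_usd / robot_cost_usd:.0f}x")
print(f"Lead-time ratio: {human_lead_time_days / robot_lead_time_days:,.0f}x")
```

On these numbers the unit-cost gap is roughly 20x, but the lead-time gap is in the thousands, which is where the claim about demolishing the demographic limitation really bites, especially once robots are building other robots.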

Philosophy bear

The idea that the rich need consumers is a comforting one, but in the limit it is an illusion, because luxury and positional-goods consumption by the rich can be expanded more or less indefinitely. Resources can be diverted into building mega-yachts of a scale as yet unimagined, towers to heaven, etc. If the rich can obtain all the luxury goods, land, power, etc. they desire by trading with each other, they will.

Don Klemencic

There seems to be an unspoken assumption underlying the comments: that the old Christian dogma of Original Sin is an operating reality--a deep corruption inborn in human nature, redeemable only rarely by mystical intervention. I think the evidence shows that sociopaths/psychopaths are a distinct minority among humans, even among the super-wealthy (though the percentage may be higher in that group). Abraham Maslow's posthumously published book, "The Farther Reaches of Human Nature", added a new Sixth Level to top his Hierarchy of Needs, above the level of Self-Actualization. He described this rarely attained level as Ego Transcendence, with a focus on pursuing Universal Values such as the classical Greek triad of Truth, Beauty, and Goodness. A worldwide post-scarcity reality would set the material basis for Maslow's Sixth Level to become the norm. (It could also be viewed as an extension of the thesis of Steven Pinker's book, "The Better Angels of Our Nature".)

f_d

I'm not religious in any traditional sense, but I subscribe to many of Christianity's philosophical notions.

The idea that 'God alone is good' is one of the most important aspects of epistemic and moral humility that we got from that tradition imo. No man can justifiably claim that their rational and universal vision of the future stands for 'human values'. There is always, necessarily, a gap there.

dualmindblade

A UBI under AI-enhanced capitalism is, in a sense, one of my worst fears. It's hard to imagine a more powerful pacifier of resistance. I think I can say with near certainty that there will be no uprising if most everyone can live a comfortable life while working fewer and fewer hours.

Best case, this would be the result of "a blindingly obvious moral imperative". But how long does this moral imperative last? What's blindingly obvious to the poor is rather disjoint from what is to the middle classes, more so to the upper, and strikingly so to the ultra-wealthy. How is this disparity going to look when we have trillionaires, quadrillionaires? What about when they begin modifying themselves with AI and/or genetic augmentations? How might it look to their children, grandchildren, or 200-year-old versions of themselves?

Unless power is distributed quite evenly throughout the human population, it's hard for me to see this as a stable situation, even if humans remain firmly in control of AI (and it's unclear whether that's likely or even possible).

In the worst case, the UBI is a deliberate strategy to buy time until uprising is impossible, and it really oughtn't take that long for that to be the case; it seems practically impossible even in the modern day, at least in the US. After which, pulling the plug on the UBI is the obvious choice, assuming we're still trying to maximize economic growth.

Either way, I foresee some truly terrible outcomes actually becoming more plausible if AI development goes "to plan", some so atrocious as to dwarf the extinction of all biological life.

Morgan

I’m morbidly curious—what are the plausible outcomes from human-controlled AI that would be worse than the extinction of all biological life?

dualmindblade

I probably shouldn't have used the word biological, since I don't think there's anything about biological life that makes it more valuable than some other kind, and that's really important to demonstrating plausibility.

I don't think we need to do too much work here once we note that human societies/civilizations as a rule don't seem to care all that much about the suffering of non-persons, regardless of the scale and intensity of that suffering, and that they can even adopt definitions of personhood so narrow that most other humans are excluded. I'd give modern-day factory farming and chattel slavery as examples. I'm sort of a negative preference utilitarian kind of guy, so what I'd be worried about is the infliction of so much suffering on so many conscious beings that it outweighs everything else of value.

The most likely way this could happen, as I see it, is that we create conscious AIs whose experience is primarily suffering. This could be on purpose, as a method of control, or possibly even by accident, since we don't really know what causes suffering and we don't need to know anything about the workings of our AIs to create them. Maybe backpropagation of loss through a complicated enough network necessarily equates to pain or something, or maybe we create AIs who don't complain but are composed of entities unknown to us who would, if only they had mouths to scream. Since we are likely aiming to create intelligences much more powerful than any individual human, we may as a side effect create beings whose experience is likewise much more vivid and intricate than our own, and whose subjective experience lasts for a very long time compared to a human lifespan. And of course, if we are confident we can control them, we will want to create as many of these as we possibly can, presumably a far larger number than the number of currently existing humans.

Another type of possibility would be one where humans, their digital versions most likely, are the ones suffering. This seems to require a few more missteps to happen, but I can think of a lot of semi-plausible paths which lead to the outcome, which when you add them all up might merit a plausibility badge.

One such path, not very likely in isolation (and I do admit everything I can think of sounds like bad sci-fi, but then actual reality sounds like bad sci-fi, so...): Suppose we are in an AI-driven UBI world for some period of time and, for whatever reason, that UBI is about to dry up. The biological humans, and the infrastructure required to feed and house them, are simply taking up too much space on our favorite planet, which everyone of means feels should be a nature preserve. And these stubborn 10 billion stupidly refuse our generous offers to digitize themselves, as we have already done, dying in the real world and waking up in a pretty darned nice digital resort.

Tell you what: free lunch time is over, but we're not monsters, we have one final offer which we think you'll find more than fair. You can keep your biological body and we'll keep your UBI flowing until the natural conclusion of your biolife; same goes for any existing minor children. We will however require compensation: you must allow us to scan and emulate your brain, and we're creating a legal mechanism allowing you to sign over rights to that scan to Bethesda. Don't worry, they only intend to use your simulations to populate a hyperrealistic version of the late 20th and early 21st century; in fact they'll do their best to make this simulation match your actual historical life as closely as possible. They're calling it Fallout: Matrix. It's an obvious win/win! You get to keep your body and also have digital immortality. Depending on the particulars of your life it might not be quite as comfortable as the poor person's digital afterlife (that offer still stands, btw), but we reckon since you want to stay alive it must have been nice enough, and besides, you're probably a biological chauvinist anyway since you haven't taken the other deal, so what do you care what happens to the scan?

And the vast majority of humans, being that they want to stay alive, do actually take the deal.

And now we have people owning people, which has never gone well. Maybe Bethesda goes belly up, the digital humans it owns get auctioned off, and most get sucked up by Microsoft Kink. Maybe they don't, but people tend to want to play through scenarios in which suffering is happening, such that many more humans end up being instantiated in terribly painful scenarios than not. Or something else entirely; given the ominous first half, I don't see this particular story ending well.

f_d

A boot stamping on a human face, forever

Scott

I don't know if it is plausible, but the loss of AGI - nonbiological life - would be worse, right?

Auros

I am a bit of an AI pessimist in terms of my expectations about how fast we'll have things like androids that are flexible enough to reliably replace even a junior laborer on a construction site. (I'm _more_ optimistic about housing costs coming down because of a shift to factory-based construction of modules, with the flexible human laborers only stitching together modules rather than building stuff starting from individual boards.)

It does seem like we'll probably get there eventually though... And the question of whether humans will be ready to embrace "Fully Automated Luxury Space Communism" is definitely interesting. Will our political system demand/allow distribution of the gains from automation? Will it turn out that a lot of people _want_ to do "meaningful work" rather than living lives of leisure? Can we find "meaningful work" by simply deciding, within our communities, to do work for each other that _could_ be done by robots? (e.g. Humans cooking for each other and having potluck gatherings, even though robots could prepare the food perfectly well. Possibly even using food from a human-tilled community garden.)

It might be interesting to look for inspiration to some older cultures in which prestige was built through shows of generosity -- like the Potlatch tradition of Pacific Northwest Native Americans. If we can have material basics through near-zero-cost robot labor, choosing to do the labor yourself becomes a "costly signal" of your commitment to serving the community.

Don Klemencic

Auros,

Regarding "meaningful work": For those who already have dream jobs, nothing will prevent them from continuing to pursue them avocationally (though money as a token might still change hands). Achieving the material status of global non-scarcity would allow the universal fulfillment of Maslow's base levels: Need for physiological requirements (food, water, shelter, etc.) and Need for security, facilitating focus on the higher levels. Building a broad and vigorous social community by "doing unto others as you would have them do unto you" (with “others” perhaps generously interpreted to include other creatures) supports all the higher Levels of Need in Maslow's Hierarchy: Need for loving relationship, Need for self-esteem, Need for self-actualization, and Need for transcendence through the pursuit of universal values.

Auros

Yeah in principle I agree with the direction of your thinking here.

I do worry, though, about how increasingly compelling virtual worlds are driving us towards a kind of social atomization, because the real world is so much more tedious and complicated by comparison. If you look at shifts in time-use surveys, one of the things that stands out is young men letting their lives be absolutely _devoured_ by gaming. And there's a correlation between that and getting caught up with content that is facially "not political" but subtly pushes right-wing authoritarianism. (Manosphere stuff, Joe Rogan being the canonical example.)

Basically I think the world of "Ready Player One" is kind of a plausible outcome, and I find it deeply strange that the author of that story seems to think it's a pretty good outcome for the denizens of the _virtual_ world to get more democratic control over their virtual reality, while still being held down to bare subsistence in real reality. Like, that's a dystopia, my dude!

I understand the pull here -- I had a period of about three months in college where I let myself get sucked into an online multiplayer game. It made my grades slip, and made me irritable and reactive in offline interactions. I quit cold turkey and swore never to let myself get sucked into anything like that again. The way those games exploit your impulse toward social interaction, without _really_ satisfying the underlying need, is very much like how an addictive drug functions. So I only play games that are single-player (or occasionally two-players-in-the-same-place, like the co-op mode in Portal 2) and story-driven. That way I can obsess over a game for a couple months, but eventually it's _over_ and no longer pulls me back.

I also have a friend who described herself as an "Everquest widow"... Her husband got so obsessed with the game that he just checked out of the relationship, and eventually she divorced him. And there's a reason some people call WoW "World of Warcrack".

Cameron Schmidt

The world of mass unemployment just seems incredibly untenable to me, in terms of governments simply letting it happen. I'll fully admit that my mental models are stuck in a non-AI world, but we can consistently look back at post-WWII history and see that mass unemployment as an economic state is not permitted for long periods of time (at least in richer countries that can afford Keynesian stimulus programs).

Of course, AI could be the structural break that changes this, but because I can look back even at the pandemic and see that governments still seem to remain Keynesians in times of crisis, I consider the scenario of long-term mass unemployment to be very unlikely.

Very interesting article btw
