Jacobin has an excellent article on AI. For a while, I’ve been meaning to write a piece on the same topic, and it makes many of the points I’d make. Because I’m full of myself, this is about the highest praise I can give an article. If I had one criticism, it would be that it should be half the length it is. The author, Garrison Lovely, spends too much time showing us he’s done the work, though given the fire and fury that surrounds this terrain, perhaps that is prudent.
A few extracts:
Open Philanthropy (OP) AI risk researcher Ajeya Cotra wrote to me that:
“the logical end point of a maximally efficient capitalist or market economy” wouldn’t involve humans because “humans are just very inefficient creatures for making money.” We value all these “commercially unproductive” emotions, she writes, “so if we end up having a good time and liking the outcome, it’ll be because we started off with the power and shaped the system to be accommodating to human values.”
…
Leahy also conveyed something that won’t surprise people who have spent significant time in the Bay Area or certain corners of the internet:
“There are actual, completely unaccountable, unelected, techno-utopian businesspeople and technologists, living mostly in San Francisco, who are willing to risk the lives of you, your children, your grandchildren, and all of future humanity just because they might have a chance to live forever.”…
We may not need to wait to find superintelligent systems that don’t prioritize humanity. Superhuman agents ruthlessly optimize for a reward at the expense of anything else we might care about. The more capable the agent and the more ruthless the optimizer, the more extreme the results. [Philosophy Bear: This is Jacobin talk for capitalism]
Sound familiar? If so, you’re not alone. The AI Objectives Institute (AOI) looks at both capitalism and AI as examples of misaligned optimizers. Cofounded by former public radio show host Brittney Gallagher and “privacy hero” Peter Eckersley shortly before his unexpected death, the research lab examines the space between annihilation and utopia, “a continuation of existing trends of concentration of power in fewer hands — super-charged by advancing AI — rather than a sharp break with the present.” AOI president Deger Turan told me, “Existential risk is failure to coordinate in the face of a risk.” He says that “we need to create bridges between” AI safety and AI ethics.
…
But when you look at the material forces at play, a different picture emerges: in one corner are trillion-dollar companies trying to make AI models more powerful and profitable; in another, you find civil society groups trying to make AI reflect values that routinely clash with profit maximization.
In short, it’s capitalism versus humanity.
Jacobin has published an article on AI, focusing on existential risk. It’s pretty good.
They note, correctly, that we can understand existential risk from AI through a socialist lens. Competing capitals race to create more and more profitable technologies, ignoring the ultimate externality- the extinction of our species.
As the article notes, obliquely, there’s something of a trend against worrying about existential risk on the left. Denunciations of those who do worry about existential risk can get quite heated.
Why? What’s going on?
Before we get to that, take a moment to think about this quote: “One developer at a leading lab wrote to me in October that, since the leadership of these labs typically truly believes AI will obviate the need for money, profit-seeking is ‘largely instrumental’ for fundraising purposes. But ‘then the investors (whether it’s a VC firm or Microsoft) exert pressure for profit-seeking.’”
Yes, obviously VC is doing what VC does best, but the astonishing thing is that the leadership, or at least portions of it, believes, or at least pretends to believe, that it is going to ‘obviate the need for money’. Even if AI scarcely changes the world at all, and even if the leadership doesn’t believe it will abolish money and the human economy and is lying for strategic purposes, the fact that senior, ‘serious’ people are semi-openly saying things like this shows that we are not in Kansas anymore. Something weird is happening, at the very least at the level of sociology.
How did the bloc that opposes seeing AI as an existential risk come to be?
A few months ago I was in a philosophy seminar. The thesis of the seminar was essentially that claims that AI might kill everyone are hype and distraction, and that the problems of AI are really human problems: the problems of social relations. Philosophers who disagree have fallen for the hype machine. I responded by saying A) the problems of nuclear weapons are also, at base, problems of how we humans relate to each other; nonetheless, they really could kill all of us, and B) if a bunch of philosophers say X, maybe it’s a bit much to claim, without defense, that anyone who defends X is, at best, an unknowing shill for the tech sector. I said all this with impeccable politeness; nevertheless, the speaker responded with contempt, both during and after the seminar. People are, to put it mildly, ferocious about this stuff.
It’s easy enough to see what’s going on here. The AI ethics and human-centered computing people wanted this to be their big moment in the sun. They’re angry that the media is paying attention to the AI-risk people instead of them. They think that, in the age of clickbait, the AI-risk people have an unfair advantage: dramatic, headline-winning claims that AI might kill everyone. As they see it, the AI-risk crowd is luring attention away from real, flesh-and-blood problems to sexy-terminator-on-steroids bullshit. How can AI ethics compete with sci-fi? Thus, the AI ethics crowd escalates in the only way it can to try to win back the spotlight: it bites with venom.
In this crusade, the AI-ethics crowd is linking up with a bunch of people, ranging from Emily Bender to Noam Chomsky to Steve Pinker, who are furious that AI software capable of carrying on a conversation isn’t working like their models said it should. It’s too associationist, it doesn’t have a universal grammar module, it models the world through patterns in text rather than by grounding language in extralinguistic experience, it… [Never mind that they never specify an exact, falsifiable task that the current LLM paradigm will never be able to do. It’s always ‘fundamental limitations’ in theory, shifting goalposts as LLMs do more and more in practice. You would think their failure to predict the trajectory of AI thus far might engender a little caution. I would challenge, e.g., Freddie de Boer to name one precisely specified, quantifiable thing that can be achieved using language that he thinks an LLM will never achieve.]
The left has been won over to the anti-AI risk position, at least for the most part, because:
The AI ethics people are close to them.
A lot of the more intellectual parts of the left defer to Chomsky on this stuff, not recognizing that he has his own axes to grind in cognitive linguistics.
For some on the left, human labor power is a fundamental ontological category, and the idea that it might be substituted feels almost a priori wrong.
Tech is genuinely evil, like all sectors of capital, and many in tech are worried about this maybe-AI-will-kill-everyone business. That negatively polarises the left against the concern.
Also, the left’s hatred of tech is at fever pitch because, oddly, the New York Times has been running a campaign against the tech sector. Of course, the left can’t be equated with the NYT, but if the NYT says a sector of capital is particularly evil, the left won’t object.
There’s currently a bit of an anti-intellectual vibe on the left. I believe Chapo described the idea of AI killing everyone as “nerd-shit”.
SBF was a big proponent of concern about existential risk from AI. That made AI risk look ridiculous to the left when his scam collapsed. In general, Effective Altruism has foolishly exposed itself to political and reputational risk by associating itself with billionaires. EA people aren’t, for the most part, sinister, but their stupidity in thinking that they can just focus on buying mosquito nets and doing alignment research without recognizing the operation of power politics may be the ruin of them. The question of power can never be evaded.
A new ape joins the discourse
If this were all there was, the left might be locked into opposition to the AI risk community.
But now a new piece is on the board. The effective accelerationists, many of whom actively advocate for AI to kill us all, are so sinister, and so clearly right-wing, that they negatively polarise the left towards backing the AI safety people. This is, I suspect, part of what made this Jacobin article possible.
As someone who thinks there is a real risk AI could kill us all, I would prefer the issue of AI risk remain broadly apolitical so that it can draw in support from the left and right.
But I don’t think this will happen. The problem is how to politicize it intelligently, ideally while keeping the issue accessible to all sides of politics.
A decisive moment in the involuntary politicization of the AI safety movement occurred recently when a large slice of Twitter, mobilized under the leadership of the effective accelerationists, screamed their lungs out against the firing of Sam Altman. That firing was seen at the time by many, rightly or wrongly, as an expression of AI safety concerns by the board. Many of the mob gathered in defence of Altman made explicitly right-wing appeals: AI, as a sector of capital, is too valuable to be left in the hands of not-for-profit softies; OpenAI must be run for profit maximization. Jacobin saw it in much the same way I did:
“Immediately after Altman’s firing, X exploded, and a narrative largely fueled by online rumors and anonymously sourced articles emerged that safety-focused effective altruists on the board had fired Altman over his aggressive commercialization of OpenAI’s models at the expense of safety. Capturing the tenor of the overwhelming e/acc response, then pseudonymous founder @BasedBeffJezos posted, ‘EAs are basically terrorists. Destroying 80B of value overnight is an act of terrorism.’”
I think another politicizing force driving AI safety to the left will be job loss from AI. Job loss from AI (or, at any rate, attributed to AI) will doubtless trigger discussion about the possibility of AI murdering us all, because that’s how generalised angst and anger work. At the moment it looks like the job loss discussion is going to be left-wing-coded because A) political discussions about job loss are almost always left-wing, and B) the jobs lost are going to be among, for want of a better term, knowledge workers, who tend to be on the left. I’ve already seen several right-wingers celebrating the idea of the preachy PMC losing their cushy email jobs.
If the left plays its cards right, it can use the anger over job loss from AI to surge in strength. If EA plays its cards right, it can use the anger over job loss from AI to slow down progress towards strong AGI.
Looking ahead
The Jacobin article draws attention to a problem- how do we unify concerns ranging from algorithmic racism to killer drones, to job loss, to a human-made eldritch horror devouring us?
The fight for a democratic future is one way we can conceptually unite many kinds of AI worries. Much of what’s bad about both the paperclip apocalypse and the capitalist-run AI dystopia is that in both scenarios, alive or otherwise, ordinary people are wholly marginalized. Moreover, capitalists racing to gain power through AI may accidentally unleash a paperclip apocalypse (and yes, I know a paperclip maximizer eating us all is not a likely way this happens, but that’s the standard example and we’re stuck with it now). Almost all worries about AI can be conceived of as worries about the failure of democracy: racial bias, misinformation, surveillance, police states, unemployed and vulnerable masses, permanently empowered elites controlling god-like computers, and, finally, the loss of humanity’s control over its future.
To AI risk people I’d put it this way. Most of the intermediate steps in creating a world-controlling anti-democratic human-controlled singleton are the same as the intermediate steps in creating a world-controlling anti-democratic AI-controlled singleton bent on wiping us out. Placing AI under democratic rule reduces the likelihood of some lab racing to create a god it can control, and either succeeding and dominating us all, or succeeding in making the AI but failing at the control part.
To socialists, I’d put it this way: the profit motives that drive the race toward technology that could cause mass unemployment and the disempowerment of human workers are the same profit motives that could create a poorly controlled AI. Companies race to create it first so they can claim the prizes, and they cut corners in the process.
At some point, every article on AI safety and the left says this, but I feel I must repeat it: both AI safetyists and the left should well understand the dangers of shaping humanity’s future based on a frantically racing inhuman optimization process.
Okay- an overarching concept like ‘democracy’ is nice, but can we find a more direct consilience? What’s something more immediate that everyone can unite around? Well, the idea of an AI slowdown-and-think is one possibility that can unite everyone from at-risk knowledge workers to anti-racists and advocates for reducing existential risk. It can also potentially unite parts of the right- for example, some elements of the right are worried about transhumanism.
What I would say, to everyone involved, is that it is not a good idea to do politics by vibe, especially in novel and high-stakes territory. When we do things by vibe, we miss the possibility of tactical alliances with those we find icky. We rely on pattern-matching alone to try to guess the outcomes of unprecedented events. We ignore low-probability but very high-stakes possibilities because ‘it doesn’t seem likely’.
To the left. You cannot understand AI by comparing it to Bitcoin. It’s stupid, you know it’s stupid, I don’t even have to convince you. It’s a defense mechanism. Likewise, arguing that “AI can’t work because it contradicts secular stagnation and declining-empire vibes” is silly. Random shit happens that isn’t in line with the narrative or the vibes; welcome to being.
On the EA/AI risk side, I would caution against the following sadly popular approach: thinking about power politics gives me bad vibes, so I don’t want to do it. I want to keep deluding myself into thinking that saving the world can be a team effort in which sensible people take sensible precautions because they are sensible and capitalists forego short-term profits because it is right. PoliticsIsTheMindKiller, etc., etc. This will not cut the mustard! Instead, you need to engage with politics while trying to minimize polarisation. That’s hard, but it is key.
Twitter has made it hard to have strategic conversations about such issues because it polarises us into enemies and friends too quickly. Anyone who says “I support the left, but think it has the wrong line on AI” looks like a concern troll. Whether someone is an enemy or a friend is indeed, à la Schmitt, a decisive organiser of the political sphere, but it is not a substitute for actually thinking things through.
Be a serious person. Think seriously. Stop using vibes, cultural commentary, and low-grade standup as a substitute for doing the difficult work of modeling the world. Don’t rely on a single model. Seriously entertain the possibility you might be wrong. Hedge. Experiment. Be cautious. Stop using irony as a defense against thought.
It is an interesting article from Jacobin; I feel like it's coming from a similar place to me, where I'm generally concerned about the issue but unable to really make confident predictions about how bad I think it'll actually be.
One thing I disagree with you on - I don't think EA is unaware of power or political considerations. I think on a lot of issues of interest (AI, foreign policy, philanthropy) there's a deliberate choice to ally with the people with money and influence rather than the people without either, who will mostly hate us for not being the right type of socialist regardless of what we do. Now, this approach obviously does come with constraints and limitations, and it's important to be aware of that, but I don't think people in EA are completely unaware of the tradeoff involved.
It's funny yet overdetermined that once the left bothers to engage with AI issues at all, rather than denouncing the whole thing as a distraction, their/our position lines up with the AI safetyists/Yudkowskian rationalists.