11 Comments

It's an interesting article from Jacobin. I feel like it's coming from a similar place to mine: I'm generally concerned about the issue but unable to make confident predictions about how bad it will actually be.

One thing I disagree with you on: I don't think EA is unaware of power or political considerations. On a lot of issues of interest (AI, foreign policy, philanthropy) there's a deliberate choice to ally with the people who have money and influence rather than the people without either, who will mostly hate us regardless of what we do for not being the right type of socialist. This approach obviously comes with constraints and limitations, and it's important to be aware of them, but I don't think people in EA are completely unaware of the tradeoff involved.

It's funny yet overdetermined that once the left bothers to engage with AI issues at all rather than denouncing the whole thing as a distraction, their / our position lines up with the AI safetyists / Yudkowskian rationalists.

I think this is missing the point by disregarding actual AI doom scenarios: the rich rendering the rest of humanity economically redundant through zero-sum competition against automation, or regulatory capture that attempts to establish a monopoly where only the rich have AI, all in the name of stopping foul-mouthed chatbots.

The working classes have always had two defenses against the rich: one, refuse to work until their demands are met; two, threats of violent revolt. Sufficiently advanced automation technologies remove both. Strikes become ineffective if human labor has no value anyway because machines are doing all the work, and violence is ineffective against entirely autonomous security ordered to suppress any unemployed and unemployable humans who don't just peacefully starve to death in accordance with the Non-Aggression Principle.

Ted Chiang was right when he described rogue AI as Silicon Valley technocrats envisioning a boogeyman in their own image.

https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway

It'll try to take over the world? Make everyone unemployed? Destroy everything in the name of maximizing some abstract value of its own? Let everyone it doesn't value die? The oligarchs already do all those things themselves; they just fear an AI would be able to do them more effectively, and against them.

If the oligarchs don't want us treating getting our own AI as an existential arms race, they need to prove we can survive without having MAD deterrence against them. Unless the oligarchy implements a viable plan now for distributing the robotically produced bounty of their AIs to us, getting our own AIs programmed to "protect us", or even just deliberately Misaligned to "turn everything into paperclips" and kept Boxed with dead-man switches to release them, is self-defense.

The United States is leading in the AI race, so you would expect foreign disinformation outlets to push AI fear in order to close that gap.

The possibility that beings ineffably smarter than us will engage in paperclip making to our extinction seems unlikely, I think. The possibility that ur-AI is extinguishing us right now seems rather more likely, to wit: fertility rate is below replacement for the rich half of humanity already. Life is getting so interesting, engaging, meaningful, and fun that we're having fewer kids; how's *that* for an extinction strategy we can all get behind to ensure our AI descendants a happy planet?

This is a very interesting article from a coalition-building perspective, and I wish you well in your goal. But I am sadly very cynical about the empirical effectiveness of such efforts. Note, my perspective is an uncommon mix: 100% pro-tech, convinced "AI doom" is wrong, and sympathetic to the "AI ethics" crowd's arguments even though they'd put me up against the wall come the revolution.

One quick quibble - I'd say you fixate way too much on Chomsky, making the common error of treating a high-name-recognition person vaguely associated with a political group as the leader of everything connected to that group. I know, many people hate him for his leftist politics. But his linguistic theories don't have anything to do with that aspect of his prominence.

That's sort of a small example of a larger problem I have with this article. To me, you take a really weird turn around here: "How can AI ethics compete with sci-fi? Thus, the AI ethics crowd escalates in the only way they can to try and win back the spotlight - they bite with venom." It's as if you model their thinking in purely performative, status-seeking terms, and not as a serious political clash. Consider an alternative framing, something like (deliberately exaggerated rhetoric to make the point): "Thus, the AI ethics crowd fights back against the plutocrats' cat's-paws, who seek to distract attention from the use of AI for exploitation, racism, misogyny, etc., via the educational method of pointing out how the nonsense-on-stilts is being used as a diversion." Humor aside, do you see the difference in perspective? Yes, exactly: "Stop using vibes, cultural commentary, and low-grade standup as a substitute for doing the difficult work of modeling the world."

This isn't about some sort of debate game. If the other side, in a conflict where both sides loathe each other, thinks you're a bunch of Useful Idiots being deployed as a smokescreen, and you propose an alliance, OF COURSE they'll reject it with extreme prejudice! You didn't offer them anything; in fact, the reverse. And if you don't understand this, and seethe that they were rude in their rejection, proving they are indeed loathsome, well, that's why this stuff is hard. It's way more difficult than "We don't like AI, you don't like AI, let's join forces" ("... the idea of an AI slowdown-and-think is one possibility that can unite everyone ...").

I feel like one couldn’t have designed a more perfect experiment: someone goes to the social-justice leftists and says, “hey, you get to help save the world, AND pursue all the values you’ve claimed to hold your entire life, but ONLY IF you make common cause with some gross, icky, autistic, libertarian-leaning white male STEM nerds.” The fact that for so many leftists, the answer is a vociferous “no,” has now revealed what they actually cared about all along. We should be grateful that there are any for whom the answer is “yes.”

Scott, allow me to point you to the long comment I just posted, which deals in part with the flaw in the accusation "... revealed what they actually cared about. ...".

https://philosophybear.substack.com/p/notes-on-a-jacobin-article-about/comment/49320160

I don't think of my views as very much left overall, and I have a relatively high tolerance for the oft-expressed ideas of "libertarian-leaning white male STEM nerds" (being the last four of the five myself). But it's amazing (and annoying!) how quickly all the supposed rational virtues - charity, humility, steelmanning, etc - are thrown out the window the moment it's emotionally satisfying to dismiss the despised tribal outgroup as evil and dumb.

Plus...I keep hearing this claim that the woke Left hates nerds. But, sociologically, all this stuff *originated among* nerds, albeit a different subset. Tumblr isn't exactly a hive of neurotypical Chads and Stacies. Nor, for that matter, is academia.

The Right correctly perceives the essential nerdiness of the cultural Left, when it resorts to macho posturing to mock male leftists ("soyboys", etc.)

Or, perhaps, if one insists on high-school typology, it would be more accurate to describe the Tumblr-left crowd as theater kids. Despised by and despising the jocks every bit as much as the nerds do, but with a different set of tastes and obsessions.


I'm open to the matter, but I wish the Jacobin piece had gone into more detail on the reasons to suspect the likelihood of the X-risk, considering its central role in the piece. (The extinction case, or something akin to it, I mean. I've heard of the biotech example, and I followed the Stop Killer Robots campaign for years.)

It's been difficult to get people to care about much better-established existential risks. You yourself mentioned exploiting the case of nuclear weapons killing us all, but my experience with the discourse surrounding the Russia-Ukraine war is that many people are resistant or even hostile to entertaining the subject for fear of adjacent political concerns. There are people who have written that short-term fears are overblown, John Carl Baker from Ploughshares for instance, but at least he's addressing it with arguments, as the serious issue it is.

The drop in fertility rates and its impact on our economies is another issue I've seen ideological resistance to, and only very recently has it been acknowledged as a real concern in Western discourse, though I'm not sure that acknowledgment has come from the Left. People I respect, like Dean Baker, published pieces not so long ago arguing that demographic fears are exaggerated. (It's been a hot crisis topic in East Asian countries like Korea and Japan for many years, where we're already seeing an ongoing impact today.)

Really excellent piece, thank you.
