Part 11 is near my expertise: for context, I'm a grad student at MIT who works on hardware/software co-design.
Chip controls work. DeepSeek trained on GPUs that Chinese companies can no longer legally acquire, and they cited GPU acquisition as their biggest bottleneck. The problem is that GPU controls only delay Chinese AI companies by a matter of months.
My current worry is that the US is preparing for a world where an advantage of months segues into a decisive long-term edge. This does not seem like a world I want to live in.
To point 8, I may be overly hopeful but I think the 'crunchy con' religious right are natural allies against AI. To the extent I'm a conservative it's mostly that I'm dedicated to the proposition of classical education, and afaict the sentiment in that community is hugely skeptical of AI. It takes away the work of learning. I think humanities people in general have this worry but it's important for avoiding left-right polarization because the classical ed movement is a lot of religious and conservative humanities people.
And, again probably because I'm myself religious and in that classical ed community, it did go against my intuition that you describe AI becoming polarized as a left-right issue with *the right* as the pro-AI side. But yes, the tech right is a thing now and Richard John Neuhaus conservatism doesn't have anyone's ear anymore.
I get what you're saying, but at the same time, before COVID became polarized I also wouldn't have pegged conservatives for "cavalier about disease, particularly disease of foreign origin", and the initial narrative from the left was that COVID was NBD and worrying about it was racist (in an uncomfortable similarity to the initial left narrative on AI risk). But what I then saw (from my perspective as a leftist) is that the left got with the program as soon as the threat of COVID became too obvious to ignore, while the entire other half of the country spiraled deeper and deeper into denial for the sole reason (as far as I can tell) that it would be inconvenient for Donald Trump personally if COVID was a crisis. Which is, scarily, exactly what I see happening this time around as well.
Of course, I'm pretty distant on the social graph from any conservatives, so I'm sure my impression of the way events unfolded is really limited. I'd really appreciate your perspective and to hear how it differs from mine!
I think it's possible to see a more optimistic narrative for AI safety here. For starters, the real honest-to-goodness anti-vaxxers seem pretty likely to be anti-AI too. At least, that would be the analogous position to the one they took during Covid: opposition to the ballyhooed technological development that's being touted as a solution for all manner of problems. (And perhaps it's relevant here that the antivax base actually bent Donald Trump to their will, rather than the other way round.)
The problem is that the AI development race isn't likely to include any intrusive individual coercion like vaccine mandates that would activate these people.
> And, again probably because I'm myself religious and in that classical ed community, it did go against my intuition that you describe AI becoming polarized as a left-right issue with *the right* as the pro-AI side.
Yeah, this is a major obstacle in assembling a coalition against the Californian ideology. The left - from which I stubbornly insist on excluding radlibs, and for which politics is downstream of economics - struggles to see fusionist politics as anything other than the rich pulling one over on the hicks yet again - which, even if correct, is not the right mindset for political persuasion. The cultural right, meanwhile, thinks "politics is downstream from culture", and struggles to see different groups of educated urbanites as actually *different* at all.
I am overgeneralizing, of course, but these tendencies are real nonetheless.
Yeah I see what you mean. To me (intuitively) Elon Musk is definitely not 'the right' because 'the right' (intuitively) is 'conservatives' and 'conservatives' (intuitively) is a bunch of religious families who care a lot about liturgy and the Western canon and tried to get ballot status for the American Solidarity Party. Is this a product of fusionism letting everyone think that conservatism at large is the part of it they're invested in and care about? Maybe?
I'm not really sure where such people are to go, with the right now increasingly Nietzschean and the left maintaining its excommunication of all pro-lifers.
Frankly, speaking as someone very much pro-choice, stop looking to other factions to tell you you're invited. You can either try the conventional fusionist strategy, which in all fairness has not worked out terribly if abortion is the only thing you care about, you can try to form a coalition with the most "class-reductionist" elements of the left (to borrow the enemy's language; I would just call them *left*), or you can go on crying alone in the wilderness, but whatever you choose to do it is *your choice*.
Is that my liberal individualism speaking? Yeah, probably. But then again, the left has always promised a synthesis of universalist christian charity and the enlightenment project, not a negation of either.
With most religious conservatives I've met, the main obstacle to making them understand the dangers of AI was that they didn't believe in AI capabilities (with the usual moving goalposts of "this is not real intelligence"). This will naturally change as capabilities become ever more obvious.
Also the Pope has been rather good on that subject so that should at least help with Catholics (right now Francis is very unpopular with conservative Catholics but he is old and it's highly likely his successor will be just as anti-AI).
To your first paragraph, I agree. That's been my experience too. But many people who for theological reasons will balk at the idea of artificial intelligence are nonetheless very persuadable by discussion of the social effects of AI. This is why I brought up the classical ed people. If you try to make the most extreme x risk cases to them you'll bog down trying to prove that human engineering could create a 'real' intelligence. But if you talk about the much more imminent effects and risks--rampant cheating with LLMs, AI chatbots offering an alternative to relationship formation, the risk of mass disemployment, etc.--you'll find very ready agreement that AI ought to be heavily restricted.
Tell me, Nathaniel: with your conservatism in a religious frame, is it, as I suspect, more Protestant (a relationship with Christ and believing in belief, i.e. fideism), or Catholic, where everyone has their place in obedience to the church, and in which fideism is a heresy (an individualist heresy)?
I ask because WEIRDness is regarded as a result of the Protestant movement, of which libertarian California individualism is a direct successor.
Would you be okay with non-Protestant restrictions on AI that impinge on that tradition?
<thinks> must look up the Butlerian Jihad to see where it sits on this dimension</thinks>
I'm a Catholic, actually, and I've no particular commitment to the values of the Unintended Reformation that created the world we inhabit now. Butlerian Jihad? Sign me up.
This just reminds me of the past half a century of attempts at environmentalist outreach to conservatives by arguing they should be natural allies to the environmental movement because it's all about conserving nature, with approximately nil effect on actual conservative support for green policies. Even in continental Europe, where centre-right parties are generally Christian democrats influenced by Catholic social teaching on the subject, environmentalism is still largely associated with the political left. While in the US, well, lol.
Trying to make religious conservatives into AI safetyists is going to be an even more uphill battle, given that AI safetyists are largely not anti-AI and would consider technological stagnation to be just as bad as AGI-caused human extinction (Bostrom's "astronomical waste"). The end goal even for AI safetyists who come from a Silicon Valley libertarian background, like EY, is very much "fully automated luxury gay space communism", so to speak (they themselves will generally call it "post-scarcity" – a term invented by left-wing anarchist Murray Bookchin).
>This just reminds me of the past half a century of attempts at environmentalist outreach to conservatives by arguing they should be natural allies to the environmental movement because it's all about conserving nature, with approximately nil effect on actual conservative support for green policies.
Was this something they were saying to themselves, or saying to conservatives? I've never heard this idea before.
I would be wary of generalizing too much from a single example. I made a list on LW of ideas that US conservatives might've adopted from US liberals: https://www.lesswrong.com/posts/FzSSrx6nbmCZRoKkq/ebenezer-dukakis-s-shortform?commentId=oSnrGYMZzhhewYTWP
I presume you're American, so this may not be as salient to you as it is to us in culturally Catholic countries with strong Christian democratic parties, but I expect @Nathaniel L to know about that kind of rhetoric if he is a political Catholic. It's very much something that not only greens are saying to conservatives, but that specific subsection of conservatives are saying to themselves.
https://en.wikipedia.org/wiki/Green_conservatism
https://en.wikipedia.org/wiki/Religion_and_environmentalism
Even in the US you will see that kind of rhetoric with the whole "conservative conservationist" angle, and the more apartisan NGOs like the Sierra Club using it to bolster a "neither left nor right" image for the environmental movement.
If you can deal with auto subtitles, this is a good overview of the history of the environmental movement in my own country: https://www.youtube.com/watch?v=4XAD3za9pMw
I think the fact that this historical conservative case for environmentalism and @Nathaniel L's conservative case for AI safetyism are both rooted in political-Catholic reverence for conserving nature against technologically induced existential threats makes this a far closer match for a comparison than any of the ideas in your list, which fall either under the category of generic populism ("people who dislike the establishment because it's too right-wing and people who dislike the establishment because it's too left-wing both dislike the establishment") or under right-wingers opportunistically trying to use progressive-sounding rhetoric (which is in fact the exact opposite of what we're talking about here).
>generic populism
If left-wing opposition to AI takes the form of populist rhetoric, such rhetoric could spread to the Republicans, which is a rather populist party right now.
>right-wingers opportunistically trying to use progressive-sounding rhetoric (which in fact is the exact opposite of what we're talking about here).
What are we talking about? I thought we were talking about trying to slow AI through the political system. If that's the case, I think by default all strategies should be on the table. One possible strategy is to get progressives to complain about AI, and hope that right-wingers will opportunistically borrow their anti-AI rhetoric.
>If left-wing opposition to AI takes the form of populist rhetoric, such rhetoric could spread to the Republicans, which is a rather populist party right now.
When I say "generic populism" I mean specifically the truism that people who oppose the establishment because they find it too right-wing and people who oppose the establishment because they find it too left-wing both oppose the establishment. From your list, things like "US federal institutions like the FBI are generally corrupt and need to be dismantled" and "We can't trust elites. They control the media. They're out for themselves rather than ordinary Americans."
Populist rhetoric in this context would mean accusing AI of serving the establishment. Which. I guess you could say when right-wingers complain about "woke AI" they're doing that with regard to AI ethicist arguments. But that's not relevant to AI safety.
If "left-wing opposition to AI taking the form of populist rhetoric" just means attacking the establishment for ignoring AI x-risk for financial and ideological reasons, then that is a more specific ideological criticism, and there is little reason to expect it to cross the aisle any more than any other left-wing populist criticism of corporate misbehavior would.
> What are we talking about? I thought we were talking about trying to slow AI through the political system. If that's the case, I think by default all strategies should be on the table. One possible strategy is to get progressives to complain about AI, and hope that right-wingers will opportunistically borrow their anti-AI rhetoric.
Nathaniel's argument is doing the exact opposite, though: using right-coded rhetoric (preservation of divinely ordained nature and classical culture) to pass AI-safetyist policies, while your examples were using left-coded rhetoric (e.g. "protecting women's spaces") to pass conservative policies (e.g. strict sex segregation and hate campaigns against gender and sexual minorities).
>When I say "generic populism" I mean specifically the truism that people who oppose the establishment because they find it too right-wing and people who oppose the establishment because they find it too left-wing both oppose the establishment.
I'm not sure this is a correct description of populism. Populism could also be generic distrust of self-dealing elites. One factor said to contribute to Trump's popularity is the response to the 2008 financial crisis. Assuming that's actually true, this would represent a case of right-wingers distrusting corporate elites due to misbehavior.
>I guess you could say when right-wingers complain about "woke AI" they're doing that with regard to AI ethicist arguments. But that's not relevant to AI safety.
I'm not sure how much it matters if people oppose AI due to generic wealth/power concentration concerns vs technical AI safety arguments. If complaining about woke AI creates headaches for AI companies, maybe that's valuable.
>Nathaniel's argument is doing the exact opposite though.
I wasn't trying to respond directly to Nathaniel's argument. Like I said, by default everything should be on the table. We should be brainstorming in all directions instead of getting fixated on particular proposals.
>your examples were using left-coded rhetoric (e.g. "protecting women's spaces") to pass conservative policies (e.g. strict sex segregation and hate campaigns against gender and sexual minorities).
There's nothing inherent to a policy that makes it either conservative or liberal. For example, one could argue that immigration restriction is a liberal policy since it preserves workers' wages, or a conservative policy since it preserves a nation's culture. But liberals also care about culture, and conservatives also care about wages. Both liberals and conservatives have opposed immigration at various times. Like immigration, AI represents a new source of culture (e.g. I imagine conservatives hate AI boyfriends) and a new source of labor (putting downward pressure on workers' wages). Opposition could be a bipartisan issue.
Tbf, the AI Pause movement is skeptical about the ‘astronomical waste’ argument, and sees the ‘failing to conquer the galaxy ASAP is as bad as going extinct’ idea as a key reason why mainstream AI safetyists are still so reckless and ineffective.
Concerns about AI are only serious if they reflect what it actually does. The AI people are building, the kind generating large waves of hype, predicts future words. It can generate streams of images and words, which makes it good at mass generation of images, translations, and predictably written text. It is a questionably capable research assistant that produces frequently erroneous answers to scientific and engineering questions. It can score very high on IQ tests, and it also shits the bed when put behind the wheel of a car.
From this one can obviously determine that the AI doomsday scenarios are cooked up by people who treat science fiction like 'woke' people treat The Handmaid's Tale. It comes across as delusional and crazy (because it is) and will not win significant elite or commonplace support because there is no money in it. It may become popular if AI reveals itself to be a bubble, just because no one will want to stick up for AI. But the threat of job loss is much more impactful.
Sorry to be the contrarian lib, but I mean this question in good faith: what happens if the Coalition for AI Safety triumphs in the West, and the resulting regulation slows down AI research in the US, and China beats us to the singularity?
Even if you are very cynical about capitalism/liberal democracy/The Establishment, you must think that CCP overlords are *at least* as bad as, like, Peter Thiel types. And bear in mind that China is not some quasi-functional oligarchy anymore, but a braindead dictatorship under Xi.
I think of an AI revolution (in a non-extinction scenario) as being like the Second Agricultural Revolution - very disruptive and destructive but Pareto efficient, and probably good, in the long (perhaps very long) term.
To be less tongue-in-cheek; an out-of-control paperclip maximizer only needs to be invented *once* to turn all of humanity into paperclips. Stopping Sam Altman from creating a paperclip maximizer in California does humanity no good if another country makes their own paperclip maximizer.
I think there is a narrow path in which the Safety and Acceleration camps compromise, where the latter can freely "accelerate" in exchange for whatever concessions the former demands, like controls on extreme & permanent economic inequality. But this "golden path" seems much less likely than one of these two camps winning and the other just losing - and it isn't obvious to me that the team AI Safety winning in the US, and only the US, will do humanity any good.
Appreciate this reply - this is very interesting stuff, and also very encouraging. I think staving off AI-caused destruction induced by intense geopolitical competition will involve some kind of "digital open skies treaty." I'm not at all a tech person, so I have no idea if/how that could/would work, but nevertheless this is interesting stuff.
> Even if you are very cynical about capitalism/liberal democracy/The Establishment, you must think that CCP overlords are *at least* as bad as, like, Peter Thiel types.
Actually, no, I don't. Not even remotely. It's more like
the collective will of the NYSE < peter thiel < chinese nationalists ~ american nationalists ~ sam altman < Xi ~ pre-brainworms Biden < the chinese new left ~ american new dealers, if any still survive.
Contrary to internet consensus, Xi is the head of the *centrist* faction of the CCP, to the extent that it can cleanly be cut up like that; not my preferred choice, obviously, but he's keeping far worse things at bay.
The issue isn't Xi's ideological tendencies, it's the way his leadership has transformed the Party as an institution. The corruption purges he opened his tenure with have brought about (1) a chilling effect on intra-Party discourse and (2) an extreme case of Tall Poppy Syndrome, especially concentrated among the younger cohorts in the Party. So I'm not concerned about Xi being a mArxIsT or whatever; I'm concerned about the ruling aristocracy of the largest nation on earth being functionally braindead.
In the US, high education / human capital polarization contributes to a situation where every 4-8 years we rotate between a government that can, possibly, make necessary reforms and a government like we have now, which, well... yeah. China's problem is more severe in the *long-term* because of point (2) above; when Xi retires or dies, there are no rising stars to fill his shoes. In the short-run, if Xi decides to blow up bilateral ties with the US by pulling the trigger on Taiwan... who is going to stop him? Who in the Party will question him?
Part 11 is near my expertise: for context im a grad student at MIT who works on hardware/software codesign.
Chip controls work. Deepseek trained on gpus chinese companies no longer can legally acquire, and they cited gpu acquisition as their biggest bottleneck. The problem is that gpu controls only delay Chinese ai companies by a matter of months.
My current worry is that the US is preparing for a world where an advantage of months segways into a decisive longterm edge. This does not seem like a world I want to live in.
To point 8, I may be overly hopeful but I think the 'crunchy con' religious right are natural allies against AI. To the extent I'm a conservative it's mostly that I'm dedicated to the proposition of classical education, and afaict the sentiment in that community is hugely skeptical of AI. It takes away the work of learning. I think humanities people in general have this worry but it's important for avoiding left-right polarization because the classical ed movement is a lot of religious and conservative humanities people.
And, again probably because I'm myself religious and in that classical ed community, it did go against my intuition that you describe AI becoming polarized as a left-right issue with *the right* as the pro-AI side. But yes, the tech right is a thing now and Richard John Neuhaus conservatism doesn't have anyone's ear anymore.
I get what you're saying, but at the same time, before COVID became polarized I also wouldn't have pegged conservatives for "cavalier about disease, particularly disease of foreign origin", and the initial narrative from the left was that COVID was NBD and worrying about it was racist (in an uncomfortable similarity to the initial left narrative on AI risk). But what I then saw (from my perspective as a leftist) is that the left got with the program as soon as the threat of COVID became too obvious to ignore, while the entire other half of the country spiraled deeper and deeper into denial for the sole reason (as far as I can tell) that it would be inconvenient for Donald Trump personally if COVID was a crisis. Which is, scarily, exactly what I see happening this time around as well.
Of course, I'm pretty distant on the social graph from any conservatives, so I'm sure my impression of the way events unfolded is really limited. I'd really appreciate your perspective and to hear how it differs from mine!
I think it's possible to see a more optimistic narrative for AI safety here. For starters, the real honest to goodness anti-vaxxers seem pretty likely to be anti-AI too. At least that would be the analogous position to the one they took during Covid- opposition to the ballyhooed technological development that's being touted as a solution for all manner of problems. (And perhaps it's relevant here that the antivax base actually bent Donald Trump to their will, rather than the other way round).
The problem is that the AI development race isn't likely to include any intrusive individual coercion like vaccine mandates that would activate these people.
> And, again probably because I'm myself religious and in that classical ed community, it did go against my intuition that you describe AI becoming polarized as a left-right issue with *the right* as the pro-AI side.
Yeah, this is a major obstacle in assembling a coalition against the californian ideology. The left - from which I stubbornly insist on excluding radlibs - for which politics is downstream of economics, struggles to see fusionist politics as anything other than the rich pulling one over on the hicks yet again - which, even if correct, is not the right mindset for political persuasion. The cultural right, meanwhile, thinks "politics is downstream from culture", and struggles to see different groups of educated urbanites as actually *different* at all.
I am overgeneralizing, of course, but these tendencies are real nonetheless.
Yeah I see what you mean. To me (intuitively) Elon Musk is definitely not 'the right' because 'the right' (intuitively) is 'conservatives' and 'conservatives' (intuitively) is a bunch of religious families who care a lot about liturgy and the Western canon and tried to get ballot status for the American Solidarity Party. Is this a product of fusionism letting everyone think that conservatism at large is the part of it they're invested in and care about? Maybe?
I'm not really sure where such people are to go, the right now increasingly Nietzschean and the left maintaining its excommunication of all pro-lifers.
Frankly, speaking as someone very much pro-choice, stop looking to other factions to tell you you're invited. You can either try the conventional fusionist strategy, which in all fairness has not worked out terribly if abortion is the only thing you care about, you can try to form a coalition with the most "class-reductionist" elements of the left (to borrow the enemy's language; I would just call them *left*), or you can go on crying alone in the wilderness, but whatever you choose to do it is *your choice*.
Is that my liberal individualism speaking? Yeah, probably. But then again, the left has always promised a synthesis of universalist christian charity and the enlightenment project, not a negation of either.
With most religious conservatives that I've met, the main issue to make them understand the dangers of AI was that they didn't believe in AI capabilities (with the usual moving goal posts of "this is not real intelligence). This will naturally change as capabilities become ever more obvious.
Also the Pope has been rather good on that subject so that should at least help with Catholics (right now Francis is very unpopular with conservative Catholics but he is old and it's highly likely his successor will be just as anti-AI).
To your first paragraph, I agree. That's been my experience too. But many people who for theological reasons will balk at the idea of artificial intelligence are nonetheless very persuadable by discussion of the social effects of AI. This is why I brought up the classical ed people. If you try to make the most extreme x risk cases to them you'll bog down trying to prove that human engineering could create a 'real' intelligence. But if you talk about the much more imminent effects and risks--rampant cheating with LLMs, AI chatbots offering an alternative to relationship formation, the risk of mass disemployment, etc.--you'll find very ready agreement that AI ought to be heavily restricted.
tell me Nathaniel, with your conservatism as a religious frame are you, as I suspect, is it more Protestant as relationship with Christ and believing in belief (Fideism) , or Catholic where everyone has their place in obedience to the church, in which Fideism is a heresy (an individualist heresy).
I ask because WEIRDness is regarded as a result of the Protestant movement from which libertarian California individualism is a direct successor.
Would you be okay with non-Protestant restrictions on AI that impinge on that tradition.
<thinks> must look up the Butlerian Jihad to see where it sits on this dimension</thinks>
I'm a Catholic, actually, and I've no particular commitment to the values of the Unintended Reformation that created the world we inhabit now. Butlerian Jihad? Sign me up.
This just remind me of the past half a century of attempts at environmentalist outreach to conservatives by arguing they should be natural allies to the environmental movement because it's all about conserving nature, with approximately nil effect on actual conservative support for green policies. Even in continental Europe where centre-right parties are generally Christian democrats influenced by Catholic social teaching on the subject, environmentalism is still largely associated with the political left. While in the US, well, lol.
Trying to make religious conservatives into AI safetyists is going to be an even more uphill battle when AI safetyists are largely not anti-AI and would consider technological stagnation to be just as bad as AGI-caused human extinction (Bostrom's "astronomical waste"). The end goal of even AI safetyists coming from a Silicon Valley libertarian background like EY is, very much, well, "fully automated luxury gay space communism", so to speak (they themselves will generally call it "post-scarcity" – a term invented by left-wing anarchist Murray Bookchin).
>This just remind me of the past half a century of attempts at environmentalist outreach to conservatives by arguing they should be natural allies to the environmental movement because it's all about conserving nature, with approximately nil effect on actual conservative support for green policies.
Was this something they were saying to themselves, or saying to conservatives? I've never heard this idea before.
I would be wary of generalizing too much from a single example. I made a list on LW of ideas that US conservatives might've adopted from US liberals: https://www.lesswrong.com/posts/FzSSrx6nbmCZRoKkq/ebenezer-dukakis-s-shortform?commentId=oSnrGYMZzhhewYTWP
I presume you're American, so this may not be particularly salient to you as it is to us in culturally Catholic countries with strong Christian democratic parties, but I expect @Nathaniel L to know about that kind of rhetoric if he is a political Catholic. It's very much something not only greens are saying to conservatives, but that specific subsection of conservatives are saying to themselves.
https://en.wikipedia.org/wiki/Green_conservatism
https://en.wikipedia.org/wiki/Religion_and_environmentalism
Even in the US you will see that kind of rhetoric with the whole "conservative conservationist" angle, and the more apartisan NGOs like the Sierra Club using it to bolster a "neither left neither right" image for the environmental movement.
If you can deal with auto subtitles, this is a good overview of the history of the environmental movement in my own country: https://www.youtube.com/watch?v=4XAD3za9pMw
I think both this historical conservative case for environmentalism and @Nathaniel L's conservative case for AI-safetyism being both rooted in political Catholic reverence for conserving nature against technologically induced existential threats makes this a far closer match for a comparison than any of the ideas in your list, which fall either under the category of generic populism ("people who dislike the establishment because it's too right-wing and people who dislike the establishment because it's too left-wing both dislike the establishment"), or right-wingers opportunistically trying to use progressive-sounding rhetoric (which in fact is the exact opposite of what we're talking about here).
>generic populism
If left-wing opposition to AI takes the form of populist rhetoric, such rhetoric could spread to the Republicans, which is a rather populist party right now.
>right-wingers opportunistically trying to use progressive-sounding rhetoric (which in fact is the exact opposite of what we're talking about here).
What are we talking about? I thought we were talking about trying to slow AI through the political system. If that's the case, I think by default all strategies should be on the table. One possible strategy is get progressives to complain about AI, and hope that right-wingers will opportunistically borrow their anti-AI rhetoric.
>If left-wing opposition to AI takes the form of populist rhetoric, such rhetoric could spread to the Republicans, which is a rather populist party right now.
When I say "generic populism" I mean specifically the truism that people who oppose the establishment because they find it too right-wing and people who oppose the establishment because they find it too right-wing both oppose establishment. In your list things like "US federal institutions like the FBI are generally corrupt and need to be dismantled." and "We can't trust elites. They control the media. They're out for themselves rather than ordinary Americans.".
Populist rhetoric in this context would mean accusing AI of serving the establishment. Which. I guess you could say when right-wingers complain about "woke AI" they're doing that with regard to AI ethicist arguments. But that's not relevant to AI safety.
If "left-wing opposition to AI taking the form of populist rhetoric" just mean attacking the establishment for ignoring AI x-risk for financial and ideological reasons, then that is a more specific ideological criticism, and there is little reason to expect it to cross the aisle more than any left-wing populist criticism of corporate misbehavior would.
> What are we talking about? I thought we were talking about trying to slow AI through the political system. If that's the case, I think by default all strategies should be on the table. One possible strategy is to get progressives to complain about AI, and hope that right-wingers will opportunistically borrow their anti-AI rhetoric.
Nathaniel's argument is doing the exact opposite, though: using right-coded rhetoric (preservation of divinely ordained nature and classical culture) to pass AI-safetyist policies, while your examples used left-coded rhetoric (e.g. "protecting women's spaces") to pass conservative policies (e.g. strict sex segregation and hate campaigns against gender and sexual minorities).
>When I say "generic populism" I mean specifically the truism that people who oppose the establishment because they find it too right-wing and people who oppose the establishment because they find it too left-wing both oppose the establishment.
I'm not sure this is a correct description of populism. Populism could also be generic distrust of self-dealing elites. One factor said to contribute to Trump's popularity is the response to the 2008 financial crisis. Assuming that's actually true, this would represent a case of right-wingers distrusting corporate elites due to misbehavior.
>I guess you could say when right-wingers complain about "woke AI" they're doing that with regard to AI ethicist arguments. But that's not relevant to AI safety.
I'm not sure how much it matters if people oppose AI due to generic wealth/power concentration concerns vs technical AI safety arguments. If complaining about woke AI creates headaches for AI companies, maybe that's valuable.
>Nathaniel's argument is doing the exact opposite though.
I wasn't trying to respond directly to Nathaniel's argument. Like I said, by default everything should be on the table. We should be brainstorming in all directions instead of getting fixated on particular proposals.
>your examples were using left-coded rhetoric (e.g. "protecting women's spaces") to pass conservative policies (e.g. strict sex segregation and hate campaigns against gender and sexual minorities).
There's nothing inherent to a policy that makes it either conservative or liberal. For example, one could argue that immigration restriction is a liberal policy since it protects workers' wages, or that it is a conservative policy since it preserves a nation's culture. But liberals also care about culture, and conservatives also care about wages. Both liberals and conservatives have opposed immigration at various times. Like immigration, AI represents a new source of culture (e.g. I imagine conservatives hate AI boyfriends) and a new source of labor (putting downward pressure on workers' wages). Opposition could be a bipartisan issue.
Tbf, the AI Pause movement is skeptical about the ‘astronomical waste’ argument, and sees the ‘failing to conquer the galaxy ASAP is as bad as going extinct’ idea as a key reason why mainstream AI safetyists are still so reckless and ineffective.
"ASAP" is very explicitly not Bostrom's argument.
Concerns about AI are only serious if they reflect what it actually does. The AI people are building, the kind that generates large waves of hype, predicts future words. It can generate streams of images and words, which makes it good at mass production of images, translations, and predictably written text. It is a questionably capable research assistant that produces frequently erroneous answers to scientific and engineering questions. It can score very high on IQ tests, and also shit the bed when put behind the wheel of a car.
From this one can obviously determine that the AI doomsday scenarios are cooked up by people who treat science fiction like 'woke' people treat The Handmaid's Tale. It comes across as delusional and crazy (because it is) and will not win significant elite or commonplace support because there is no money in it. It may become popular if AI reveals itself to be a bubble, just because no one will want to stick up for AI. But the threat of job loss is much more impactful.
Sorry to be the contrarian lib, but I mean this question in good faith: what happens if the Coalition for AI Safety triumphs in the West, and the resulting regulation slows down AI research in the US, and China beats us to the singularity?
Even if you are very cynical about capitalism/liberal democracy/The Establishment, you must think that CCP overlords are *at least* as bad as, like, Peter Thiel types. And bear in mind that China is not some quasi-functional oligarchy anymore, but a braindead dictatorship under Xi.
I think of an AI revolution (in a non-extinction scenario) as being like the Second Agricultural Revolution - very disruptive and destructive but Pareto efficient, and probably good, in the long (perhaps very long) term.
Better dead than communist, right?
If China beats us to superintelligent AI, if and because no safetyist movement succeeds there, we'll be dead *and* communist.
To be less tongue-in-cheek; an out-of-control paperclip maximizer only needs to be invented *once* to turn all of humanity into paperclips. Stopping Sam Altman from creating a paperclip maximizer in California does humanity no good if another country makes their own paperclip maximizer.
I think there is a narrow path in which the Safety and Acceleration camps compromise, where the latter can freely "accelerate" in exchange for whatever concessions the former demands, like controls on extreme & permanent economic inequality. But this "golden path" seems much less likely than one of these two camps winning and the other just losing - and it isn't obvious to me that the team AI Safety winning in the US, and only the US, will do humanity any good.
If the Coalition for AI Safety triumphs, I expect the US will try to negotiate a treaty with China. China seems amenable:
https://www.scmp.com/opinion/china-opinion/article/3298281/cooperation-ai-safety-must-transcend-geopolitical-interference
Appreciate this reply - this is very interesting stuff, and also very encouraging. I think staving off AI-caused destruction induced by intense geopolitical competition will involve some kind of "digital open skies treaty." I'm not at all a tech person, so I have no idea if/how that could/would work, but nevertheless this is interesting stuff.
> Even if you are very cynical about capitalism/liberal democracy/The Establishment, you must think that CCP overlords are *at least* as bad as, like, Peter Thiel types.
Actually, no, I don't. Not even remotely. It's more like
the collective will of the NYSE < peter thiel < chinese nationalists ~ american nationalists ~ sam altman < Xi ~ pre-brainworms Biden < the chinese new left ~ american new dealers, if any still survive.
Contrary to internet consensus, Xi is the head of the *centrist* faction of the CCP, to the extent that it can cleanly be cut up like that; not my preferred choice, obviously, but he's keeping far worse things at bay.
The issue isn't Xi's ideological tendencies, it's the way his leadership has transformed the Party as an institution. The corruption purges he opened his tenure with have brought about (1) a chilling effect within intra-Party discourse and (2) an extreme case of Tall Poppy Syndrome, which is especially concentrated toward the younger cohorts in the Party. So I'm not concerned about Xi being a mArxIsT or whatever; I'm concerned about the ruling aristocracy of the largest nation on earth being functionally braindead.
In the US, high education/human-capital polarization contributes to a situation where every 4-8 years we rotate between a government that can, possibly, make necessary reforms and a government like we have now, which, well... yeah. China's problem is more severe in the *long term* because of point (2) above: when Xi retires or dies, there are no rising stars to fill his shoes. In the short run, if Xi decides to blow up bilateral ties with the US by pulling the trigger on Taiwan... who is going to stop him? Who in the Party will question him?