The issue of AI safety becomes political, and eventually leftwing and left-liberal coded. True, the left doesn’t ‘want’ to take on AI safety at the moment, but the ‘effective accelerationist’ opposition to AI safety veers heavily to the right, as seen during the uproar over Altman’s firing. Also, the headlong rush of corporations to make money means that an alliance between corporations and safetyists is unlikely to be sustainable in the long run. From this contention, politics arises. Eventually, AI safetyists make a compact with the left. In that compact, three types of concerns about AI are compressed into one list of complaints:
A) AI might kill us all.
B) AI might have various deleterious social effects.
C) AI might enable a kind of unassailable authoritarianism: a potentially eternal suppression of the people by elites.
This last point, I think, functions usefully as a bridge between the very social-human concerns of the left and the very apocalyptic concerns of the AI safetyists.
The big tent they form is never entirely comfortable, and differences over which of these matters most persist, but the unifying idea is slowing or stopping progress in many areas of AI until certain technological and social requirements can be met.
There is a split in Effective Altruism and similar movements in relation to the corporate world, precipitated by despair at the seeming failure of the OpenAI foundation to control the OpenAI corporation, the continuing aftereffects of FTX, and other implications of being involved with billionaires. A ‘left’ and a ‘right’ of EA form based on how critical their relationship with capital is. Politics returns from exile, as it ultimately always must in matters of great concern.
AI safety itself, having moved broadly leftward, splits into two factions: an accommodationist faction, of broadly centrist tendencies, that wants to work closely with existing organizations like OpenAI, and a leftwing-coded faction that wants to regulate them externally. Various increments exist along the spectrum between these positions, with more external regulation and more “pause AI now” type rhetoric associated with being more leftwing.
This is all quite reflective of a common pattern in which politics-adjacent but relatively “““apolitical””” phenomena are eventually pushed to take a side on political questions. I’ve seen many micropolitical cases of this, but for a macropolitical example, see the history of climate change.
What I have written here is a prediction, and certainly not a celebration. My sense is that all of this is a disaster. I myself am a socialist, and I do think EA should, broadly, be a little more cautious and critical around power (FTX, Peter Thiel, etc.). You might think that, to me, a faction of EA turning leftist would seem like a good thing. Certainly, I would rejoice in the influx of smart people with a lot of interesting technical competencies to the left, but a lot of the value EA has provided, from mosquito nets to AI governance, has arisen from its relative independence.
I don’t see a way to avoid the splits described at this point. To be clear, that doesn’t mean I’m confident in my prediction; predictions, especially about the future, are often wrong. I just mean that if my prediction is right, I don’t see how the outcome could be changed.
I'm curious if you have any predictions about open-source LLMs, like LLaMA. I've noticed an outsize demand for LLaMA developers on Upwork.
> AI might enable a kind of unassailable authoritarianism: a potentially eternal suppression of the people by elites.
>
> This last point, I think, functions usefully as a bridge between the very social-human concerns of the left and the very apocalyptic concerns of the AI safetyists.
This point is flawed, insofar as safety-from-AI and safety-from-other-humans-monopolizing-AI-against-you are both x-risks, and their solutions are mutually contradictory. Safety from AI requires extensively restricting who has access to the AI and what commands can be given to it, to avoid anything it could possibly misinterpret, while safety from an AI monopoly requires giving everyone access to AI so that nobody can monopolize it.