Introduction
In the previous piece on this blog, I argued that the FTX crash is a good opportunity for effective altruism (and longtermism, and other related movements) to reevaluate their relationship with power. I offered a deliberately provocative (and tendentious and tenuous!) analogy to another group that, having seen reason as a way to improve the world, eventually split over the question of their relationship to power: the right and left Hegelians.
Now, I want to lay out an argument that there is a space, conceptual and political, for left rationalism. To a certain extent, the idea is already in the water; for example, I moderate this subreddit for people interested in left-wing politics, rationalism, longtermism and related topics:
https://www.reddit.com/r/leftrationalism/
But I want to outline the roles such a movement could play, roles that I think give it an important identity.
The scope of rationalism in this piece
Rationalism here is meant in its broadest possible remit, and includes but is not limited to:
Blogosphere rationalism (including, e.g., LessWrong)
Effective altruism
Longtermism
AI safety research
Without getting into the historiography of blogging, as I understand it, rationalism was originally the idea that you, as an individual, could become very good at thinking through the study of cognitive biases, reasoning skills, etc. The end goal was to try and be right about as many things as possible, with the view that being right would confer the power to achieve things. It was a self-help program with a number of unusual features- e.g., not trying to scam its adherents out of money.
This original goal has, I think, somewhat dissipated. People have gradually been forced to face the truth that domain expertise is really, really important. Much of the research on cognitive biases is, rightly or wrongly, perceived as having taken a hit from the replication crisis. It is thus, for better or worse, now held that studying thinking in the abstract won't have a huge impact on your ability to think, at least once some fairly low-hanging fruit, such as probability and statistics, has been plucked.
I think that nowadays rationalism is defined more by:
A scope of interests (AI, new technology, cognitive psychology & neuroscience, etc.).
A methodology of broad-ranging inquiry, more speculative than academic work, more rigorous than an op-ed.
A culture with institutions and mores.
A segment of a social class with a (complex and often contradictory) material basis.
A specific network of individuals, microcelebrities, etc.
These points- most especially the first two- contain the blueprint of a useful intellectual subculture.
My two cents is that it's probably a good thing that the goal of being individually right through super-rationality dissipated, because individual correctness should not be our object in the process of inquiry. As I have argued in the past, our goal should not be to arrive at the truth ourselves, in an atomistic way, but to help society and institutions arrive at the truth. Patterns of thought that help society reach the truth, but may not help an individual reach the truth, include individuals taking on positions and advocating them to the hilt as part of a dialectical process of collaborative truth-seeking.
What should the left make of a ‘left rationalist’ project?
This mostly depends on what you think of rationalism generally. Some people think it's the literal devil. I've never really understood that. The median rationalist is certainly to the left of the median American on both 'social' and 'economic' issues, so if we don't despise the median American, why should we despise the median rationalist? (And if you do despise the median American and you're trying to do politics in America, lol, good luck with your boutique politics that are going nowhere.)
I think there's a tendency on the left to accept only two kinds of people:
People with basically similar politics to ours.
People who keep their mouths shut about politics.
But I’ve never understood this at all. If the people in category 2 suddenly all chose to speak about their political views, mostly they’d end up shunned as well. What difference does not talking about your political beliefs make? Maybe a little, because it signifies confidence, but it seems to me that excusing people their political beliefs simply because they don’t talk about them in public forums is a form of condescension- treating them as if they were not real political agents- as if they were, in the words of the right, NPCs.
We need to learn to chill out and to tolerate disagreement, even on really big issues. This is not because these issues don't matter, but because the alternative to chilling out is viewing so many people as despicable that, if you applied your own attitudes consistently, you would never be able to do useful political work.
Do you really think that if you cracked open, for example, Scott Alexander's head and looked at the beliefs therein, they'd be more terrifying than what, for example, the median Arizonan believes?
In general, a program of positive engagement with subcultures that much left orthodoxy condemns as politically backward is both my practice and my recommendation. We should all be slower to judge people, more open to listening, and more open to ideas. We should be willing to search for allies in all places.
There have, of late, been some people raising alarm bells about longtermism, EA, etc., especially in light of MacAskill's book and the FTX fraud. A lot of it is grounded, a lot of it is not. Here's a good example of both:
Liam is right that Peter Thiel was previously associated with EA and longtermism; what Liam has left out is that Thiel exited because it was incompatible with his right-wing accelerationism. I don't think inviting Peter Thiel to give a (keynote!) talk at a conference, even in 2013, was a good thing by any means, but a lot of the people in EA, longtermism, etc. are very politically naïve and don't tend to think in these terms, especially back in 2013.
Always keep this in mind when assessing EA in particular: there’s a reason why there are people spending 10 to 50% of their income on buying mosquito nets for poor countries, and it’s not malice.
EA, longtermism, all of these are, in my experience, politically complex. They do not have an easy left-right valence. There is a possible world where EA becomes what its critics have accused it of being- a justification for current injustices using just-so stories about how this will lead to future bliss. There is a possible world where things turn out better than that. At present, it contains the elements of both futures, and which way it goes is up to us, although the exact convolutions and formations can't be foreseen; all we can do is engage with openness.
Is there room for left rationalism? The unique value argument and the presence of a constituency
Why would anyone doubt the viability of left rationalism from the rationalist side? The best argument I can see against it is this. Of course a rationalist can be leftwing, and of course a cross-country skier can be a knitter, but just as no one would rush out to found the society of cross-country skiing knitters, no one would rush out to found the society of leftwing rationalists. The intersection doesn't create any unique value, and if anything it maybe destroys a little- aren't we meant to approach things without preconceptions? Alternatively, insofar as rationalism is a network, a culture, and a class, isn't that really existing thing aligned in other directions?
I will argue in this piece that there is a set of resources, skills, and interests in rationalism that means a leftwing rationalist project has something of unique value to contribute.
Let me start by establishing that there is a will and a constituency for it. Consider the 2020 Slate Star Codex reader survey, and in particular the political spectrum question- a scale of 1 to 10 (1 being most left, 10 being most right) with 7820 responses. The overall pattern of responses was undeniably tilted towards the left: the mode response was 3, and the median was 4. 163 people identified as Marxists. Granted, this is a small percentage- about 2%, probably comparable to the general population- but it is a large enough nucleus to start a movement. 35%+ of respondents identified as either Marxists or social democrats, where a social democrat was defined as:
“Social democratic, for example, Scandinavian countries: heavily-regulated market economy, cradle-to-grave social safety net, socially permissive multiculturalism”
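For concreteness, here is a minimal sketch of the arithmetic behind these figures, using only the numbers quoted above. Note that the survey summary gives the combined share only as 35%+, so the implied social democrat count computed below is a rough lower bound of my own, not a figure reported by the survey:

```python
# Back-of-the-envelope check on the 2020 Slate Star Codex survey figures quoted above.
total_responses = 7820
marxists = 163

print(f"Marxist share: {marxists / total_responses:.1%}")  # ~2.1%

# The combined Marxist + social democrat share is given only as "35%+",
# so this implied social democrat count is a rough lower bound, not survey data.
combined_floor = 0.35
social_democrats_floor = round(combined_floor * total_responses) - marxists
print(f"Social democrat respondents: at least ~{social_democrats_floor}")  # ~2574
```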
The organic intellectual
The first role that I think left rationalism can play we might cheekily call the role of the organic intellectual, because I think it has some kinship with Gramsci's concept- or at least with the part about intellectualism outside the intelligentsia. Left rationalism, as a subculture, would fill an intellectual niche that the left has need of:
There has always been a desperate need for people who sit somewhere in between the newspaper columnist level of intellectual engagement and scope, and the academic level of engagement and scope. We need people to play around with big ideas, take them seriously, and do so with a spirit of openness to the possibility that they are wrong. We need informed, rigorous speculators. Trying to take on an academic's standards while working across a huge scope of multidisciplinary ideas is, of course, impossible, but the attempt is very important. This is what I have always appreciated in good rationalist writing. At least at its best, it seems to hit a golden mean between being too academic to say anything and too unserious to be worth paying attention to.
The desperate need for this kind of writing is more intense on the left than just about anywhere else. The left needs to theorize other ways of living, and to theorize new modes of struggle to create those other ways of living. Many popular contemporary left figures- e.g., Mark Fisher- are popular precisely because they've hit this middle ground of informed speculation. As many academic leftists have noted, academic writing is sometimes poorly suited to the tasks of utopian imagining and strategic speculation, due to disciplinary narrowness and (perfectly reasonable) constraints against 'feral thought'. My hope is that left rationalism could fill this space.
This is the kind of writing I try to do. Writing that tackles big questions, but doesn't pretend to any kind of authority. Writing that starts needful conversations. That makes us aware of blindspots, rather than confidently filling them in. I'm sure I fail sometimes. I can only hope I succeed often enough to make the attempt worth it.
The current ways the left- and the rest of the political spectrum- deal with this gap between op-ed speculation and academic writing are very unfortunate. Here are some approaches:
Adopt the journalistic mode- just say whatever you like in ranting speculation! Of course, I’ve been guilty of this sometimes- we all have- but it sometimes feels like there isn’t even a norm against it in political writing.
Canonize a select group of people who are allowed to have 'big' ideas on the left, some of whom aren't even leftists! (E.g., Lacan, Heidegger, Freud, Nietzsche…) Other people are expected to present their own big ideas behind the veil of being interpretations of one of the canonized. Of course, one sees non-leftwing versions of this sort of thing- eminences who are permitted a wide scope, whereas everyone else is expected to specialise.
Is a DIY intellectualism that strives for rigor possible? I don’t know, but let’s find out together.
Aside: What about the other part of Gramsci's organic intellectual concept- not just outside the intelligentsia, but inside a class? When you look at how rationalism often emerges from the experience of working in some kind of IT or STEM field, I think it kind of fits.
A fork in the California ideology
If we consider the ideology of the tech industry, it's clear that it has a dark side. That dark side is the false promise of 'transformation without transformation'- the creation of technological 'utopia' through purely technical and not political means. Endless networked liberal subjects turned into free-market entrepreneurs selling themselves, property rights extended over an endless digital terrain- that sort of thing. Tech hype without a critique of capitalism, or even a critique of the sins of capitalism as it exists now.
We might think of this kind of dead end as the call for a voluntary transformation of an involuntary world- a call that ignores the power dynamics, political and economic, that run things as they stand. As long as the world is carved up into fiefdoms held in private control, backed by armies and police, there are limits to the changes that instruments like social entrepreneurship or universal awareness can create.
But there is another kind of tech idealism, one that sees technology not as inherently transformative but as potentially transformative- as holding the tools that could make a better world, if only the right political actors existed and would rouse themselves from their slumber. Marx himself was this kind of enthusiast about technology. From the Grundrisse:
They are organs of the human brain, created by the human hand; the power of knowledge, objectified. The development of fixed capital indicates to what degree general social knowledge has become a direct force of production, and to what degree, hence, the conditions of the process of social life itself have come under the control of the general intellect and been transformed in accordance with it
After an initial burst of enthusiasm in the wake of the Arab Spring, the dominant mood on the left changed to tech pessimism. It became clear that the networks afforded by, e.g., Twitter were not, in and of themselves, going to change things. I don't dispute this verdict, but technology is not neutral, nor is it inherently negative. We certainly need to spend less time online and more time face-to-face, but we also need to recognize that technology sets the terrain of struggle. We need to engage with the technologies of our world critically and openly. A left rationalist project seems well-positioned to do so.
Engaging with, and theorizing, the social possibilities of existing technology is perhaps the first task of any would-be left rationalist movement. Later on, I'll discuss a related task- theorizing the technologies of the near future. In both cases the goal is the same: to restore the link between technical and social progress.
Thinking about new technology in the context of political power
We've talked about engaging with existing technologies, but there is also a task for left rationalists in engaging with future technologies.
Right now, there's a lot of research going on into mind reading. Consider this paper on bioRxiv that is currently doing the rounds:
Here we introduce a non-invasive decoder that reconstructs continuous natural language from cortical representations of semantic meaning recorded using functional magnetic resonance imaging (fMRI). Given novel brain recordings, this decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech, and even silent videos, demonstrating that a single language decoder can be applied to a range of semantic tasks. To study how language is represented across the brain, we tested the decoder on different cortical networks, and found that natural language can be separately decoded from multiple cortical networks in each hemisphere.
We need critical inquiry into the implications of this sort of thing. Should we be trying to delay the development of this technology for as long as possible? Demanding a strong legal framework to control it? Does it afford possible opportunities as well as risks (e.g. some sort of sousveillance of the powerful)? Left rationalism would be a good candidate to generate this kind of critical analysis. Granted, rationalism in general has a bias towards over-enthusiasm about new technology. But it’s a good place to start talking.
Economic and political design
Part of the task I would set for left rationalism is, in the words of Erik Olin Wright, "envisioning real utopias"- thinking about other possible forms of economic and political configuration. After Marx, the left has always had a strong streak of contempt toward utopianism. This is, I think, a mistake, or at least we can say that it has been taken too far. Imagining utopias alone won't change anything, and trying to build utopias in miniature is fraught with all sorts of problems. Yet we need utopias: to inspire, to experiment, and to contrast. Moreover, we need what we might call topias- possible arrangements of things that are closer and more accessible in possibility space, and that we can implement where we hold some degree of power.
This kind of utopian imagination has always scintillated through rationalism. It's the kind of subject that works very well in the intermediate form of writing discussed above: a bit broader and a bit less rigorous than academic writing, more rigorous than an op-ed. Now, of course, I hope that blueprints for other ways of being would eventually be more developed, formal, and researched than a blog post, but it is perfectly appropriate to begin here.
The theory and practice of persuasion and organization building
Building on rationalists' strengths in persuasion and organization building would be a great place to begin a left rationalist project.
There is a great need for the left to 1) get better at talking to people, and 2) get better at creating organizations. The left used to be good at both of these, but its skill has declined. To the extent that new people join the left, it is almost more that their situation makes a mockery of any possible alternative than that they are affirmatively persuaded.
I greatly admire the rationalist zeal for proselytism and the rationalist zeal for starting organizations. I don't know whether rationalists have been successful at talking to people (nor am I saying they've been unsuccessful!), but dear god, they try. The very fact that they have grown so fast, seemingly off a base of talking to people, would tend to suggest that they have had some success in speaking with others about their ideas. I admire also how good they are at creating organizations with an ontological inertia greater than one or two people. Persistent institutions- we need these.
Effective political altruism
Another possible role for left rationalism: effective altruist methodologies might be very useful in evaluating leftwing charities. There is a need for analysis and identification of how to give effectively to political causes. Politically oriented charities are notoriously boutique and hard to evaluate for quality. Even if we knew how efficiently particular political charities operated, we'd still have no idea of the best way to use a limited budget to contribute to our political goals efficiently.
There is also a need for a political critique of existing work in effective altruism- an examination of the political assumptions- normative and empirical- which undergird it. These assumptions can create severe risks, as we arguably saw recently in relation to SBF and FTX.
Thinking about AI risk in the context of political power
What about the AI risk arm of rationalism? Artificial intelligence presents a number of risks. The two most commonly discussed risks are:
1. Existential risk. AI may represent a risk to the survival of humanity.
2. Bias risk. AI may reflect, perhaps deliberately, but perhaps unintentionally, dominant power structures and biases.
To this list, I would like to add a third category:
3. Domination risk. The risk that individuals or groups might gain vast political power through the use of AI- e.g., by forming an eternal, unoverthrowable singleton.
Risk 3, despite being a pretty obvious possibility, is undertheorized. A lot of people will downplay its importance: if 3 happens, they argue, then at least we will have survived into the future. There's some truth in this, but as I have noted on this blog previously, what we want in the future is countless flourishing human or quasi-human lives, and ideas about what constitutes "flourishing" and "human" vary wildly. Further, the kind of individuals who are likely to try and set themselves up in a singleton might be predisposed to have nasty ideas. Domination risks, if a really bad egg were in charge, could even be worse than extinction.
Generally speaking, left rationalism could help focus more effort on the problem of domination risks, in addition to existential risks. If I may also (humbly) give my opinion on a matter of widespread discussion: the ongoing debate over whether existential risk or bias risk is the 'real' question of AI safety research is unconstructive. Partisans of both sides have behaved in very distasteful ways. I've read awful takes about how worrying about AI eating us proves you're a sexist white supremacist, and I've read awful takes about how worrying about race and gender bias in AI is a waste of resources. It's not a competition! There's a way of playing this out where these forms of inquiry become mutually supportive, in intellectual and sociological terms.
Overall, I think that a leftwing approach could better connect these three separate but linked problems- and the separate research programs of AI safety and algorithmic justice- than the broadly liberal and/or libertarian philosophies that now fill the area.
Edit: Scott Aaronson's "Reform AI alignment", which I found just after I'd written this section, has a superb discussion of a lot of these issues. It even prophesies our coming:
I’ll leave the formation of a Conservative branch of AI alignment, which reacts against the Reform branch by moving slightly back in the direction of the Orthodox branch, as a problem for the future — to say nothing of Reconstructionist or Marxist branches.
He emphasizes the need to attend to other kinds of threats to humanity, the need to be mindful of the possibility of human bad actors usurping AI, the importance of public outreach, the importance of looking at the alignment of existing systems, the need to not let AI alignment be used as a basis for authoritarianism etc.
I will say I have reservations about some of it; in particular, point 8 of his essay may be true, but it may also be too optimistic. In general, Scott is right to point to other possible ways AI could fuck us up, and right to call for a reform of AI alignment studies, but I think it's very important to emphasize that the Orthodox could be right about any or all of these issues. We just don't know, and need to make best guesses on the basis of a balanced, fox-not-hedgehog assessment of the situation.
Reaching out to an ambiguous social stratum
The left needs to reach everyone better, and we all have different geographies, social strata, etc. that we can reach. A leftwing rationalist project could make the case for leftwing ideas to the social strata and political currents rationalism is currently associated with:
Those skeptical of both liberalism’s hypocrisies and conservatism’s bloodlust.
Tech workers, whose class position is complex. They are workers, but they earn a lot of money, and many have the means to take up a petit-bourgeois consultancy lifestyle should they choose to- or dreams of creating a startup.
People who have been sold the idea that not being conservative involves a cloying kind of identitarianism- sold that idea by both conservatives and liberals- and who are now looking for alternatives.
A softer-hearted, harder-headed left
Finally, spend any time on Twitter and you'll conclude that the left there needs a softer heart and a harder head: more ruthless in intellectual inquiry, kinder in speech. I admire the generally low(ish) level of nastiness among rationalists, and while their heads aren't nearly so hard as they think, at least they're trying.