Recently, there’s been a very public argument about longtermism: the view that we should focus large resources on humanity’s long-term future. Longtermism often focuses on trying to survive to a point at which we can transform vast quantities of matter and energy into computers and create endless quintillions of simulated beings living lives of pure bliss.
Based on very reasonable consequentialist premises, longtermism seems like an inescapable conclusion. But what follows from this? Well, some have argued that a variety of disturbing implications follow, including, but not limited to:
Political authoritarianism: We need to buckle down and avoid dangers like the creation of uncontrolled AI. The best way to do this, some think, is through political authoritarianism.
Preferring to save the lives of rich people over poor people: Rich people are more economically productive and are therefore more likely to help us reach a good future more quickly, and hence, perhaps, more surely.
At the margin (and a fair bit back from the margin!), placing little weight on “ordinary” charitable causes, from malaria nets to wetland preservation, and on “ordinary” political causes, including the leftwing political causes that I and most readers of this blog support.
To be clear, few longtermists have endorsed these views, but many critics of longtermism have suggested these ideas flow from longtermist premises, or that various adherents of longtermism secretly hold these views. Generally, I am not persuaded that these views are covertly embedded in actual longtermism to any very significant degree, but claims have been made on the basis of varying amounts of evidence, which the reader is free to investigate for themselves.
I will accept, arguendo (but very much only arguendo), that all these steps would make us more likely to reach a future in which we create countless simulated beings.
I will call views in this vicinity dark-enlightenment longtermism or neo-reactionary longtermism (NRLT) or sometimes “dark longtermism” because I think its political flavor is very similar to neo-reaction. The similarity? Both are a series of putative “dark and unfortunate truths”, broadly anti-democratic and anti-humanist, about what our situation requires us to do.
I want to be clear that not all longtermists believe this stuff. Many longtermists have been unfairly tarred with a dark-enlightenment/neo-reactionary brush that doesn’t describe their politics. There have been some very unfair things written on this very topic. Personally, I’m much more interested in building alliances than hunting for heretics, and I don’t attribute nasty positions to people without very good evidence: typically, that person saying, in plain words, “I believe this nasty thing”. What I am writing here is not a hit job on “longtermism” as such, which in many ways I support, but an attempt to argue that longtermism, properly understood, does not imply dark longtermism.
Most of us, whether supporters of dark longtermism or not, don’t just want the future to be filled with blissed-out utility bags. We don’t want, for example, infinite blissed-out replicas of a person reliving their happiest moment again and again. We want the future to be filled with flourishing people living meaningful and complex lives with diverse experiences.
The meaning of every single word there (flourish, person, life, meaning, diverse, complex, experience) is up for debate, and who wins that debate will be determined, at least in part, by what values we instantiate now. The kind of civilization that discards the weakest in a rush to get to the end of history will, after that end, be unlikely to create a utopia using the kind of values that, in my view, it should.
I think the utopia Peter Thiel would build would be different to the utopia that I would build. To be sure, there’s a good chance that Peter Thiel’s utopia would have positive ethical value to me, so long as it has human-like entities and those human-like entities aren’t in abject misery. But that ethical value would be less than the ethical value of my preferred utopia, and less than the expected value of a utopia reached by a broadly democratic and humanistic process where everyone matters and we don’t stop buying malaria nets and fighting poverty and oppression.
But if we embrace dark longtermism it seems to me that, in expectation, we would be pushing the world towards the Peter Thiel utopia and away from my preferred utopia. Hence it’s not at all clear to me that it’s in my ethical interests to embrace dark longtermism. Granted, the available options aren’t getting exactly what I want or getting exactly what Peter Thiel wants, but I think I’m more likely to get more of what I want, in expectation, by not joining the dark longtermists.
All this talk about different utopias might seem vague. What possible differences could there be about how a post-scarcity utopia should be structured, at the level of fundamental value differences? Many. For example, some people would hold that human life has more value when it is integrated into hierarchies of status and power, even if those hierarchies no longer serve a purpose beyond being an end in themselves. Many are those who would want to restrict sex and relationships between consenting adults. Many are those who would want to set up their religion as compulsory for all eternity. Many are those who might want to inject non-voluntary suffering to add what they see as meaning [*], or even artificially create a “struggle for existence”.
We probably can’t anticipate most of the value problems of the future. I do know, though, for sure, that what I want is a warm, democratic[**] humanism making the calls on those value problems, rather than a dictator, or an oligarchy of authoritarians.
Footnotes:
[*] Tbh, I think that under certain very specific circumstances non-voluntary suffering might be warranted, but that’s a special case I won’t talk about here; in this post I’m talking about the insertion of non-voluntary suffering at points I find inappropriate.
The very specific circumstances where non-voluntary suffering might be appropriate are to do with the creation of as many kinds of being as possible. See this essay, which also might cast some light on the live value questions that remain in utopia:
[**] To be clear, this is not necessarily because I place any intrinsic value on democracy (although maybe I do; I go back and forth on this), but because democracy is the best feasible option from the point of view of implementing my ethical preferences.
Are you sure dark longtermism exists for more than a trivial number of people?
I've never heard anything like these views expressed, and my sense is that people like Thiel, or people with religious leanings, would reject longtermism outright.
The closest I've heard to what you're describing is Curtis Yarvin's pitch about why effective altruism is a waste of time - but I interpreted that as him saying it's a waste of time, not as him trying to promote his own flavour of effective altruism.
Instead of Nick Land's "Dark Longtermism", Yarvin's "Grey Longtermism" seems more plausible, with the three implications modified into: multi-polar monarchies (AI as a non-issue, since organizational dominance suffers from the same problem; decentralization good), class-based planning ("Salus Populi Suprema Lex"), and near-future-first initiatives (establish a charity chain-of-causes to assess effectiveness).
The major issue is that "Left Longtermism" and "Dark Longtermism" both suffer from the lack of a unified sense of best-case, worst-case, and average-case scenarios, and that there should be mediating factors that tip the scales one way or the other. What "bipartisan" bridges can be built then?