Longtermism versus the poor and vulnerable? Or, how we can rise to the height of our halo.
Why longtermism isn't a sophisticated excuse for not helping the poor
I went to a seminar criticizing longtermism yesterday. My sense of the discussion afterward was that many philosophers there found longtermism disturbing, but couldn’t quite put their finger on what was wrong with it. The attendees seemed unhappy with the standard responses, like discounting the future merely because it is the future, but they also seemed to dislike longtermism as commonly received.
In some areas, the academic response to longtermism has showcased how motivated reasoning can lead to bad conclusions. A few quite dastardly people supported longtermism (no insult intended to the rest), so it became very fashionable to bash it in the Atlantic, on progressive Twitter, and so on. Thus a bunch of philosophers became determined to show that it was wrong from the start. In doing so, because longtermism relies on fairly basic claims, they’ve tangled themselves in knots. Would you believe it, I think I can get us out of the weeds, saving both longtermism and ourselves from the popular image of longtermism.
Here is my diagnosis:
What really scares people about longtermism is sacrificing the interests of the poor and vulnerable alive right now for the sake of the future. It is no accident that the example given in the talk was distributing fewer malaria nets to fund asteroid-risk mitigation. The forces that publicly support longtermism (tech billionaires, for example) are (rightly) seen as opponents of efforts to ameliorate poverty.
But correctly understood, longtermism means that we should be doing far more for the poor and vulnerable right now, not less.
Concern that longtermism might counsel us to abandon the vulnerable is not entirely unwarranted. A few longtermists have said things along these lines. The quote that always gets brought out here is from Nick Beckstead’s dissertation:
“It now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal,”
At its darkest this becomes “Let the poor die; they are at best of no use in advancing towards our cosmic destiny, and at worst a drain on resources.”
But longtermism, properly understood, and pursued by someone with humane values, doesn’t say we should do this.
There are a lot of free variables in what the future of humanity looks like even if we survive into the far future. Tiling the universe with simple beings experiencing pleasure is not attractive. We will have to make many ethical choices about what any future utopia looks like, either through explicit choice or by gradually settling into equilibria. Most of us, reading this and concerned that longtermism might require us to abandon the poor, will want that future society to be built on values like love, justice, freedom, and dignity.
But if we want our future to be built on certain virtues, it would be a mistake to sacrifice those values now for the future. Just as economic growth compounds, so do compassion and cruelty. If you endorse cruel values, like letting people die of malaria because they are not useful for creating your beautiful utopia, you are shaping the values of the future for the worse. I admit this is somewhat conjectural, but if we adopt a policy of “throw the poor into the charnel pit and burn the meat for the gods of progress that the future might be lovelier,” I do not think it will end well.
The danger here is value lock-in. As we develop greater and greater powers, we’ll face choices like:
In what ways should we alter and improve ourselves biologically and/or cybernetically, including in ways that might change both our values and our capacity to embody values? Who should be allowed to make the decisions around such alteration?
Should we adopt technologies that make rebellion and resistance to government impossible?
What kinds of superhumanly intelligent beings should we create, and with what values and capabilities?
If we have the wrong values when we face these choices, our descendants could be stuck with those wrong values forever. That means it is important that we drive society towards humane values in the present. Thus it seems to me that, rightly understood, longtermism holds that it is urgent that we do more for the poor right now, both politically and charitably, and that we extol the virtues of doing so.
A future utopia has three requirements:
1. Technological progress
2. The right values
3. Averting existential risks
While 1 & 3 are much discussed, 2 has sometimes been neglected. This is a mistake. Promoting utopia-compatible values is just as much a longtermist project as preparing defenses against engineered superbugs. What is the value of a halo if you can’t rise to its height? (No strong feelings on Tool generally but Wings for Marie 2 is really good).
Now you might think: “While in principle a society could fail to be a utopia, despite having the capacity, because it had the wrong values, in practice this is unlikely. People have been getting more humane over time, and this is likely to continue.” If you do think this, I hate to be the one to break it to you, but your confidence may be unwarranted. Not everyone agrees with joyous cosmopolitan freedom and fraternal community for all. There has been some progress, but there is no ironclad guarantee it will continue forever. Some quite nasty people haven’t given up, and they may yet win. It’s easy not to realize how many of them there are from inside a social bubble of broad humanitarianism. Asserting, in word and deed, the dignity of all humans right now, by creating successful political and charitable projects that embody and inculcate those values, is important.
Thus, if your ambition is to create a galaxy-spanning empire of joy, wonder, and love, caring for the vulnerable in the present is not of secondary importance but a key part of the project. At present, we are far below the optimal level of investment both in the poor and in existential-risk mitigation, technological advancement, etc. At some point, we might have to choose a tradeoff function, but this is not a present concern. This is not to deny the reality of hard choices in particular circumstances, but it is dangerous to build your stairway to heaven with skulls.
I am quite poor, spend many hours a week on this blog, and make it available for free. Your paid subscription and help getting the word out would be greatly appreciated.
Even as a classical utilitarian, one must not ignore the extremely important instrumental value of love and compassion, which massively impacts people's wellbeing or happiness level.
Just while we're quoting Nick - there is a 2022 addendum to his thesis, in which he says the following about the passage in question:
"2022 NOTE: The paragraphs to the right have gotten some attention from people who believe the text implies that some lives are intrinsically more important than others. So I’m making an edit today to clarify that (a) This passage was exploring a particular narrow philosophical consideration, in an academic spirit of considering ideas from unusual angles; (b) I do not believe that lives in rich countries are intrinsically more valuable than lives in poor countries; (c) all things considered, I believe that it is generally best for public health donations to prioritize worse-off countries (and I’ve personally focused significant amounts of my career on promoting such donations, e.g. as a founding board member of https://www.givingwhatwecan.org/). If you quote this part of my dissertation, I would appreciate it if you would also include this footnote to avoid unnecessary misunderstandings."
But the point about value lock-in, and/or "what are our present values as found in our current training data teaching our potential AI children about us" as I sometimes shudder to think of it, remains compelling with or without Nick's addendum.