8 Comments

I love this bro. This is such a beautiful article man.

Even as a classical utilitarian, one must not ignore the extremely important instrumental value of love and compassion, which massively impacts people's wellbeing and happiness.

Just while we're quoting Nick - there is a 2022 addendum to his thesis, in which he says the following about the passage in question:

"2022 NOTE: The paragraphs to the right have gotten some attention from people who believe the text implies that some lives are intrinsically more important than others. So I’m making an edit today to clarify that (a) This passage was exploring a particular narrow philosophical consideration, in an academic spirit of considering ideas from unusual angles; (b) I do not believe that lives in rich countries are intrinsically more valuable than lives in poor countries; (c) all things considered, I believe that it is generally best for public health donations to prioritize worse-off countries (and I’ve personally focused significant amounts of my career on promoting such donations, e.g. as a founding board member of https://www.givingwhatwecan.org/). If you quote this part of my dissertation, I would appreciate it if you would also include this footnote to avoid unnecessary misunderstandings."

But the point about value lock-in (or, as I sometimes shudder to think of it, "what are our present values, as found in our current training data, teaching our potential AI children about us?") remains compelling with or without Nick's addendum.

Really good points 👍

"Longtermism" is about the value of the parameter valuing present vs future benefit and been in a utility function. It has zero to do with the parameter valuing benefits received by low-income vs high income people. That a few billionaires are confused about this should be of little concern to philosophers.

I've never heard of this philosophy. Is there more reading I could do on it? What would you suggest?

I love when someone explicates my values better than I could do myself. Incidentally, this is why I'm so wary of AI accelerationists. Why devote so much to summoning a deity when it seems likely to deem you unworthy?

Are these normative claims about what longtermism should be? Or descriptive claims about what it currently is?

I have limited knowledge of longtermism, so take this with a grain of salt, but it seems more like the former than the latter.

author

Normative claims about what it should be, combined with a sense that most longtermists would tend to agree, or at least so I think.
