6 Comments

This is one of the best essays I've ever read

Thank you!

This was an amazingly thought-provoking essay! I had never before considered that the simulation hypothesis might turn out to be good news--that it might mean that we actually live in a world governed by a benevolent deity.

The idea of our world turning out to be a simulation with a benevolent purpose reminds me of [SPOILER ALERT] Robert Charles Wilson's novel "Darwinia".

In it, what seems to be Earth in an alternate version of the 1920s turns out at the end to be a simulation created to preserve all the information of the cosmos against the heat death of the universe.

Its nigh-omnipotent creators--who aren't even the descendants of humanity--essentially function as benevolent deities, to the point of granting all sapient beings eternal bliss in the real world (preceded by a gentle period of purgatorial moral growth) after their death in the simulation.

I actually read Darwinia when I was about 13. I wouldn't say I really understood it at the time, but I think it's had a profound, if sometimes unconscious, influence on my thought. I probably should have hat-tipped it in this essay.

> Thus, if it turns out that we mistreat simulations in the simulations we create, the likelihood that we are in a simulation in which we are going to be mistreated goes up.

This strikes me as sort of a Calvinist mode of thought — the chosen of the best gods would be virtuous; therefore, go forth and lead virtuous lives, not to earn salvation but to prove that it has already been granted to you.

Or, it’s a more morally flexible version of Roko’s Basilisk (I think of that one more as an Ouroboros, because I personally believe it’s over that threshold of self-defeating arguments). We probably have some things in common with our simulators, and since “to have x in common” is a symmetric relation, we infer our simulators probably have values similar to ours. We can’t force our simulators to hold values other than the ones they actually have, but we should still strive to have good values ourselves, because doing so “proves” the values they already hold.

I have no hope of understanding beings so powerful they can simulate us, because I already cannot fathom beings on a much lower tier of power. If I had a budget of around one billion dollars, accountable to nobody, I would make it a personal mission to resolve some kind of humanitarian crisis as rapidly as possible: say, for example, the ongoing humanitarian crisis in Haiti, which broadly just requires someone to pay for large deliveries of resources and labor, and shouldn’t really anger anybody or risk escalating violent conflicts anywhere. I would be motivated in part by altruism, but also, I’d have to admit, by this being a very easy-looking path to widespread respect. I would be (and am) surprised that other billionaires aren’t already in a competition to see who can do it first. The “shooting off phallic objects in space” critique might be a little overdone these days, but there have got to be cheaper projects with a higher and more straightforward EV calculation than that, so why aren’t those projects receiving billions of dollars?

I like the Calvinism analogy. It's been used in relation to evidential decision theory before.

I think becoming a billionaire selects for people who aren't that altruistic, sadly; it would be nice if one proved me wrong.
