The war for heaven: or why even in utopia, values do not converge and arms race dynamics around AI can be rational
Essentially, I want to argue in this piece that, contra Scott, even in an Arcadian paradise of practically infinite resources, there’s every chance that value differences matter. Heck, even if we- contra the known laws of physics- acquire literally infinite resources, value differences will still matter. There is no escaping ethics or political philosophy. In a particularly bleak mode, one could even say the singularity won’t end the culture wars.
We’re going to be talking about some ideas that might seem pretty science-fictiony in this piece. For those who think the singularity is too far out an idea to talk about, it might seem all a bit moot. But for reasons I’ll talk about in more detail in the end, I think the concepts I outline give us a playground to think about important ideas. Specifically, I think they open up a utopian vista in political theory- ultra-ideal political theory. The theory of politics under conditions in which material scarcity- or at least current forms of scarcity- do not exist any more.
A singleton is a hypothetical entity that controls all human affairs. Others have no power, or only negligible power, to resist it. A singleton is often thought to be a probable outcome of an intelligence explosion. The first God-like superintelligence, so goes the argument, will be able to seize control of everything- either on its own behalf, or on behalf of a human controller. At that point, all other human powers will be, at best, slight considerations. Of course, the real strategic landscape around such matters is notoriously hard to know, and predictions of the future reasoned from first principles have a poor track record. Nevertheless, in this piece, we’ll be entertaining the idea that a singleton is a wholly possible outcome of an intelligence explosion in order to ask “does it matter who the Singleton is?”.
At various places in this piece I’ll talk about risk tradeoffs etc. that involve me becoming the Singleton. This is purely to illustrate ethical tradeoffs under uncertainty. I worry that even putting this in print can give the wrong impression- viz., that I am some kind of megalomaniac. To be clear, I have no plans to make a grab for omnipotence, and zero capacity to do so, though I would take infinite power if it were put in front of me, and I find people who say they wouldn’t silly. After all, if I don’t get it, imagine who else might!
Scott thinks it doesn’t matter who the singleton is, so long as it’s a human, and not a really terrible human- a sadist. This seems wrong to me, but I’ll let Scott present his case first. This is from a footnote in his recent piece:
People come up with these crazy stories about “winning races” that don’t matter without a technological singularity - then act like any of their current issues will still matter after a technological singularity. Sorry, no, it will be weirder than that.
Whoever ends up in control of the post-singularity world will find that there’s too much surplus for dividing-the-surplus problems to feel compelling anymore. As long as they’re not actively a sadist who wants to hurt people, they can just let people enjoy the technological utopia they’ve created, and implement a few basic rules like “if someone tries to punch someone else, the laws of physics will change so that your hand phases through their body”.
And yeah, that “they’re not actively a sadist” clause is doing a lot of work. I want whoever rules the post-singularity future to have enough decency to avoid ruining it, and to take the Jupiter-sized brain’s advice when it has some. I think any of Xi, Biden, or Zuckerberg meet this low bar. There are some ideologues and terrible people who don’t, but they seem far away from the cutting edge of AI.
This isn’t to say the future won’t have controversial political issues. Should you be allowed to wirehead yourself so thoroughly that you never want to stop? In what situations should people be allowed to have children? (surely not never, but also surely not creating a shockwave of trillions of children spreading at near-light-speed across the galaxy). Who gets the closest star systems? (there will be enough star systems to go around, but I assume the ones closer to Earth will be higher status) What kind of sims can you voluntarily consent to participate in? I’m okay with these decisions being decided by the usual decision-making methods of the National People’s Congress, the US constitution, or Meta’s corporate charter. At the very least, I don’t think switching from one of these to another is a big enough deal that it should trade off against the chance we survive at all.
I don’t agree. To be clear, I don’t come to this position lightly. I think it’s profoundly tragic that Scott is wrong on this. It is tragic both because of what this tells us about race dynamics in the creation of AI and because of the implications of profound values divergence beyond that.
Partly I disagree with Scott because I fear psychopaths may be more common and also closer to power than Scott admits. Mostly though, I disagree because I think that, even among non-psychopaths and after the abolition of scarcity, value differences matter. Indeed, Scott gestures to this himself, only to dismiss the differences as insufficiently important for reasons that aren’t wholly clear.
If Scott were right about value differences not mattering that much, it would be easier to avoid an AI race, and I could save almost all my concern for avoiding human extinction. Unfortunately, he isn’t. That expands the range of circumstances in which it’s rational to seek to create super-intelligent AI faster than your opponents, even if it risks human extinction. It leaves me wondering what I can do to make it more likely that someone with similar terminal values to me gets control.
The problem of Utopia, and my utopia in very general terms
Let me start by outlining the moral problem of utopia in broad terms. You have a certain amount of matter and energy. You need to:
1. Decide which people will be created, or set up a process by which others will decide which people will be created.
2. Decide whether there will be any restrictions on actions [e.g. if they are computationally expensive, or to take Scott’s example, whether people should be allowed to wirehead themselves to feel nothing but constant bliss.]
3. Decide what direction you are going to try to ‘nudge’ people’s lives in, if any, and what means you consider legitimate to do that.
If you put me in charge of things, utopia would be kind of like the Culture novels (which we will talk about later), but with a bit more emphasis on self-improvement. People could do what they liked, at least mostly, but they’d be nudged away from passive activities (and probably forbidden from wireheading) and towards the things I think are good in life- friendship, virtue, learning, pleasure, creativity, and so on. I’d try to create a wide range of interesting people.
Kinds of ethical divergence
Hedonia
I have a friend, Kieran. He’s a little strange, but pretty normal- much less weird than me. Kieran is a hedonic utilitarian. He believes that goodness is equal to total pleasure minus total pain. If Kieran controlled the Singleton, he would tile the universe with hedonium- endless blissed-out humans if we’re lucky; if we’re unlucky, endless blissed-out clams. I’ve debated this at length with him, and I don’t think he’s budging on it.
Suppose the two options were someone like me or someone like Kieran being in charge of the singularity. From both our perspectives, it matters immensely who’s in charge. We’d both prefer each other’s world to nothing, of course, but I’d say from my perspective Kieran’s option is <1% as good as mine. From Kieran’s perspective, my option is <1% as good as his. I adore Kieran, but unfortunately, if it somehow came down to it, this difference between us would be a difference worth fighting over.
Is it possible that Kieran would change his mind about hedonic utilitarianism (or I might change my mind about eudaimonic utilitarianism) if granted unlimited knowledge and capacity for reasoning plus other cognitive goodies? Perhaps, but it seems to me not guaranteed. Even if we did change our minds, it’s not obvious that it would be in the direction of convergence.
Ironically, even many of the people sharing and talking about Scott’s article are, in fact, hedonic utilitarians. I think some hedonic utilitarians haven’t fully thought through what their philosophy implies. I haven’t watched much of Rick and Morty, but in the one episode I saw, there’s a “Rick” (apparently there are multiple) who is made to relive his most blissful moment again and again. If we’re very, very lucky, that’s what a hedonic utilitarian universe might look like- that, tiled everywhere. If we’re unlucky, it’s happy shrimp reliving their happiest moment again and again.
Maybe all hedonic utilitarians would give up their beliefs if they fully grokked the implications, but I doubt it.
Conservatives: Disgust
“Okay”, you object, “but hedonic utilitarians are a bit odd. It’s unlikely one of them would be the singleton”.
There’s a book series that’s been very influential on the thinking of midwits like myself about utopia, the Culture series by Iain M. Banks. Utopia, apparently, consists of exploration, playing games, sex parties, trying on different bodies, bizarre art installations, constructed worlds, odd simulations of absurd events, strange social milieus, unusual drugs, studying, sports beyond your imagination, little social jousts and competitions, trying to complete bizarre, difficult, self-imposed quests and the achievements at the end of them, falling in love, falling out of love, and so on.
Many conservatives, if you describe a world like the Culture, say “well, that sounds nice, but it’s implausible”. A minority, though- a growing minority- say it would be disgusting and inherently wrong. Too much weird gender and sex stuff going on, not enough monarchy and social hierarchy. Some of that feeling might rest on factual mistakes or other factors that access to superintelligence would remove, but I see no guarantee all of it does.
I do not buy that these conservatives would set up the same post-singularity world as me, or anything like it. I do not buy that they would permit the kinds of personal autonomy I would. From the point of view of my conception of the good and human welfare, that could matter greatly.
Consider that there are guys on my Twitter feed- right-wing Twitter celebrities- who think that morality went wrong when it went Abrahamic rather than classical (basically, they mean Nietzschean)- not enough struggle for mastery in my utopia, not enough cleansing conflict. Or alternatively, let’s take Matt Walsh, who by conservative intellectual standards is neither moderate nor hardline. His whole life is spent fighting trans people and others. If an oracle told Matt Walsh that queer people weren’t going to lead to social collapse, I don’t think for a second he’d stop fighting them. His position is non-instrumental, something like my opposition to Kieran’s hedonium or wireheading.
Is Matt Walsh purely this way due to factual beliefs, self-contradictions in ethical reasoning etc. that awesome computational power would fix? Maybe, but I’m not willing to bet on it. I think he has a deep ethical-aesthetic revulsion to certain things, from my point of view, because he’s a weak little man, easily disgusted by difference.
If you want to consider a more honorable example, take Thriving Quetzal- a conservative on my feed. TQ is a fine fellow and not a vile cur, and we’re mutuals on Twitter. Nonetheless, we disagree profoundly on issues of values. He certainly wouldn’t, say, kill all transgender people or people outside a preferred racial group, and he’s profoundly concerned about the nihilistic venom with which many conservatives talk about trans people and openly cheer on trans suicide. However, it’s far from obvious that, as Scott puts it, in TQ’s utopia:
You will be able to change your race, age, gender, species, and state of matter at will.
To be fair, I haven’t put that exact question to him. However, one of TQ’s biggest concerns with modern society is its ethical cornerstone: that informed consent, and no direct harm to anyone who didn’t consent, justifies anything. I think he once gave the example of cutting off an arm because you preferred it aesthetically as something that society should not allow- even for totally informed adults. In fairness, I agree that consent between informed adults doesn’t justify everything- but I suspect I’d draw the lines very differently to TQ. Richard Hanania- who’s pretty moderate by conservative intellectual standards- finds gender ambiguity disgusting. Now, I’ll give Richard credit that I don’t think he’d force us all to enact strict gender roles forever, but I think a lot of conservatives wouldn’t be so liberal. People with values like this are not a negligible portion of the total, and their numbers only increase outside WEIRD societies.
Suppose you took it to extremes. Forced heterosexuality, forced gender roles. Forced simulated homesteading forever. ‘Natural hierarchy’ and the ‘strong’ ruling over the ‘weak’ forevermore. Perhaps even pain and suffering among the weak to confirm their ‘natural’ inferiority. Now an infinity of trad homesteading and gender roles forever and ever is probably better than nothing. In fact, from my point of view, it’s probably even better than Kieran’s utopia. However I’d say it’s, at most, 25% as valuable to me as my utopia. If you offered me a choice of lotteries:
(100% chance Zero HP Lovecraft becomes the Singleton)
Or an alternative:
(50% chance you become the singleton, 50% chance everyone dies)
I’d pick the second lottery (in fact, given it’s Zero HP Lovecraft, everyone dying might actually be preferable).
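If you want the arithmetic behind that choice spelled out, here is a minimal sketch in Python. The value fractions are just my rough guesses from above (my utopia normalised to 1, extinction to 0, the trad utopia at most 0.25); nothing here is measured:

```python
# A minimal sketch of the lottery comparison above. The numbers are the rough
# guesses from the text, not measurements: my utopia is normalised to 1.0,
# extinction to 0.0, and a hardline trad utopia to (at most) 0.25.
value_my_utopia = 1.0
value_extinction = 0.0
value_trad_utopia = 0.25  # "at most 25% as valuable to me as my utopia"

# Lottery 1: the trad candidate becomes the Singleton with certainty.
ev_lottery_1 = value_trad_utopia

# Lottery 2: 50% chance I become the singleton, 50% chance everyone dies.
ev_lottery_2 = 0.5 * value_my_utopia + 0.5 * value_extinction

print(ev_lottery_1, ev_lottery_2)  # 0.25 vs 0.5: the gamble wins
```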
E/ACC, decadence, etc.
Another way of being conservative is a little bit odd. I associate it with a certain kind of accelerationism. It’s a kind of objection to the decadence of humans or quasi-humans in a post-singularity world. In the utopia I describe, people live lives, but those lives are largely irrelevant to the broader fate of the universe on a macro-scale. No one works. Everything is, in a sense, leisure- even if it is hard effort aimed at self-improvement. We are pearls encased in a protective shell of superintelligence. The struggle for existence is over- at least for humans. To put it in Marxist terms, we have left the realm of necessity. But, to put it in Darwinian and game-theoretic terms, we aren’t selected out because we’ve permanently secured our existence using our first-mover advantage.
There are people who see this end of humanity as intolerable. Now I hope that such people will be fairly rare, but this is far from guaranteed.
Hell
But you don’t have to be a conservative to have profound moral disagreements with me. My mum, whom, along with my dad, I love more than anyone else in the world, says that she hopes there’s an afterlife because many people deserve to suffer in hell. She thinks it would be appropriate for a not insignificant minority of people- rapists, murderers, pedophiles, people who are just plain nasty and the like- to be tortured for all eternity. I personally think it would be inappropriate to use Godlike power to punish anyone (exception: obligatory BDSM joke), but well over 50% of the population support punishment for punishment’s sake. While a computer granting them omniscience might change that, it also might not.
Now it could be that, given the scope of the universe, the number of punished individuals from the time of ancient earth would be relatively small, but just like the Christian authors who worried that the fate of the damned would trouble the blissful, I worry that one cannot be a completely fulfilled person in a universe in which you know some people are being tortured for their crimes by your society. Even if you feel subjectively happy, something is missing.
Ecological suffering
There’s this philosopher at my university, Christopher Lean, who I’ve briefly met. Christopher Lean is, I’m sure, although we haven’t talked about it in detail, a center leftist or far leftist, like 90% of professional philosophers. I could be wrong on some of these details, but from what I can tell, Christopher is very opposed to predation abolition- the elimination of predation from the world. He believes there is a natural value to the functioning of ecosystems.
I’ve suggested to him my compromise of creating remote-controlled meat drones for predators to eat instead, orchestrating ‘predation’ in the ecosystem, but he’s unconvinced. Maybe that’s an empirical issue for him- I don’t know. It’s definitely not just an empirical issue for some people though. There are plenty of normal people who believe that a web of life- including predation and the suffering that entails- is good, and worth any cost of that suffering.
We’re talking about a lot of suffering here. The ethical significance of this difference is enormous. Some people I respect greatly would suggest it might outdo all the good that would be experienced by the flourishing humans.
Diversity
What about the question of diversity? I don’t mean in the sense of diversity training and officers and the like- although I suppose that is related. Rather, consider two universes. In Universe A, there are 5 quintillion beings living rich, happy and fulfilled lives, but many are remarkably similar to each other. In Universe B there are 4 quintillion beings living rich, happy and fulfilled lives, but they are quite varied and rich, and there aren’t countless people living nearly identical lives. Which universe is better? Certainly, at the extreme, 10 quintillion people all living exactly the same blissful life seems disturbing- like Rick repeating his greatest moment over and over again. But how do we trade off similarity and richness? In setting up a universe of countless simulated flourishing lives, this is going to be a serious moral tradeoff. Though vast, the resources of our light cone are finite after all- thus it turns out that, even in Arcadia, scarcity stalks us and life is finite.
It’s very easy to imagine even two people with superficially very similar values finding that each other’s utopias have only a fraction of the value of their own, due to different tradeoffs between quantity of life and diversity of life.
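To see how quickly this tradeoff can drive two broadly similar people apart, here is a toy model. Everything in it- the discount on near-duplicate lives, the duplicate fractions- is invented purely for illustration:

```python
# Toy model: the value of a universe is the number of lives, with near-duplicate
# lives discounted by a factor d in [0, 1]. All parameters are invented.
def universe_value(n_lives, frac_duplicates, duplicate_discount):
    unique = n_lives * (1 - frac_duplicates)
    duplicates = n_lives * frac_duplicates
    return unique + duplicates * duplicate_discount

# Universe A: 5 quintillion lives, most of them near-identical.
# Universe B: 4 quintillion lives, almost all distinct.
for name, d in [("mild discounter", 0.9), ("steep discounter", 0.2)]:
    a = universe_value(5e18, frac_duplicates=0.80, duplicate_discount=d)
    b = universe_value(4e18, frac_duplicates=0.05, duplicate_discount=d)
    print(name, "prefers Universe", "A" if a > b else "B")
# The mild discounter prefers A, the steep discounter prefers B: superficially
# similar values, substantially different utopias.
```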
Something I haven’t thought of yet
I also wouldn’t underestimate the capacity for controversies we’ve not imagined, or have scarcely imagined, to crop up. When I was talking about this with a friend, Jamie Roberts, he objected that it’s just all too hard to predict- he gave the analogy of people in the 1950s making predictions about what AI would be like- all of which would likely be wrong. I agree with the point that all this is hard to predict, but I think the difficulty of envisaging the issues that would come up in utopia should, if anything, incline us more toward my view. Unanticipated clashes seem to me prima facie more likely than unanticipated convergences.
What about people actually close to power?
Okay, but what about the people Scott lists as possible God-kings- Zuckerberg, Xi Jinping, Biden?
Scott writes:
This isn’t to say the future won’t have controversial political issues. Should you be allowed to wirehead yourself so thoroughly that you never want to stop? In what situations should people be allowed to have children? (surely not never, but also surely not creating a shockwave of trillions of children spreading at near-light-speed across the galaxy). Who gets the closest star systems? (there will be enough star systems to go around, but I assume the ones closer to Earth will be higher status) What kind of sims can you voluntarily consent to participate in? I’m okay with these decisions being decided by the usual decision-making methods of the National People’s Congress, the US constitution, or Meta’s corporate charter. At the very least, I don’t think switching from one of these to another is a big enough deal that it should trade off against the chance we survive at all.
Certainly, I would prefer Xi Jinping decided the fate of our light cone to eternal oblivion. However, it’s far from obvious to me that this guy, a Chinese nationalist from a vastly different political and cultural milieu, doesn’t have deeply different aesthetic and ethical values to me, and these values are different enough to substantially reduce the value of the post-scarcity utopia he might create, relative to mine (from my point of view). Certainly, if you offered me the following tradeoff:
“A lottery with a 100% chance Xi Jinping becomes the Singleton OR a lottery with a 95% chance in which you become the singleton and a 5% chance of human extinction”.
I would take the second option. The same is true of Biden and Zuckerberg. I’d almost certainly push it as far down as 70%, and probably farther than that.
Take Scott, with whom I am mostly in the same ethical ballpark, at least (very important caveat here) on fundamental questions of axiology. I’d probably accept a five percent chance gamble of destroying humanity to wrest control from him. I could argue pretty easily that I should be willing to accept even a 50% chance, but I’m a bit of a softie.
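The general shape of all these gambles is the same: if I value the other candidate’s utopia at a fraction v of my own (with extinction at 0), I prefer the gamble whenever my probability of getting control exceeds v, so the extinction risk I’ll tolerate is 1 - v. A crude sketch follows; the value fractions are simply read off from the gambles I’ve described (they are guesses, not considered estimates):

```python
# If a rival's utopia is worth a fraction v of mine (and extinction is worth 0),
# then "probability p I get control, probability 1 - p extinction" beats handing
# them certain control whenever p > v. So the tolerable extinction risk is 1 - v.
# The v values below are just the rough fractions implied by the gambles above.
def max_tolerable_extinction_risk(v):
    return 1.0 - v

candidates = [
    ("hardline trad singleton", 0.25),
    ("Xi / Biden / Zuckerberg (my rough guess)", 0.70),
    ("Scott (my rough guess)", 0.95),
]
for who, v in candidates:
    print(f"{who}: tolerate up to {max_tolerable_extinction_risk(v):.0%} extinction risk")
```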
I’ve sometimes thought of the question “what would you do with unlimited power” as the ethics exam at the end of time. I can’t rule out that with infinite information, reasoning capacity, imaginative capacity etc., humans, or a sizeable majority thereof would converge on a similar value system. However, it seems just as likely that under ideal conditions we’d fly off, scattershot vectors in the space of possible moralities.
What does it matter? Does it matter at all if these science-fiction futures never eventuate? Is this all twaffle and fiffle-foff?
It would be easy to say this is all a load of rubbish. Several loads, even. If you don’t believe in the possibility of a singularity, maybe you thought reading this article was a waste of time. Maybe you don’t have any truck with this sci-fi “AGI” nonsense, and would like your ten minutes back.
But even if a singularity never happens, I don’t think I’ve wasted your ten minutes. I think it’s very valuable to ask the question: if each person were made omnipotent, how would they act with that power?
There’s this concept in political philosophy called ideal theory- approaching political philosophy by theorizing the perfect state- a practice as old as Plato if not older. I suppose what I’m suggesting could be seen as an ultra-ideal theory or utopian theory. How to build heaven if you face no political or material constraints? If you take one thing from this blog post, let it be the capacity to think of politics in ultra-ideal terms. I do think thinking in these terms will tend to make a lot of people more politically compassionate, though the rub is, what political compassion looks like can well vary from person to person.
As an aside, it seems to me that political, ethical and aesthetic normativity converge in interesting ways in utopia.
Spend five minutes (time it on a clock) thinking about the following question. Suppose you’re a nigh-omnipotent singleton: how exactly would you order things? Would you abolish animal suffering? Would you allow humans to do whatever they liked, or would you place restrictions on wireheading (one of the examples of a possible policy debate Scott gives)? Would you try to encourage diversity, on the theory that countless people living essentially the same good life have less value than countless varied people? What means would be permissible?
Here’s an interesting ethical question for you. Nikolai Fyodorovich Fyodorov imagined a great project- trying to recreate every person who had ever lived. Now suppose that, due to limited energy, you had two options:
1. Create 200 billion new people living happy, joyous lives.
2. Create good approximations of the 100 billion people who have lived and are now dead.
Which would you choose?
This is almost a plausible tradeoff. Trying to approximate specific people would likely be computationally expensive in a way that trying to create arbitrarily happy people would not be. Maybe much more expensive. Should we try to recreate the dead?
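Here is the toy version of that arithmetic. Both parameters- how much more compute a faithful reconstruction costs than a fresh happy person, and how much more (if at all) you value a recreated person- are made up for illustration:

```python
# Toy arithmetic for Fyodorov's project. "cost_multiplier" (extra compute to
# reconstruct a specific dead person vs. creating a new happy person) and
# "resurrection_premium" (extra value, if any, you place on a recreated person)
# are both made-up parameters for illustration only.
def better_use_of_budget(budget, cost_multiplier, resurrection_premium):
    new_people = budget                      # budget normalised: 1 unit buys 1 new person
    resurrected = budget / cost_multiplier   # fewer, because each costs more
    value_new = new_people * 1.0
    value_resurrected = resurrected * resurrection_premium
    return "recreate the dead" if value_resurrected > value_new else "create new people"

budget = 200e9  # enough for 200 billion new people, or 100 billion reconstructions at 2x cost
print(better_use_of_budget(budget, cost_multiplier=2.0, resurrection_premium=1.5))
print(better_use_of_budget(budget, cost_multiplier=2.0, resurrection_premium=3.0))
# With a 2x cost multiplier, resurrection only wins if you value a recreated
# person more than twice as much as a new one.
```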
If you find the idea of thinking about what utopia would be like too demanding, ask yourself what you personally would do with your existence.
It’s Hume’s world, we’re just living in it
A lot of conflicts arise because of scarcity, and a lot of conflicts arise because of mistakes in reasoning or knowledge of facts. Ultimately though, humans are ‘free’ (not really) to want whatever they like and have, for utterly mysterious reasons, developed fundamentally different values.
Matters of ultimate value, if you are not a moral realist, are essentially matters of taste. It’s said that there can be no arguments in matters of taste- de gustibus non est disputandum- and despite my somewhat joking counterclaim that in gustibus tantum disputatio eorum est (only in matters of taste can there be disputation), I tend to agree.
However, while we can’t argue about matters of taste, we can fight about them. Arguing about what is can’t solve our disagreements about what ought if our desires are inherently opposed, so we either negotiate our differences or fight them out. This is the Humean condition. As Quine said, the Humean predicament is the human predicament.
Conclusions
It seems to me that if you think value gaps of palpable size are possible, then, tragically, you might be better off racing, depending on how much more likely you think a race makes human extinction.
From my point of view, in a perfect utopia we’d just make me the singleton. From your point of view- from the point of view of instantiating your ethical values- we should just make you the singleton. In a somewhat more perfect world than this one, we’d get together and negotiate a split- maybe every person gets one eight-billionth of the available energy and matter in our light cone, subject to some broad humanitarian restrictions. In the real world though, with trust issues and the like, racing can be rational.
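As a crude sketch of why racing can come out ahead under mutual distrust: suppose racing raises the chance of extinction but also the chance that your values end up in control, and suppose you value the rival’s utopia at some fraction of your own. Every number below is invented; only the structure of the comparison matters:

```python
# Crude sketch of the race logic. All probabilities and value fractions are
# invented for illustration; only the structure of the comparison matters.
def expected_value(p_extinction, p_my_values_win, rival_value_fraction):
    p_rival_wins = 1.0 - p_extinction - p_my_values_win
    return p_my_values_win * 1.0 + p_rival_wins * rival_value_fraction

# Not racing: low extinction risk, but my values almost certainly don't prevail.
ev_no_race = expected_value(p_extinction=0.02, p_my_values_win=0.05, rival_value_fraction=0.3)
# Racing: higher extinction risk, but a real shot at control.
ev_race = expected_value(p_extinction=0.15, p_my_values_win=0.45, rival_value_fraction=0.3)

print(ev_no_race, ev_race)  # ~0.33 vs ~0.57: with a big value gap, racing "wins"
# If the rival's utopia were worth 0.95 of mine instead, not racing would win -
# roughly Scott's hoped-for situation; the negotiated split above is the other way out.
```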
Philosophy Bear, why on earth would you write this? Even if it’s true, why not keep your mouth shut and join the effort to discourage race dynamics? Well, I don’t think Scott’s argument will hold up, unfortunately, and I’d rather we grappled with this stuff soberly and publicly than leave the importance of these things in the shadows.
I think the likelihood of getting one of the really bad endings (in this entirely hypothetical future which I am well aware might never happen) goes up if it’s shadowy dudes in smoky rooms having these conversations.
But more than that, I think- and maybe I’m optimistic- that a conversation about differences and commonalities in what utopia looks like favors, in the long run, similar terminal values to mine. Maybe I’m just some kind of megalomaniac, but I think this in part because a lot of my thinking about ethics and politics has been shaped by starting with the question “How should we design heaven” and thinking back from there. In particular, I think an inclination that is both Eudaimonistic and broadly utilitarian is the kind of ethics a lot of people would form if they spent more time thinking about making heaven a place on earth. From this point of view, I think a lot of people, not all people but a lot, who hold views like deontic libertarianism, pure hedonic utilitarianism and social conservatism as what they think are terminal values, would relax those stances if they drew a map of the world with utopia in it.
This might seem to contradict my earlier claim that people differ on these matters of terminal values. If thinking about their values from the point of view of utopia would change their values, how terminal are they? But the contradiction is only apparent because there are two groups of people:
1. True believers who would hold onto these ideas even if they started their thinking with utopia, and
2. A bunch of people who have more loosely adopted these ideas because of our far from utopian conditions down here.
But maybe I’m wrong, I dunno. I hope this whole piece isn’t a mistake, but as I alluded to earlier, while it’s far too complex to think through specific scenarios about how this could play out, my hunch is that democratic discussion and accountability make some of the very worst outcomes less likely, whereas the cloak-and-dagger model of AI security makes them more likely.
Tedious point: the title of this is ambiguous; I initially read “race” as meaning ethnicity, despite having read the Scott piece. Perhaps amend to "arms race dynamics"?
Otherwise, agree but think you understate the case. "However, it’s far from obvious to me that this guy, a Chinese nationalist from a vastly different political and cultural milieu, doesn’t have deeply different aesthetic and ethical values to me, and these values are different enough to substantially reduce the value of the post-scarcity utopia he might create, relative to mine (from my point of view)."
2 points. One, consider what Lord Acton said- "Power tends to corrupt and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority..." Our world-dominating AI is ex hypothesi going to have power thousands of times more absolute than anyone has yet managed.
Two, cultural and philosophical differences don't seem to matter. Consider the Wannsee Conference of 1942, which decided on the Final Solution to the Jewish Problem. Attendees were exclusively from the same developed European, Protestant Christian, post-Enlightenment culture that I am from (I am British) and that the liberal US is. This did not help, and did not give us a shared bedrock of values about whose application we differ at the margins. (Incidentally, the fact of it being a conference is perhaps the most chilling thing about it: a corporate decision made within an ostensibly rational framework, not a lone maniac, and one made in a banal context we are all familiar with, with presumably presentations with slides and coffee breaks and paper and pencils set out for everyone.) People are not fundamentally decent guys with a meaty core shared with the rest of humanity- not even people from identical cultural backgrounds. Putin has not yet caused your violent death because you are not in Ukraine- not because he is at heart a lovely chap. And he is not an edge case or outlier whom it is unfair of me to pray in aid. He is just a bad man with power.
Much AI risk thinking is science fiction. For alignment to be on the cards, you need a setup like 2001. There are, I think, 5 or 6 NASA guys on the ship, on a mission to save humanity or discover the secret of life or whatever, and 3 or 4 of them are asleep. You can expect them to be hugely aligned with each other and therefore all plausible candidates for HAL to be aligned with. Step out of the spaceship and it all falls apart.
Thanks, this is a good post. A few thoughts:
1. One point I was trying to make was that post-singularity problems will be weird ones, like hedonic utilitarianism vs. something else, which won't cleave along normal political lines. When people talk about winning a race for the Singularity, they mean that they think Biden would be a better God-Emperor than Xi. But even though I like Biden better in our current situation, I don't know that he's any more qualified to make hedonic-utilitarianism-related choices. Possibly it's better if he respects democracy and we let the American people vote - that probably maintains some level of post-singularity freedom which we can use to opt out of the hedonic utilitarianism, even if that wins.
2. I hope if a conservative won the singularity and banned gender transition forever, they would at least have the decency to cure all gender dysphoria. That seems better than our current world, and neither better nor worse to me than the world where everyone can transition (I realize some trans people may have different preferences). I think there are a lot of things like this where seemingly awful values become fine once you have infinite technology that can eliminate the reasons they were awful in the first place (harm reduction for predation by having animals lose qualia for the last hour of their life, whenever that may be?).
3. Please don't accept a deal where you risk 5% chance of extinction to wrest control of the singularity away from me. Please let me get the singularity, present evidence that you had the option to risk 5% chance of extinction to wrest it away from me but didn't, and I'll give you 5% of the universe (or shift the universe 5% in the direction of your values, or something like that). Possibly I'm thinking about this the wrong way, but I promise I will think about it the right way once I can give myself infinite intelligence and there will still end up being some deal like this which is better than you taking the gamble.
4. Although I don't think greater intelligence will necessarily solve all value conflicts, I think the ability of whoever controls the Singularity to get such high intelligence that they can understand your perspective exactly as well as you can and see all your reasons for supporting it in just as much detail as they can see pre-singularity them's reasons for supporting their old opinion is pretty significant. I think only a sadist or stupid person wouldn't take this opportunity, and I trust entities that have taken it much more than I trust normal pre-singularity entities. I don't know how to balance this against "what if I galaxy-brain myself so hard that I can't maintain normal human reasoning and do something that pre-singularity me would have found abhorrent", but I'll think about this more if it ever comes up.