7 Comments
Apr 15, 2023 · Liked by Philosophy bear

Tedious point: the title of this is ambiguous; I initially read "race" as referring to ethnicity, despite having read the Scott piece. Perhaps amend to "arms race dynamics"?

Otherwise, agree but think you understate the case. "However, it’s far from obvious to me that this guy, a Chinese nationalist from a vastly different political and cultural milieu, doesn’t have deeply different aesthetic and ethical values to me, and these values are different enough to substantially reduce the value of the post-scarcity utopia he might create, relative to mine (from my point of view)."

Two points. One, consider what Lord Acton said: "Power tends to corrupt and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority..." Our world-dominating AI is ex hypothesi going to have power thousands of times more absolute than anyone has yet managed.

Two, cultural and philosophical differences don't seem to matter. Consider the Wannsee Conference of 1942, which decided on the Final Solution to the Jewish Problem. Attendees were exclusively from the same developed European, Protestant Christian, post-Enlightenment culture that I am from (I am British) and that the liberal US is. This did not help, and did not give us a shared bedrock of values about whose application we differ only at the margins. (Incidentally, the fact of it being a conference is perhaps the most chilling thing about it: a corporate decision made within an ostensibly rational framework, not a lone maniac, and one made in a banal context we are all familiar with, presumably with presentations and slides, coffee breaks, and paper and pencils set out for everyone.) People are not fundamentally decent guys with a meaty core shared with the rest of humanity - not even people from identical cultural backgrounds. Putin has not yet caused your violent death because you are not in Ukraine, not because he is at heart a lovely chap. And he is not an edge case or outlier whom it is unfair of me to pray in aid. He is just a bad man with power.

Much AI risk thinking is science fiction. For alignment to be on the cards, you need a set-up like 2001. There are, I think, five or six NASA guys on the ship, on a mission to save humanity or discover the secret of life or whatever, and three or four of them are asleep. You can expect them to be hugely aligned with each other, and therefore all plausible candidates for HAL to be aligned with. Step out of the spaceship and it all falls apart.


Thanks, this is a good post. A few thoughts:

1. One point I was trying to make was that post-singularity problems will be weird ones, like hedonic utilitarianism vs. something else, which won't cleave along normal political lines. When people talk about winning a race for the Singularity, they mean that they think Biden would be a better God-Emperor than Xi. But even though I like Biden better in our current situation, I don't know that he's any more qualified to make hedonic-utilitarianism-related choices. Possibly it's better if he respects democracy and we let the American people vote - that probably maintains some level of post-singularity freedom which we can use to opt out of the hedonic utilitarianism, even if that wins.

2. I hope if a conservative won the singularity and banned gender transition forever, they would at least have the decency to cure all gender dysphoria. That seems better than our current world, and neither better nor worse to me than the world where everyone can transition (I realize some trans people may have different preferences). I think there are a lot of things like this where seemingly awful values become fine once you have infinite technology that can eliminate the reasons they were awful in the first place (harm reduction for predation by having animals lose qualia for the last hour of their life, whenever that may be?).

3. Please don't accept a deal where you risk 5% chance of extinction to wrest control of the singularity away from me. Please let me get the singularity, present evidence that you had the option to risk 5% chance of extinction to wrest it away from me but didn't, and I'll give you 5% of the universe (or shift the universe 5% in the direction of your values, or something like that). Possibly I'm thinking about this the wrong way, but I promise I will think about it the right way once I can give myself infinite intelligence and there will still end up being some deal like this which is better than you taking the gamble.

4. Although I don't think greater intelligence will necessarily solve all value conflicts, I think it's pretty significant that whoever controls the Singularity can reach such high intelligence that they understand your perspective exactly as well as you do, seeing all your reasons for supporting it in just as much detail as they can see their pre-singularity self's reasons for their old opinion. I think only a sadist or a stupid person wouldn't take this opportunity, and I trust entities that have taken it much more than I trust normal pre-singularity entities. I don't know how to balance this against "what if I galaxy-brain myself so hard that I can't maintain normal human reasoning and do something that pre-singularity me would have found abhorrent", but I'll think about this more if it ever comes up.


One notable point: if you think that moral realism is true - which many smart people do, such that it seems one should place not-insignificant credence in it - then superintelligences would converge on the same values, especially given the 4th point you make.


Deep disagreement on your button-pushing intuitions on this one, which I suppose is only an illustration of your broader point that value disagreements are common, since I'd otherwise peg us as very similar axiologically and on many other dimensions.

My *basic* intuition is that I want there to be thinking, feeling beings with some amount of diversity and autonomy, and I don't want there to be astronomical suffering (except in tiny little voluntary bits that are seen as a valuable part of lives worth living), and ideally (though this is less central) human culture would "continue." And anything past that - additional size, diversity, further heights of ecstasy - is, well, I hate to say a rounding error, since it is literally astronomically important, but it is NOT WORTH GAMBLING DESTROYING THE ENTIRE UNIVERSE OVER.

(At least, any universe with most of these. Happy clam world or Bronze Age Pervert world or whatever, set them ablaze. But even trad cottagecore world, though lame, is so much better than the void.)

Maybe I’m failing at some basic von Neumann axiom type stuff and if I prefer x over y I should accept some % chance of omnicide in order to get it, but if so, fuck the von Neumann axioms! Don’t commit omnicide!
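(To spell out which axiom I'd presumably be biting the bullet on, here's a minimal sketch: it's the vNM continuity axiom, and "Utopia+", "Utopia", and "Void" are my stand-in labels for the outcomes above.)

```latex
% vNM continuity: if A \succ B \succ C, then for p close enough to 1,
%   p\,A + (1-p)\,C \;\succ\; B.
% Read A = Utopia+ (extra size/diversity/ecstasy), B = Utopia, C = Void:
\[
  \text{Utopia}^{+} \succ \text{Utopia} \succ \text{Void}
  \;\implies\;
  \exists\, p < 1 :\;
  p \cdot \text{Utopia}^{+} + (1-p) \cdot \text{Void} \;\succ\; \text{Utopia}.
\]
% That is, strictly preferring the bigger utopia forces accepting some
% nonzero chance of omnicide to pursue it. Refusing that conclusion
% means dropping, or lexically overriding, continuity.
```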

Philosophy bear (author)

So there's this position I'm very interested in that I think of as prioritarianism over the good. The basic idea, in its simplest mathematical form, is that we should be very conservative about preserving the first x units of goodness, rather than following a maximise-expected-goodness rule.

I share that intuition to some degree! I wouldn't gamble 5 quintillion lives for a 51% chance of getting 10 quintillion lives. Evidently though, I don't have that intuition as strongly as you do.
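(To put numbers on that, a minimal sketch: the kinked weighting u and the threshold x below are my illustrative choices, not a canonical statement of prioritarianism.)

```latex
% Straight expected-goodness maximisation takes the gamble:
%   0.51 \times 10 + 0.49 \times 0 = 5.1 > 5 quintillion lives.
% A prioritarian rule instead maximises E[u(G)], where u weights the
% first x units of goodness more heavily, e.g.
\[
  u(G) =
  \begin{cases}
    G, & G \le x,\\[2pt]
    x + \lambda\,(G - x), & G > x, \quad 0 < \lambda < 1.
  \end{cases}
\]
% With x = 5 quintillion and any \lambda < 0.96:
%   0.51\,u(10) = 0.51\,(5 + 5\lambda) < 5 = u(5),
% so the sure 5 quintillion beats the 51% shot at 10.
```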

You're right, this is an interesting value difference in its own right!


I stopped at the trans stuff. If US commentators actually looked outside their political bubble, they would find that the main opposition to trans self-identification is from the left and from feminist movements. This should be obvious from the use of the term TERF (trans-exclusionary radical feminist).


Utopia, schmootopia. Whoever Scott is, he has one thing right: no matter who the singleton in a utopia is, the world would be reformulated and any persons within it would have to conform to the singleton's vision. Your friend who favors utilitarian hedonism sounds very much like he believes Huxley's Brave New World to be the ideal.

The problem with any utilitarian principle of measuring pleasures against pains via technological advancement is that for every perceived pleasure there is a pain. The simple pleasure of driving a car is overwhelmed by its toxicity: personal angst in traffic, pollution in the atmosphere, and an inability to recognize that we have become so mesmerized we cannot imagine existence without a car. The wonders of the internet, especially among the youth, have damaged interpersonal social life to such an extent that the suicide rate is rising year on year, along with other societal dysfunctions of increasing depression and violence. The only utopia that can come from this technological utopia is, as you say, a total elimination of human personality diversity, and, as with any utopian concept, it is one that leads, as you satirically point out, to some type of vision by the designers of how they choose for humans to behave.

I began a new Substack column this week entitled The UnUtopian Optimist. In the first issue I try to explain exactly what I mean by the title.

https://ken9yvonne.substack.com/p/the-unutopian-optimist?utm_source=substack&utm_medium=email
