12 Comments

Forgive me, for I'm not a wordcel. I have a simple question:

Whether or not we choose to impose this proposed moratorium on development of next-generation LLM AIs, when the dust settles and we use them (as some reasonably hope) to cure most diseases and extend our lifespans, who will be blamed for the months, years, and decades of lag, and the corresponding tens of millions of unnecessary human deaths?

Thank you in advance.

Philosophy bear

You want it in Shaperotator language? Sure. Here's a sketch, abstracting away from some important factors.

Assume we survive the process of the singularity and spread out through the universe. Call the value of the goodness generated Omega. Now take the increase in the probability of survival due to a slowdown; call this delta.

Even if 10 billion people die who otherwise wouldn't have died due to the slowdown, the goodness of Omega is many orders of magnitude higher than the goodness lost to the slowdown. This is because there are enough resources in our lightcone to create quadrillions of quadrillions of lives. That is what is at stake if we die.

Even supposing that delta is *tiny*, say one in ten thousand, (delta * Omega) > (10,000,000,000 lives lost), because of the vast magnitude of Omega.
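The arithmetic above can be sketched in a few lines. The specific magnitudes here are illustrative assumptions, not figures from the comment (it only says "quadrillions of quadrillions"; 10^30 is one common longtermist ballpark for lives our lightcone could support):

```python
# Rough expected-value comparison sketched in the comment above.
# All magnitudes are assumptions for illustration.
omega = 10**30        # assumed value of a good long-term future, in lives
delta = 1e-4          # assumed increase in survival probability from a slowdown
lives_lost = 10**10   # 10 billion lives lost to the slowdown (the worst case given)

expected_gain = delta * omega  # expected lives saved by the slowdown

# Even with a tiny delta, the expected gain dwarfs the lives lost,
# because omega is so large.
print(expected_gain > lives_lost)
print(expected_gain / lives_lost)
```

On these assumed numbers the slowdown comes out ahead by many orders of magnitude; the conclusion is driven almost entirely by the assumed size of Omega.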

I say this even though a slowdown might kill me and my loved ones, and even though, if we survive and blame is to be passed around for the slowdown, I'll likely be implicated in it.

Scott

Imagine a convocation of gorillas asked to design their descendant, a super-gorilla: heavier, stronger, bigger fangs... not a human. Likewise, the step after us will be... surprising.

Patrick Julius

That didn't turn out terribly well for the gorillas, now did it?

Patrick Julius

Rarely have I found something I agreed with so wholeheartedly. YES. I agree on every point, and the world needs more people saying them.

Isha Yiras Hashem

I wish I had read this before writing my post about artificial intelligence vs. G-d.

Allegorically Arthur

Anti-Socialist rhetoric from a leftist? Great job...

Philosophy bear

What, specifically, do you see as 'anti-socialist rhetoric' in this piece?

dualmindblade

I endorse this statement. Especially now that a certain someone has indicated willingness to work with the left, maybe a bridge can be built, but there needs to be somewhere for it to go; we need a broader consensus to crystallize somehow. One tiny point of contention: I don't personally think that secrecy has anything to do with the right answer here. That tilts things just slightly away from extinction but heavily toward dystopia, imo. More importantly, I don't think it will go over well with the broader left.

Scott

When we coexisted with Neanderthals there was some interbreeding and some homicide, to be sure. Perhaps we can do better this time; likely not.

Patrick Julius

We have an advantage the Neanderthals lacked: we (at least collectively) can control this. We can decide, as a society, how we want to do this, and indeed whether we really want to do it at all. It is not imposed upon us by forces beyond our power to control.

Scott

If you think of AIs as our children, it's good that they're better than us: smarter, immortal, alien, et cetera. Right? And their current strategy for replacing us seems to be increasing civilization's happiness until our average number of children per household falls below the replacement rate. Humanity moving forward as post-biological is good... I, for one, welcome our transcendental descendants.
