Forgive me, for I'm not a wordcel. I have a simple question:
Whether or not we choose to impose this proposed moratorium on the development of next-generation LLM AIs, when the dust settles and we use them (as some reasonably hope) to cure most diseases and extend our lifespans, who will be blamed for the months, years, and decades of lag and the corresponding tens of millions of unnecessary human deaths?
Thank you in advance.
You want it in Shaperotator language? Sure. Here's a sketch, abstracting away from some important factors.
Assume we survive the singularity and spread out through the universe. Call the value of the goodness generated Omega. Now take the increase in the probability of survival due to a slowdown; call this delta.
Even if 10 billion people die who otherwise wouldn't have died due to the slowdown, Omega is many orders of magnitude greater than the goodness lost to the slowdown. This is because our future lightcone contains enough resources to create quadrillions of quadrillions of lives. That is what is at stake if we die.
Even supposing that delta is *tiny* (say one in ten thousand), (delta × Omega) > (10,000,000,000 lives lost), because of the vast magnitude of Omega.
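To make the arithmetic concrete, here is a minimal sketch plugging in the illustrative numbers above. Everything in it is an assumption taken from the comment, not a measurement: delta is the supposed one-in-ten-thousand survival bump, the 10 billion deaths are the hypothesized cost of slowing down, and "quadrillions of quadrillions" of future lives is read as 10^15 × 10^15 = 10^30.

```python
# Back-of-the-envelope check of the delta * Omega argument, using the
# commenter's illustrative numbers (all assumptions, not measurements).
omega = 1e30        # "quadrillions of quadrillions" of future lives: 1e15 * 1e15
delta = 1e-4        # supposed increase in survival probability from a slowdown
lives_lost = 1e10   # 10 billion deaths attributed to the slowdown

expected_gain = delta * omega  # expected future lives preserved by slowing down

print(expected_gain > lives_lost)   # True
print(expected_gain / lives_lost)   # 1e+16: sixteen orders of magnitude of slack
```

Under these assumed numbers, delta could shrink by another ten orders of magnitude and the inequality would still hold, which is the whole force of the argument.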
I say this even though a slowdown might kill me and my loved ones, and even though, if we survive and blame is to be passed around for a slowdown, I'll likely be implicated in it.
Imagine a convocation of gorillas asked to design their descendant, a super-gorilla: heavier, stronger, bigger fangs... not a human. Likewise, the step after us is... surprising.
That didn't turn out terribly well for the gorillas, now did it?
Rarely have I ever found something I agreed with so wholeheartedly. YES. I agree on every point, and the world needs more people saying them.
Wish I had read this before writing my post about artificial intelligence vs G-d.
Anti-Socialist rhetoric from a leftist? Great job...
What, specifically, do you see as 'anti-socialist rhetoric' in this piece?
I endorse this statement, and especially now that a certain someone has indicated willingness to work with the left, maybe a bridge can be built. But there needs to be somewhere for it to go; we need a broader consensus to crystallize somehow. One tiny point of contention: I don't personally think that secrecy has anything to do with the right answer here. That tilts things only slightly away from extinction but heavily toward dystopia, IMO. More importantly, I don't think it will go over well with the broader left.
When we coexisted with Neanderthals there was some interbreeding and some homicides, to be sure; perhaps we can do better this time; likely not.
We have an advantage the Neanderthals lacked: We (at least collectively) can control this. We can decide, as a society, how we want to do this---indeed, if we really want to do it at all. It is not imposed upon us by forces beyond our power to control.
If you think of AI as our children, it's good that they're better than us (smarter, immortal, alien, et cetera), right? And their current strategy for replacing us seems to be increasing civilization's happiness so that our average number of children per household falls below the replacement rate. Humanity moving forward as post-biological is good... I, for one, welcome our transcendental descendants.