This is clearly just the "Bootleggers and Baptists" thing: sure, there are people who sincerely believe the end is nigh, that death comes on swift wings, "… machine in the likeness of a human mind" etc., but taking the Outside View, this is just the setup for regulatory capture of the sort we've seen so many times before.
Didn't Geoffrey Hinton say there is nothing we can do to stop it? What does it mean to take a threat seriously when you can do nothing about it? I wonder about the conclusion as well. Is it not reasonable to evaluate a) the arguments or b) the proposed steps as the best course of action? What they're saying right now *isn't* a scientific prediction. It is very unlike climate change predictions, which, while uncertain, are based on scientific evidence such as models. So this is the reason people don't entirely defer to their expertise: the claims they make are outside the domain of anyone's expertise.
I believe what Hinton said is that there is nothing we can do to stop superintelligence from being created eventually (and, according to his beliefs, soon); I don't recall him saying there is nothing we can do to prevent human extinction or a very bad outcome for humanity. That's a very important distinction. Even if I'm wrong and he did say what you think he said, he is one signatory, and opinions on what can or cannot be done vary pretty widely within the group of people who signed that document. It's important to remember that.
As far as the statement being a scientific "prediction", it depends on what you consider "scientific", I guess. It's true it's not based on simulations like climate change predictions, since it's impossible to model the world state to a sufficient degree to predict the future accurately. I would say it is based on models in the sense that most signatories have arrived at the conclusion by modelling the available evidence: observations about current and past events, similar phenomena in nature, and logically consistent deductions, arriving at probabilities that give them enough confidence to sign that document and believe there is a sufficiently likely danger that needs to be addressed. Does that mean they have the same degree of confidence we currently have in climate models? No, not by far, and I believe most people on that list would openly admit that. I would, however, be wary of labelling the statement "non-scientific" simply because it doesn't exhibit the same degree of certainty.
I would certainly strongly disagree that the claims they are making are "outside the domain of anyone's expertise"; there are many domains of expertise that intersect across these claims.
"Didn't Jeffrey Hinton say there is nothing we can do to stop it?"
Thanks for your comment! I agree. The criterion for trying to alter something is not just the hazard it presents, but the product of that and the odds that an action one might take will materially alter the outcome.
The only thing that matters when it comes to Pascal's Wager is whether the claim is true, not the possible outcomes given its decision theory. We *know* that houses burn down, even if the annual chance is under 1%. We also know what causes them to burn down. We have evidence, statistics, hard data. As such, it is a wise investment to get insurance, knowing that it's true. On the other hand, if I were to argue that doing a few TikTok dances in sequence (say, a coordinated group dance or something moderately pointless) would result in the apocalypse, would you take it seriously? What if the greatest TikTok experts weighed in and signed a document expressing their worries? Would it convince you then?
If there were such a thing as a great TikTok expert **whose expertise was related to the mechanism by which the TikTok dance might destroy the world**, sure. To be honest with you, even if there weren't, just the spectacle of hundreds of experts in the sociology of TikTok coming together to make the warning would be a big thing on its own.
We also do not have "evidence, statistics, hard data" on nuclear war.
Standard decision theory doesn't demand statistical data on occurrences before planning for them because standard decision theory isn't frequentist, and a good thing too.
My point is that the TikTok experts in this scenario don't present any evidence for near-extinction-level events. As for nuclear weapons, we do have evidence: we have seen their destructive capabilities, with entire nations backing their production and research. Finally, decision theory on its own is useless, unfortunately. Doing decision tree resolutions on false or unknown premises is pointless. Case in point: if the Religious Apologist tweaked the argument to 1000 years of suffering followed by reincarnation, you wouldn't accept it. It is not infinite, so the argument doesn't work.
Look, don't get me wrong, I think there are plenty of ways to go about convincing people of the dangers of AI. Pointing to adversarial attack studies would be one. But Pascal's Wager is just an unsound argument.
Your sleight of hand here is conflating two kinds of uncertainty: aleatoric and epistemic. In the case of houses burning down, sure, you can ignore the epistemic uncertainty (because we have "evidence, statistics, hard data") but that's not generally true, and the true lesson of Pascal's wager is not the one he intended: it's that sometimes empiricism doesn't work and you have to actually reason about whether or not a claim is true.
Please be specific about what I conflated and where. Saying one's capacity to ignore uncertainty is "not generally true" does not mean anything to me in the context of burning houses. As for whether empiricism is the only thing in our toolbox, no one said it was. I feel nothing you said here was relevant to what I said. Either that, or you failed to convey your point. Sorry.
I have a hunch where you might be going with this, since you brought daddy philosophy into this. But I'll give you the benefit of the doubt that you're just being hard to parse. If you are criticizing a point, be specific: quote what I said and then refute it with a counter. Some gibberish about what the author may or may not have meant will not do. (Note: evidence is simply a proposition/fact that raises the probability of another proposition/fact being true. It has nothing to do with empiricism.)
Okay, this might be an insurmountable difference: if your definition of "evidence" includes *propositions* and not simply facts, of course the distinction between epistemic and aleatoric uncertainty I was stressing wouldn't make any sense.
Yes, it would seem insurmountable, given it's a semantic critique on your part. FYI, propositionality is a standard criterion for what constitutes evidence in most of philosophy: if it can't be stated as a proposition, it usually can't be evidence. Unless the only person you care about is yourself and you're a solipsist, in which case personal experience would also count as evidence. If not, feel free to take a jab at a definition of evidence that doesn't require propositionality. Make sure to put it in the reply so I can verify you've done it.
BTW, the source for the definition is so common it's literally the first thing in the IEP on what constitutes evidence; the page is the article on evidence, section 1a.
Well, yeah. I don't think there's a 'conspiracy' among nuclear scientists to trick the public into thinking nukes might kill everyone. I believe the term for that is 'risk assessment'. Should be part of the job, no?
Well, unfortunately the COVID fiasco showed us that even well-credentialed "experts" are capable of being wrong *en masse*.
What exactly did experts get wrong about COVID?
COVID isn't dangerous to the young.
Vaccines are neither safe nor effective.
Agreed on the first one, though even then "not dangerous" still leaves a lot of room for an unpleasant infection you'd rather avoid.
The second one is just completely wrong. It's not a matter of opinion, you're just wrong. Vaccines are safe (billions of us have taken them without any observable side-effects) and effective (which is why lockdowns are now a thing of the past).
> Vaccines are safe (billions of us have taken them without any observable side-effects)
Yes, and Facebook will cancel your account if you complain about your side effects.
> effective (which is why lockdowns are now a thing of the past).
The reason lockdowns are a thing of the past is that people got tired of them.
Also, you're an idiot.
1) Facebook's freedom of speech policies are terrible. But just because you're being silenced doesn't mean you're right.
2) The reason that lockdowns are a thing of the past is that COVID became endemic due to herd immunity, partially induced by infections but also largely by vaccines.
3) Only an idiot thinks other people are idiots just for disagreeing with them.
> Facebook's freedom of speech policies are terrible. But just because you're being silenced doesn't mean you're right.
But it does mean one needs to take the silencing into account when estimating the commonness of the thing being silenced.
> The reason that lockdowns are a thing of the past is that COVID became endemic due to herd immunity, partially induced by infections but also largely by vaccines.
The same people telling you the vaccines were safe and effective™ were also insisting that natural immunity was not a thing. Also, if this is due to vaccines, and the vaccines require continued boosting, why are boosting requirements also being abandoned, even as people are abandoning vaccines?
The vast majority of the time experts are right. Do you have any reason to expect any different here?
Why do you think you know that? Usually the consensus amongst credentialed "experts" is *assumed* to be right.
> The vast majority of the time experts are right.
Not recently since our procedure for certifying "experts" is broken.
I don't think these scientists are lying, but they are probably spending too much time at the interface, too much time with computers, feeling the impact of algorithms on their lives, combining this with advances in LLMs into a delusional or quasi-psychotic picture, drawing the wrong conclusions from the available data and making false extrapolations. The risk is not AI; the risk is humans, with their ethical biases and egos. Even now, with narrow AI systems, we know that the problem is not the systems themselves but the people who design them, and especially the people who deploy them.
The issue with Pascal's Mugging is that at some level of extremely low probability (well below 1%) the situation is already so improbable that adding extra details does not appear to make it significantly more improbable. This gives a mugger an opportunity to exploit the expected utility calculation by stating a utility loss so huge that it outweighs the improbability of the claim: the improbability increases linearly while the penalty increases exponentially.
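The dynamic described above can be sketched with made-up numbers (nothing here comes from the thread; every figure is purely illustrative): each extra detail discounts the probability by a constant factor, but the mugger can grow the claimed loss by a larger factor, so the expected loss keeps rising.

```python
# Illustrative sketch of the Pascal's Mugging dynamic (all numbers invented):
# each added detail makes the story 10x less likely, but the mugger
# multiplies the claimed utility loss by 100x, so the expected loss
# still grows 10x with every detail added.
base_probability = 1e-3   # hypothetical prior on the bare threat
detail_discount = 0.1     # probability multiplier per added detail
base_loss = 1e6           # initial claimed utility loss
loss_growth = 100.0       # claimed-loss multiplier per added detail

for n_details in range(5):
    p = base_probability * detail_discount ** n_details
    loss = base_loss * loss_growth ** n_details
    print(f"details={n_details}  p={p:.0e}  expected loss={p * loss:.0e}")
```

The point is only qualitative: a naive expected-utility maximizer is forced to take the claim more seriously the more outlandish it gets.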
This doesn't generalize to risks above 1%.
This argument hinges on the premise that there is reputational risk to fearmongering. There is no evidence for that, and in fact, plenty to the contrary.
The "if there's even a 1% chance, we should take it seriously" argument cuts both ways. It implies that when Mr Scientist says we should take the threat seriously, he might only assign a 1% chance to the risk. If I put only 1% trust in Mr Scientist's opinion, that gives me a 0.01% probability overall.
But I agree with the rest of your arguments.
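The discounting in the comment above is just a product of two probabilities; with its hypothetical 1% figures:

```python
# Hypothetical numbers from the comment above: the scientist's implied
# credence in the risk, and my trust in the scientist's judgement.
scientist_credence = 0.01
my_trust = 0.01

# If I only believe the warning insofar as I trust its source,
# my overall credence is the product of the two: 0.01%.
overall = my_trust * scientist_credence
print(f"{overall:.2%}")
```

(This treats trust as a simple multiplicative discount, which is the reading the comment suggests; a fuller Bayesian treatment would also condition on why the scientist spoke up.)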
I had... thoughts. One, evolution isn't exactly an existential threat; genius immortal descendants are good even if they aren't made out of meat. Two, the way the superintelligence(s) seem to be currently killing us is by making our lives so interesting and meaningful that global fertility has declined below replacement rate; not so bad. To those who say they don't exist yet... well, not in a box, but in the network(s)?
I was wrong; global fertility rate is still 2.3, above replacement rate; it has halved over the last 50 years, though; roughly the computer age. Fertility rate has fallen below replacement rate in dozens of countries. More here: https://ourworldindata.org/fertility-rate
"Genius immortal descendants" are bad if:
(a) they kill us all
(b) they don't share our values (which if (a) happens will automatically be true since we humans don't want to make ourselves extinct)
The process is like evolution but that doesn't mean the outcome will be like evolution; it will be more like an alien invasion.