34 Comments

This is clearly just the "Bootleggers and Baptists" thing: sure, there are people who sincerely believe the end is nigh, that death comes on swift wings, "… machine in the likeness of a human mind" etc., but taking the Outside View, this is just the setup for regulatory capture of the sort we've seen so many times before.

Didn't Geoffrey Hinton say there is nothing we can do to stop it? What does it mean to take a threat seriously when you can do nothing about it? I wonder about the conclusion as well. Is it not reasonable to evaluate (a) the arguments or (b) the proposed steps as the best course of action? What they're saying right now *isn't* a scientific prediction. It is very unlike climate change predictions, which, while uncertain, are based on scientific evidence such as models. So this is the reason people don't entirely cede to their expertise: the claims they make are outside the domain of anyone's expertise.

The only thing that matters when it comes to Pascal's Wager is whether the claim is true, not the possible outcomes given its decision theory. We *know* that houses burn down, even if the chance is less than 1%. We also know what causes them to burn down. We have evidence, statistics, hard data. As such, it is a wise investment to get insurance, knowing the risk is real. On the other hand, if I were to argue that doing a few TikTok dances in sequence (say, a coordinated group dance or something moderately pointless) would result in the apocalypse, would you take it seriously? What if the greatest TikTok experts weighed in and signed a document expressing their worries? Would it convince you then?

Well, yeah. I don't think there's a 'conspiracy' among nuclear scientists to trick the public into thinking nukes might kill everyone. I believe the term for that is 'risk assessment'. Should be part of the job, no?

Well, unfortunately the COVID fiasco showed us that even well-credentialed "experts" are capable of being wrong *en masse*.

I don't think these scientists are lying, but they are probably spending too much time at the interface, too much time with computers, feeling the impact of algorithms on their lives, combining this with advances in LLMs into a delusional or quasi-psychotic picture, drawing the wrong conclusions from the available data and making false extrapolations. The risk is not AI; the risk is humans and their ethical biases and egos. Even now, with narrow AI systems, we know that the problem is not the systems themselves but the people who design them, and especially the people who deploy them.

The issue with Pascal's Mugging is that at some level of extremely low probability (well below 1%), the situation is already so improbable that adding extra details does not appear to make it significantly more improbable. This gives a mugger an opportunity to exploit the expected-utility calculation by stating a utility loss so huge that it outweighs the improbability of the claim: the improbability increases only linearly while the stated penalty increases exponentially.

This doesn't generalize to the risks beyond 1%.
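
A toy sketch of the exploit described above (my own framing, not from the original comment): assume each extra detail the mugger adds divides the claim's probability by a constant factor, while the mugger multiplies the stated penalty by a larger factor; the expected loss then grows without bound even as the claim becomes absurd. The factors and starting values are illustrative assumptions.

```python
# Toy Pascal's Mugging model. Each extra detail shrinks the probability
# geometrically, but the mugger inflates the stated penalty faster, so
# the expected loss p * penalty keeps growing.
def expected_loss(base_p, base_penalty, details, p_factor=10, penalty_factor=100):
    p = base_p / (p_factor ** details)                    # probability shrinks
    penalty = base_penalty * (penalty_factor ** details)  # stated harm grows faster
    return p * penalty

for d in range(4):
    print(d, expected_loss(0.001, 1000, d))
```

Because the penalty factor exceeds the probability factor, each added detail multiplies the expected loss by 10 in this sketch, which is the asymmetry the commenter points to.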

This argument hinges on the premise that there is reputational risk to fearmongering. There is no evidence for that, and in fact, plenty to the contrary.

The "if there's even a 1% chance, we should take it seriously" argument cuts both ways. It implies that when Mr Scientist says we should take the threat seriously, he might only assign a 1% chance to the risk. If I put only a 1% trust in Mr Scientist's opinion, that gives me a 0.01% probability overall.

But I agree with the rest of your arguments.
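
The compounding above can be made explicit (my framing: the claimed risk and the trust in the claimant are treated as independent multiplicative discounts, which is an assumption of this sketch, not an argument from the comment):

```python
# Compounding-discount arithmetic: a 1% claimed risk, discounted by
# 1% trust in the claimant, treated as independent factors.
claimed_risk = 0.01   # the scientist's own 1% probability
trust = 0.01          # reader's 1% trust in that opinion
overall = claimed_risk * trust
print(f"{overall:.4%}")
```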

I had... thoughts. One, evolution isn't exactly an existential threat; genius immortal descendants are good even if they aren't made out of meat. Two, the way the superintelligence(s) seem to be currently killing us is by making our lives so interesting and meaningful that global fertility has declined below replacement rate; not so bad. To those who say they don't exist yet... well, not in a box, but in the network(s)?
