I don't think there's a conspiracy among AI scientists to trick the public into thinking AI might kill everyone
If they were going to lie, saying that their work might destroy the world would be a bad choice
The scientists who signed the statement
I’ll make this part quick because you’ve probably already read about it.
A lot of scientists- a lot of very distinguished scientists- working in the area of machine learning and AI have recently come out and expressed agreement with the statement that:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
There’s an awful lot of gravitas on that list. Granted, there are plenty of very senior scientists who did not sign it. In some cases they probably haven’t gotten around to it, or agree with the point but don’t find such a display useful, or are timing their statement for another juncture. In other cases- many other cases- they probably disagree with the point, are unsure, or have no opinion. But what I can say is that there are loads of superlatively senior scientists on that list.
I don’t think this is cherry-picking. The closest we can come to an objective list of the biggest scientific players is the set of researchers who won the 2018 Turing Award for advances in machine learning. The Turing Award is sometimes compared, like the Fields Medal in mathematics, to the Nobel Prize- and while such comparisons are slippery, it is generally regarded as the most prestigious award in computer science. Two of the three people who shared that award, given “for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing”, have expressed profound fears about artificial superintelligence killing everyone: Geoffrey Hinton and Yoshua Bengio.
I don’t think these scientists are cynical
A lot of people have expressed skepticism about this statement, as one would expect given:
There are a lot of people who deny existential risk from AI, and
This statement threatens the position of AI existential risk deniers.
This is because, in order to maintain their position, they need to argue not just that existential risk is improbable but that it is almost impossible. If there’s even, say, a 5% chance of us all dying to AI- or a 1% chance, for that matter- we should take that very seriously. I think the risks are much larger than that, but for the skeptics to be right- for this not to be a topic worth worrying about- the risk has to be very low indeed.
A large number of esteemed experts saying something is very possible and worth taking seriously is, prima facie, evidence that it is a real possibility. Thus the deniers need a response to this statement or their position is dead in the water. One possible strategy is to impugn the honesty or sanity of those signing the statement.
As I see it, there are roughly three possible reasons why the signatories agree with the statement:
A) They don’t really.
B) They’re comically delusional- puffed up with their own importance and displaced guilt.
C) This is an honest opinion, possibly a correct one.
I mostly want to refute the first, but I’ll also take a few potshots at the second. So what’s the best case that they don’t actually believe this?
In order to impress everyone with the potential power of their inventions, attract funding for safety research, and engage in regulatory capture by locking less senior players out of machine learning, they have gathered together to stir up disquiet about AI.
Does anyone believe this? Yes. I originally didn’t want to include an example, because I dislike calling out individuals for positions they may later regret, but there are many people running this line: here’s sociologist and public intellectual Kieran Healy, a fellow I generally respect quite a bit, arguing it’s all a cash grab. Note that one of his tweets has 4,000+ favorites. Note also that he’s one of countless people I could have picked saying this.
I don’t buy it.
I don’t buy that a large demographic of scientists, like machine learning researchers, would be so cynical that about half of the most senior researchers in the field would lie about this.
I don’t buy that- even if they were this cynical- they’d adopt such a dangerous, potentially financially self-defeating strategy: whipping up panic about their own field. They would not only have to be cynical, they would also have to be daredevils.
In addition to the financial risk, I don’t buy that cynical actors would put their own reputations at risk like this. These are, for the most part, pretty sober people- even if you don’t buy their honesty- and the reputational risks are big.
I don’t buy this level of coordination by such cynical people.
If their aim is regulation, there are plenty of other ways to argue for it. Discrimination, national security (both Russian bots and spying), privacy, deepfakes, unemployment and other kinds of economic disruption… there’s no shortage of potential harms from AI to point to.
If their aim is publicity for their field, how much more can they really get? People are already awed by their inventions. Is any investor or consumer really going to say “I wasn’t going to invest in or purchase access to GPT-5, but now that I think it might destroy the world…”? Is this really a big market- big enough to justify a risk like this?
If their aim is to secure grant money for safety research, it doesn’t seem like much of a prize. Government and philanthropic grants for safety research seem unlikely to be a major game changer for a field already worth billions- especially weighed against the professional risks of claiming your work might kill us all.
Now, with regard to the more subtle hypothesis B:
I am not denying that social position, material interests, intellectual outlook, guilt (some of it displaced), and complex industry games and motivations we don’t fully understand play a role in their thinking. I’m not saying that this statement is de novo- that it has no history, that it is a pure vision of a possible future untainted by ideology, desire or need. I expect people in any field to have biases related to that field, shaped not only by their material interests but also by their intellectual position. To the one holding a hammer, everything looks like a nail, etc. etc.
But that’s always true of science. We all- scientist or not- live with a healthy degree of self-delusion, and an unhealthy extra portion thrown on top of that. There’s no reason to think these scientists in particular are ideological crackpots. They’re not economists, for God’s sake (sorry, couldn’t help myself).
All we are being asked to do is take the risk seriously. The messenger does not need to be a pure angel of the intellect from Platonic heaven- with no biography, bank account or psychology- for the message to be worth heeding.
Pascal’s wager and its problems are not a refutation
Let’s concede, for the sake of argument, something that most of the people who signed that statement would not concede: that the risk is as low as, say, 1%. Let me repeat: neither I nor the majority of signatories think the risk is this low.
But even if it is, pretty obviously we should still act given the stakes.
Often when I argue that even if the risk is low, we should take it seriously because the stakes are high, I’m told that this reasoning is bad, because it’s like the reasoning in Pascal’s wager.
First of all, let’s do away with this “longtermist” phrase, which used to have a precise meaning but has now become a derisive buzzword. Even if I, for some bizarre reason, only cared about people alive now, and thought there was a 1% risk of AI doom 20 years from now, it would still be worth my time to work to prevent that doom.
1/100 × 8 billion = 80 million lives lost in expectation. That’s still worth worrying about.
Heck, even if the above were true but I believed in a philosophical position I just made up called “ultra-shorttermism”, and only cared about people over 40, it would still be rational to take measures to mitigate the risk.
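For concreteness, here is that back-of-the-envelope arithmetic as a toy calculation. This is a sketch: the 1% probability and the round population figure are illustrative assumptions, not estimates I’m defending.

```python
# Back-of-the-envelope expected-value arithmetic for a low-probability,
# high-stakes risk. All numbers are illustrative assumptions.

p_doom = 0.01               # assumed 1% chance of AI-caused extinction
population = 8_000_000_000  # roughly everyone alive today

expected_deaths = p_doom * population
print(f"{expected_deaths:,.0f} lives lost in expectation")  # 80,000,000
```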
Pascal’s wager works like this: even if you think there’s only a 0.01% chance the Christian God is real, if he is real you get an infinite reward for believing and an infinite punishment for disbelieving, so you should believe. The problem is that the argument appeals to infinities, which break standard decision theory.
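To spell out the pathology (a standard way of putting it, not anything the signatories themselves rely on): once an infinite payoff enters the expected-value calculation, the probability stops mattering,

$$E[\text{believe}] = \varepsilon \cdot \infty + (1 - \varepsilon) \cdot c = \infty \quad \text{for any } \varepsilon > 0,$$

whereas a finite-stakes risk stays well-behaved: the expected loss $p \cdot L$ shrinks as $p$ shrinks, so ordinary cost-benefit reasoning still applies.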
The problem is not just that it involves a big longshot risk. You can’t use “dismissing Pascal’s wager” as a fully general argument for dismissing longshot risks; that would be crazy:
“Honey, should we get fire insurance?”
“No, far less than 1% of houses burn down in a given year.”
“But we’d be completely bankrupt if our house did burn down.”
“Lol #Pascalsmugging #Pascalswager #Copeandseethe”
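To make the insurance analogy concrete, here is a minimal sketch of the expected-cost comparison. The fire probability, house value, and premium are made-up illustrative numbers, not actuarial data.

```python
# Expected-cost comparison for the fire-insurance dialogue above.
# All figures are made-up illustrative assumptions.

p_fire = 0.003   # assumed annual chance of losing the house to fire
loss = 400_000   # assumed replacement cost of the house
premium = 1_500  # assumed annual insurance premium

uninsured = p_fire * loss  # 1,200 in expectation, but realized as 0 or 400,000
insured = premium          # a fixed, survivable 1,500

print(f"uninsured: {uninsured:,.0f} expected; insured: {insured:,}")
```

Note that with these made-up numbers insurance actually loses in expectation (1,500 > 1,200), and buying it is still the sane choice, because a small certain cost beats a small chance of ruin. Everything here is finite, so standard decision theory handles it without any Pascalian trouble.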
Summing up
A bunch of very senior scientists in an area said we should take steps to mitigate a risk in that area, so we should. It doesn’t have to be more complicated than that.