THE FUTURE OF HUMANITY Institute IS NO MORE! An ominous-sounding sentence that could be a lot more ominous with a single deletion
The most popular story seems to be that it was shut down because Nick Bostrom wrote an extremely racist email in the mid-nineties which came to light in 2023. I have also heard rumors that part of the reason for shutting down the institute was that its denizens didn’t mind their p’s and q’s in university politics, and were arrogant. This may be unfair or one-sided; I don’t know. Personally, I think they’re shutting down because they never responded to an email in 2023 from a dashing young(ish) postgraduate philosopher proposing a collaboration on the philosophy of mind, large language models and ethical patiency. Ah well. I am almost certain the popular story tracing the closure to the email is wrong; the smart money is on broader administrative conflicts having killed the institute. However, Bostrom’s email doubtless didn’t help, and it won’t help the AI risk cause going forward.
I don’t think Nick Bostrom should have said what he said, because what he said was racist. I’m not going to relitigate that. I want to make a different point: Bostrom, like all AI risk/existential risk bigwigs, should be quiet about nearly everything outside his area of expertise. Maybe that’s taking it a bit far, but it would be better practice than the current free-for-all.
I’m a communist, broadly speaking. In the long run, I think that a post-scarcity society either has to be communist, or something clearly worse like a dictatorship or series of dictatorships. I sometimes write communist pieces. If I were a recognized leading expert on any of the following, I would shut up about communism forever, or until my period of service ended:
Nuclear power safety
Climate change
Childrearing
Vaccination
AI safety
Any politically charged, highly consequential, and sensitive area in which expertise nevertheless still carries weight.
In fact, I would shut up about everything that wasn’t within the Overton window, and I mean well within the Overton window. I recently signed a mildish open letter by some philosophers on Palestine, despite the fact that it has already been used to bar someone from a job in Germany. If I were a world expert on, say, vaccine ethics, frequently cited by governments, I might still have signed it. However, I would have thought three times, and it would be on the very outer edge of what I’d be willing to do.
There’s a game and we all understand it. There are VERY SERIOUS PERSONS. Very serious persons are allowed to (carefully) contribute to THE AGENDA and perhaps even reach the notice of SENIOR FIGURES and THE GREY EMINENCES. To some degree, it really does work like this. To some degree, it doesn’t. I don’t think that AI risk concerns will be better served if we play exactly this game by exactly these rules, but pretending the game doesn’t exist won’t help.
I am not, to be clear, endorsing the ‘avoid inessential weirdness’ line. I think that line only serves to make autistic people self-conscious. I am proposing something much more moderate: a handful of the most prominent AI risk people should avoid saying anything too controversial about politics unless it is absolutely necessary to a point they must make in their capacity as AI risk theoreticians and advocates.
There’s a selection effect working against this. The people who tend to become public figures have a flair for the outrageous. Fame isn’t easy to find, everyone makes their own path to it, and you usually don’t get there by living quietly. However, becoming famous through, among other things, writing Harry Potter fan fiction doesn’t mean you should say stuff like “I am opposed to almost all regulation except AI regulation,” or words to that effect.
Outside the directly political, similar points apply to hitching your reputation to the guy working with one of the riskiest assets (cryptocurrency) in a pretty risky area (exchanges) while he was giving off signals of psychological instability.
And yes, I am aware that my headline example is largely undermined by the fact that Nick Bostrom wasn’t an AI risk bigwig in the nineties. However, plenty of people are screwing around NOW.
A final point, and one I’ve made before. One thing I find genuinely astonishing about the handful of AI risk people who have dabbled with HBD bollocks is this: By your own lights, it doesn’t matter much. Purely human history is coming to an end very soon, so why would you bother with human-biodiversity scientific racism, even if it were true?
EDIT: I treat the fact that Nick said this in 1995 a bit flippantly in the essay. The more serious point is that it’s still a demonstration of the dangers, for a movement propelled by expertise, of going outside the Overton window.
To be fair to Bostrom, he wasn’t prominent in 1995.
I'm not sure I can get behind this, at least not fully. Not only was Bostrom not prominent then, he probably wasn't as focused on ASI in 1995 as he was in 2015 or even 2005, and, short of a crystal ball, he had no way of knowing he would be.
If you have no idea what you're going to do with your life, then this essay's logic suggests that you have to comport yourself blandly at all times for career safety, and then pick one career and stay in your lane at all costs. A chilling effect of that magnitude, from the youngest of ages, risks steeping entire generations in groupthink in all arenas, not just ones where we would like as many people to be on the same page as possible. It's punishing people for not knowing in advance which way the winds of change will blow.
I think there's a balance between the cruelty of The Internet Never Forgets and the necessity of unearthing (and being able to unearth) rot in our midst like Richard Hanania. You could argue that Bostrom deserved flak for writing racist screeds in 1995, but I'd suggest that unless his views clearly hadn't evolved (as Hanania's clearly haven't), maybe don't take the Future of Humanity Institute down with him.
But it also would have been a good idea for them to reply to that dashing youngish postgraduate philosopher, and shame on them for not doing so.