8 Comments

To be fair to Bostrom, he wasn’t prominent in 1995.

I'm not sure I can get behind this, at least not fully. Not only was Bostrom not prominent then, he probably wasn't as focused on ASI in 1995 as he was in 2015 or even 2005, and, short of a crystal ball, he had no way of knowing that he would be.

If you have no idea what you're going to do with your life, then this essay's logic suggests that you have to comport yourself blandly at all times for career safety, and then pick one career and stay in your lane at all costs. A chilling effect of that magnitude, from the youngest of ages, risks steeping entire generations in groupthink in all arenas, not just ones where we would like as many people to be on the same page as possible. It's punishing people for not knowing in advance which way the winds of change will blow.

I think there's a balance between the cruelty of The Internet Never Forgets and the necessity of unearthing (and being able to unearth) rot in our midst like Richard Hanania. You could argue that Bostrom deserved flak for writing racist screeds in 1995, but I'd suggest that unless his views clearly haven't evolved (as Hanania's clearly haven't), maybe don't take the Future of Humanity Institute down with him.

But it also would have been a good idea for them to reply to that dashing youngish postgraduate philosopher, and shame on them for not doing so.

You've hit on the difference between rationalism and EA. Rationalism is about the pursuit of truth, whereas EA is about being effective. EAs are happy to downplay and misrepresent things if they believe it furthers their mission, whereas rationalists avoid that like the plague. (If you read Yudkowsky's definition, rationalism is technically also about "winning" rather than the pursuit of truth, but there are extremely strong prohibitions on "dark arts" strategies that are truth-avoidant. In practice, most rationalists just inherently value truth.)

The risk of dishonesty in service of a noble goal is the same as the criticism generally levied against utilitarianism in other contexts: people are bad at judging the expected outcomes of their actions, and have minds hardwired to be biased in their own favor, so it becomes easy to justify actions that aren't actually positive expected-utility for humankind.

You can see this happening with AI. Some of the less intellectually honest anti-AI accounts on Twitter have taken to fearmongering about anything that smells of anti-AI-ness (e.g. "helping people design bioweapons", "threatening jobs"), which can make it harder to address the actual serious risks if those in power focus on the other things instead.

That said, the main AI risk figures have already adopted your proposed strategy. That's why Bostrom posted an apology for an email that I'm sure he knows should have been completely unobjectionable to a sane society, and why you won't see Eliezer speaking candidly about transgenderism or HBD: he knows that telling the truth on those subjects would significantly harm his ability to effect policy change.

26 years: is that a new record for something from the past being dug up and used to try to cancel someone? It definitely feels like it.