6 Comments

Of the three risks (existential, bias, and domination), I believe 3 deserves not just more of our attention, but most of it. 1 is just too hard; we'll most likely have to get extremely lucky and end up in a universe where it just isn't that hard to make an aligned superintelligence. The next most likely thing that saves us is coordinating well enough to buy a whole lot of time, followed by incredible theoretical advances in a short amount of time. Neither of these is probably happening, but I believe there's a non-negligible chance of getting lucky. 2 could be bad, but the magnitude of harm is dwarfed by the other two. 3, as pointed out in the essay, includes the possibility of near-infinite negative utility, horrors not just unprecedented but beyond understanding. And while not easy, it seems much easier to attack than alignment. Considering it's conditional on 1 being a non-issue, it might be as easy as making sure the most powerful AIs we create are under some reasonable form of democratic control, or are independent of human control and non-awful.


yo philosophy bear, spd here. Love your content and your way of writing; I've been subbed to you for quite a while. I recently started a Substack, so I'd love your suggestions on how to start as a beginner and how to grow my newsletter.


Minor correction: it's spelled Peter Thiel.
