Discussion about this post

Damian Tatum

This is a good article, I broadly agree. Two points:

1. On the risks of obsolescence, you perhaps don't go far enough.

Yes, AI threatens to render broad swaths of human economic activity obsolete, and this will be bad for the people affected (which could very well include all of us, and perhaps soon!). But "fully automating the means of economic production" could lead to better or worse outcomes; it's hard to say.

The problem (or at least one additional problem) is that AI is not limited to the economic realm. It will soon--maybe much SOONER--begin to render humans emotionally, socially, and spiritually obsolete. People are already reporting that the newer OpenAI models (now with human-like vocal inflections, a sense of humor and timing, and a fully voice-enabled UI) are delightful to children, while also able to provide significant 1-on-1 tutoring. They can reproduce your face, voice, and mannerisms via video masking. They can emulate (experience?) numinous ecstasy in contemplation of the infinite. I am given to understand they are quickly coming to replace interactions with friends, and are starting to fill romantic roles.

I am worried about AI fully replacing me at my job, because I like having income. I am legitimately shaken at the idea of AI being better at BEING me--better at being a father to my daughter, a companion to my partner, a comrade to my friends--better even at being distraught about being economically replaced by the next iteration of AI. Focusing on economic considerations opens you up to the reasonable counterpoint "AI will take all the jobs and we'll LOVE it." I don't think we'll love being confronted with a legitimately perfected version of ourselves, and relegated to permanently inferior status even by the remarkably narrow criterion "better at being the particular person you've been all your life."

I see no solution to this problem, even in theory, except to give up "wanting to have any moral human worth" as a major life goal. Which seems like essentially erasing myself.

2. I note a disconcerting undertone in your essay, a sort of "never let a good crisis go to waste: maybe NOW we can have our socialist revolution, once AI shakes the proles out of their capitalist consumerist opiumism."

Maybe this is unfair, but it seemed to be a thread running through your essay. If this was a misreading, I apologize.

To the extent that it's true, to be clear: I fully stand beside you in your goal of curtailing, delaying, or fully stopping (perhaps even reversing) AI development, and I think the bigger the tent, the better, even if we disagree about exactly which AI future we're most worried about or what the best non-AI human future looks like.

But I feel obligated to at least raise the question: If we had a guarantee that AI economic gains would NOT be hoarded by a feudal few, that AI would indeed usher in a socialist paradise of economic abundance for all (or near enough), would you switch sides to the pro-acceleration camp?

For the reasons outlined in point 1 above, I think that would be extraordinarily dangerous, and I would like to understand your deepest motives on this question, since it seems to me that any step down the path of AGI will inevitably lead to dramatic and irreversible changes, most of them probably quite bad.

Daniel Kokotajlo

"Meanwhile, the people interested in AI risk have neglected the second problem. This is partly because they’ve trained themselves to think involvement in politics, especially mass politics, means losing intellectual credibility."

I am a card-carrying member of the 'people interested in AI risk' camp, and I agree with this statement. Great post. Not sure what is to be done but I agree it's important to talk about *both* loss-of-control risk and concentration-of-power risk.
