13 Comments
Jonathan Kallay

It seems to me a symptom of being infected by longtermist philosophy that you have to justify addressing these concerns as "sending the best version of ourselves into the light cone." How about plain old-fashioned liberalism?

Jonathan Kallay

Please consider applying for part of this bounty, offered on LessWrong:

https://www.lesswrong.com/posts/iuk3LQvbP7pqQw3Ki/usd500-bounty-for-engagement-on-asymmetric-ai-risk

John Quiggin

AFAICT, it is now possible to build an LLM from scratch for a few million dollars. If that's correct, then it is hard to see LLMs as the source of oligarchical power. At least in the information economy, monopoly power comes from platforms, and that power seems to be eroding. Obvious evidence is the fact that this discussion is taking place on Substack, not on Meta or X. And there are plenty of alternatives to Substack, although I have been too lazy to explore them properly.

Daniel Kokotajlo

So what? That's like saying "it is possible today to buy a rifle for only a few thousand dollars, therefore oligarchs will never again have power like they did in the Middle Ages." You can get better AIs by spending more money.

John Quiggin

Are you claiming that the arrival of gunpowder didn’t have profound social consequences? If not that, what are you saying?

Daniel Kokotajlo

? No, of course not. I'm saying you are wrong about what the social consequences of AGI will be. The fact that you can train a dumb LLM for a few million dollars is basically irrelevant in the grand scheme of things, for the reason I mentioned: you can train much better AIs for more money.

Wil

Not to speak for PrQ, but my understanding of the point is that there will be enough AGIs going around (however much they cost), and in competition with each other, that no unified system of control can be implemented.

N0st

Hmm. I definitely agree that inequality is a very important problem, now and throughout human history. I agree that broad participation in decision making across society is better than a small group of people making decisions for everyone else (because the tendency will be to make selfish decisions). I like the idea of sortition in general, and I find the historical examples of people trying it (e.g. late medieval Florence) interesting, though they also illustrate the failure modes of such a system.

I guess the part of this whole story that I find somewhat strange and hard to think about is the following. We are talking about a situation where we basically bring to life a whole new ecosystem of intelligent beings which interact with one another in complex ways. It's basically a new Cambrian explosion of intelligent life, but even more radical, because this new ecosystem of life is fundamentally different in its "biology".* In the world we are supposing, humans go from being 100% of the life above some threshold of human-like intelligence to <1% of the life above that threshold (one can debate whether other forms of life have other forms of intelligence, but that's beside my point).

My point is, in this imagined new society where we are the tiniest minority of intelligent life, a future where 1% or 0.1% of humans rules all the intelligent beings and decides what they do certainly sounds quite bad. But so does a system where 100% of humans (still <1% of intelligent life) control what all the other intelligent life does. Both also sound unsustainable in the long term, especially when we are supposing that all these other intelligent beings are equally or perhaps vastly more capable than humans at doing human-like things.

What would be the ethical argument for humans controlling what all the other intelligent life does? It seems like whatever the arguments were, they would closely resemble all the arguments of the past for elitism. We could argue: (1) we are the oldest/first, or we created the AI (we are like its parents), and thus we should get to decide. This argument kind of mirrors Confucian or filial-piety-style arguments for elitism. We could argue that (2) we own the AI, we bought and paid for it, etc. This is kind of a modern capitalist argument for elitism. We could argue (3) we are uniquely virtuous in terms of creativity and preferences and wants and desires and whatever, and thus we deserve to make the decisions. This argument mirrors Confucian arguments again, or Renaissance humanist arguments for elitism, etc. You could come up with various other arguments for why we should be in charge, but I think they all fundamentally mirror the usual arguments for elitism (and I think there are compelling reasons to reject elitism, at least in the sense of putting a small minority of people in charge without letting the majority have any stake in the decision-making process).

So in this vision of the future, first of all, it seems like it isn't really up to us anyway, and even if it were up to us, I think the ethical choice would be to reject control. So then, what should we do? I suppose try to make and interact with AI in a way that is collaborative, harmonious, and respectful. Or something. I'm not sure what follows.

Just my two cents.

All this being said, the situation I am describing above is more of a long-term-equilibrium argument. I can certainly see the need for UBI and "alignment" in the short term, and the importance of political decision making for this process.

__

* Why should we suppose that the AGIs would be (or be like) life forms? Or why, like an ecosystem?

People have certainly debated for centuries what life is, what minds are, what consciousness is. But I'm inclined to think that life involves some process of ongoing maintenance of homeostasis with an environment, and some form of self-replication, however convoluted. I think the systems we are supposing will be able to maintain homeostasis and engage in some form of (virtual and/or physical) self-replication. Why life forms, plural? Because it seems likely that reasonable boundaries can be drawn around the various entities that will exist, which differ in e.g. the underlying model (if it gets updated/fine-tuned at intervals based on interactions with the environment), the "afferent" data types they have access to, the machines they run on, or the "efferent" modalities they control. These things seem like they would form semi-stable boundaries of identity, so there would be multiple entities (though the lines would arguably be blurrier than for humans). Why like an ecosystem? Because the AIs would be largely interacting and negotiating amongst themselves, trading sensory data, effecting things for other AIs, etc., in a complex process that to me resembles an ecosystem.

blank

This sort of writing completely misunderstands the kind of power "AGI" would effectively wield.

Daniel Kokotajlo

I disagree. Care to elaborate?

blank

All those people who ask Grok whether something is true are using AI to manufacture consensus and consent, because AI has far more time to "read" all the stupid studies people create and spit out an average. And when AI lies, it is usually random rather than targeted, so people find it more trustworthy than the news.

But the ability of AI to automate white collar work is greatly exaggerated.

Daniel Kokotajlo

We aren't talking about current AI systems. We are talking about future AGI, which by definition would be able to automate white collar work.

blank

The hypothetical Star Trek AI that can do anything without error is a fantasy. The AGI that is predicted to emerge by 2027 will be much more constrained.
