The end of politics
Warning: high-level speculation about a world wholly unlike our own- genuinely post-human. Galaxy-brained (in the derogatory sense) stuff follows.
The inversion and reinversion of politics: The inverted U of political complexity and egalitarianism
Millions of years ago, your ancestors were monkeys. The monkeys likely had alpha males who exploited both the other males and the females of the band. For the most part, these alphas were stronger than the others. [Sometimes people object here and ask, ‘didn’t the concept of dominant males come from flawed studies of wolves?’ The answer is that alphas are largely a myth among wolves, but they are very real among chimps.]
Then we evolved bigger, more social brains. We evolved brains that could talk. We developed a solution to alphas. One alpha might be stronger than anyone else in the group, but he couldn’t protect himself at all times, or, more to the point, protect himself against an organised group of assassins. Typically, gentler techniques- mockery and the like- would be used first; then, in the (hopefully rare) cases where these strategies didn’t work, our would-be rulers might have ‘hunting accidents’ or ‘wander off into the woods’. We lived in small hunter-gatherer bands that were part of larger, interacting social complexes, owned little, loved our children [rarer in some senses than you might think in the historical record], and whenever someone got too big for their as-yet-non-existent boots, and we couldn’t deal with it any other way, we killed them. Perhaps they were even happy. There are still a few people living lives like this, in marginal places.
Now this is, to be fair, only a rough type. There are hunter-gatherer communities with greater and lesser degrees of power and material inequality. In the main, though, the possibility of coalition formation through language and social brains made our little bands in those early days more egalitarian than those of chimps.
Organising coalitions against would-be alphas? Call that the first moment of politics.
Then our societies became so large that organising things was costly, and changing social direction enormously so. Moreover, there were bits of land dense enough in nutritional value that it was worth trying to claim them; the claim had to be organised, and as many men as possible rallied. Mostly, this coincided with agriculture, but it also happened in places where hunter-gatherers had access to enormous bounties- e.g. salmon runs.
These difficulties of organising meant that a structure of organisation became reified. Whether social practices ever felt like a choice is debatable, but they certainly stopped feeling like one after society grew in complexity. People developed new essences: king, marquis, knight, priest…
Exactly how war-like our hunter-gatherer ancestors were is a matter of debate. There are almost no depictions of humans fighting humans in rock art- but there are some. There are some case studies of what appear to be extremely violent groups of hunter-gatherers, but there are ethnological debates. Our best response is that we don’t know, and our best guess is that it varied a lot from place to place.
From the highlands of Papua New Guinea to the endless genocides of the age of colonialism, things got ugly.
This was the second moment of politics. Politics was invented as a way to prevent the rule of alphas by leveraging coalitional violence, but as society became more complex, politics enabled the rise of a class of ““““alphas”””” so extractive that they would make a chimp blush.
There was always an idea, however limited, however partially enforced, that ruling elites should serve the interests of the people. Then, sometime around the 1700s, as the social surplus increased, and perhaps as organisation in a certain sense became easier, one saw a third political moment, a reversion to the first: the powerful should fear those they rule, and rule should be organised in the interests of the ruled. It took a few hundred years to kick in, and by the 20th century, we had mass democracy.
But the third moment was very imperfect, and although we were far richer, politics did not protect us from dominators as it had in the Palaeolithic.
Now we are on the verge of a fourth political moment. Either politics is sublated into something far more egalitarian, or far more inegalitarian, than anything we have ever seen before.
Consider three ways that superintelligence might go. On the one hand, it might be widely distributed. If superintelligence is widely distributed, and if people’s access to power remains widely distributed, then politics will change forever, because it will be possible to organise anything, with anyone, at any time. Coordination costs, at least around the big questions, will become meaningless, and institutional inertia all but meaningless.
On the other hand, superintelligence might be controlled by superintelligence itself. In this case, human politics also becomes irrelevant, for obvious reasons.
Finally, it might be controlled by a small cabal of humans, or at the minimum, a single one. Again, human politics becomes irrelevant. In its concentration of power it would be much like the great agricultural empires- nay, far more so- but the means by which the rulers ruled would not be political. It would be like a ruler ruling not through politics but through powers directly under his control (a magic wand, say), unmediated by the loyalty or choice of any other human- not even needing his human subjects for anything.
In the creation of something much more intelligent than humanity, all politics, as we have understood it, ends.
Assumptions
It’s worth spelling out some of our assumptions here in more detail, because without these assumptions, people often talk at cross purposes. The most important of these assumptions is what we might call the assumption of the potency of super-intelligence.
Consider the game of chess. In the 1920s, the brilliant grandmaster Capablanca, perhaps one of history’s greatest ‘natural talents’ at chess, thought that the game was too near to perfection- that it risked being played out into draws. Many grandmasters agreed with him, or feared he was right.
We now know that this was wholly false. Despite the game appearing close to perfection to human players, it is likely that the best possible computer player could give odds of at least a rook and still beat Magnus Carlsen. What this shows is that we humans are nowhere near the ceiling of resource usage and possibility even in chess. My sense is that, since the world has endlessly more dimensions of possibility- factors, objects, combinations- than chess does, our distance from the top in this vastly more complex game is unimaginably larger.
It’s important to be clear here that we’re not talking about a world like ours, but with a robot man called ‘Data’ running around. We’re talking about a world in some ways more alien to how we live now than any other human world that has ever existed. The people in this world might be building a Dyson swarm or preparing to render down whole planets into raw materials. Humans may be in control, in the sense of setting the goals, but humans- at the very least, humans not utterly modified- are not running things in a logistical sense.
Another assumption that I’ll make, although it’s only essential for a few of these predictions, is that once we have vastly more than human intelligence, it will be very easy to achieve technological goals like modifying the human body in whichever way we wish. Call this assumption technological achievability.
A final assumption is the assumption of broad access. This- perhaps the least plausible of the assumptions- is that independent superintelligences will be possessed by everyone or nearly everyone, under conditions that give appreciable bargaining power. Many of my predictions would still quite possibly happen if power were centralised to a tiny minority, just in a different, far less interesting way. Ain’t no real politics if only 12 people have real power- heck, ain’t no politics if all humans are dead either.
So to spell out the assumptions:
1. Superintelligence would be enormously potent and would cause the economy to grow manyfold.
2. Superintelligence would make basically any form of human modification- including uploading, etc.- possible.
3. We’re talking specifically about a world in which everyone has access to superintelligence and starts with roughly the same amount of personal power- meaning the amount of power they can exercise over others without the cooperative involvement or tacit permission of many others. Right now, elites can’t exercise enormous military power without the tacit consent of many (e.g. their soldiers)- so in a way this amounts to the assumption that things won’t change in this regard.
No more politics as we comprehend it
I’m going to go through a list of features that will be absent from a society in which the three assumptions I give hold. All of them are loosely united by the fact that they form, it seems to me, core parts of the apparatus of our politics and political economy.
End of labour: (Unmodified) Human work would soon be made valueless, except perhaps certain kinds of work of sentimental importance to other people. Bostrom has noted that this change is far deeper than is normally presented, talking of ‘deep redundancy’. It’s not just that you will no longer be necessary to make ironwork or prove new theorems; it’s that you also won’t be:
The best possible primary caretaker for your kids with respect to their future welfare.
The best shopper even on your own behalf.
Etc., etc. The only reason to do anything will be intrinsic fulfilment; instrumental action will only exist in a few corner cases, and where we decide to impose it on ourselves for fun or in pursuit of other, ethically valuable goals.
End of many classes of coordination problems: As long observed by economists and political scientists, so much of our world reflects our incapacity to simply negotiate between all involved parties (billions) on the fly. Doubtless, there will still be technical problems around certain forms of coordination, but the difficulties and time costs of coordinating will no longer require kludges like representation- or, in the case of some problems, a shrug and acceptance.
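As a back-of-the-envelope illustration (my own, not from the text above): the reason unaided humans cannot "simply negotiate between all involved parties on the fly" is that the number of two-party channels grows quadratically with the number of parties. A minimal sketch:

```python
# Back-of-the-envelope: the number of distinct two-party negotiation
# channels among n parties is n*(n-1)/2, which grows quadratically.
# This is why direct all-to-all negotiation is infeasible for unaided
# humans at scale, and why kludges like representation exist.

def pairwise_channels(n: int) -> int:
    """Number of distinct two-party channels among n parties."""
    return n * (n - 1) // 2

# A band, a town, and (roughly) all of humanity
for n in [10, 1_000, 8_000_000_000]:
    print(f"{n:>13,} parties -> {pairwise_channels(n):,} channels")
```

Ten people need only 45 channels; eight billion need on the order of 3 × 10¹⁹. Superintelligent mediation, on the essay's assumptions, is what would make that cost irrelevant.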
End of the state: Suppose we conceive of the state as a social organising function that rules through organising. Of course, you could have something like a state that rules instead through the personal power of the rulers- more on that later. But suppose everyone has access to superintelligence and there are no gross differences in power sufficient to enable rule through personal power. The imposition of a ‘state’ decision substantially different from the general will becomes impossible. Consider a deeply unpopular policy choice like the UK’s invasion of Iraq in 2003. If the entire population were 1. constantly aware of everything, 2. in constant N-way communication, and 3. capable of reorganising their behaviour to restructure the social order at a moment’s notice- how could anything unpopular ever happen? In the absence of very large differences in personal power (e.g. differences arising from controlling a bunch more weaponry than everyone else), anarchism, conceived of as direct control by the public over common affairs, is the only possible outcome.
End of “The political process”- Politics as a process, not politics as a subject matter. What I mean here might be best illustrated by philosophy. Notice how, when political philosophers attempt to talk about political subject matters, they often do so precisely by suspending the political process, and imagining in its place another process- e.g. leisurely N-party negotiations. Rawls’s timeless deliberators behind a veil of ignorance are a good example here, as are Hobbes’s rational negotiating parties. Even Plato’s idea of designing the laws of a city via a rational process from the ground up typifies this. Political subject matter needn’t mean politics as a process. Politics as a process is, I think, typified by the following:
Limitations about what is or can be on the agenda.
Limited available (subjective) time for discussion
Limited, asymmetric and unevenly distributed information
Irrational, or at best, heuristic-driven, choices
Likewise, great variation in the rationality and attentiveness of agents
Often (admittedly not always), representation
Without anything like this, I think we can say that the political process would be, in a sense, over. There would still be political subject matter- how to govern our common affairs- but nothing like a traditional political process.
End of representation: If you were a superintelligence capable of communicating in indefinitely many ways at once, why would you have anyone represent you or your interests, whether a politician or a lawyer? If you weren’t yourself superintelligent (or at least not of the highest order of superintelligence), but you had access to a loyal superintelligent fiduciary who could communicate in indefinitely many ways at once, why would you let anyone but that superintelligence represent you? Why would you allow a parliament full of people- or perhaps worse, superintelligences not directly loyal to you- to represent you? You wouldn’t. Either you’d modify yourself into a superintelligence and represent your own interests, or you’d send a superintelligent representative fully aware of your utility function to do so.
End of gender: To the extent people do stick to gender, it will actually look like many queer theorists have reimagined it- as a field of ironic play and historical reference, personal sentimentality, or as a conceptual and aesthetic sex-toy.
End of ‘classic’ reproduction: So much about reproduction would change that it’s hard to know where to begin. No compulsory womb use. No upper or lower limit on the number of human parents per child [let alone their gender(s) if any]. So many biological choices to make (and, ethically, at what point does designing a child to make it grow into a preferred type of adult become an attack on the autonomy of that child?)
End of externalities: Right now, we treat certain sorts of damage to the interests of others resulting from a person’s activities as beneath accounting, even though they are often cumulatively very significant. Frankly, this is often just a result of the accounting difficulties. It would be far too difficult to find everyone causing externalities, find every victim, and force compensation between them.
End of all existing economic organisation: It seems to me very unlikely under these conditions that humans would be living under anything resembling capitalism- there is no clear benefit to allowing some humans to acquire far greater material resources than others. Human judgment would no longer be the driving factor, so there would be no need to incentivise it. If- and it’s a big if- a market is needed to organise price signals, the AI can organise it itself: market socialism.
End of propaganda: I do not see how propaganda is possible in a world in which everyone is superintelligent or has access to a loyal superintelligence. Lying would still be possible, of course, but this is not the same thing.
End of death: Without overlapping but terminating generations, nothing in our self-conception works as it does now.
End of technological advance- This one is a little uncertain. Technological advances have been a constant companion to post-agricultural civilisation, and since the invention of writing, it has been normal for people to be able to name useful things invented in the recent past. Although we do not know this for a fact, it is reasonable to suspect that a superintelligence- one able to build far greater intelligences than itself, and so on- would soon exhaust the process of useful technological discovery. If this is false, we might proffer a more limited hypothesis: that major technological discoveries would soon be exhausted.
End of Hegelian ‘history’: You can’t have any sustainable sense of an indefinite forward movement of ideas enacted through events once there is no more technological advance, or built-up tension implicit in ‘outdated’ institutions. It might go on for a while, subjective aeons even, but it has to stop. You could have other things- an endlessly circling Markov chain of conceptual moments, say- but you can’t have a progression.
End of commonality: We’re all the same species, with roughly the same mental architecture. At some point, no more- or, if commonality does persist, it will only be through an immense imposition.
End of any necessary limits on personal power: By personal power here, I mean something special- the degree of power you can wield without anyone else’s permission. At present, we all hold, basically, zero personal power. In a post-human world, we could, for example, expand our bodies to span a distance greater than one could walk in an entire lifetime.
Now granted, the participants in a post-human commonwealth might, for example, act to prevent anyone from gaining too much personal power. In fact, they’d likely be very wise to do so. But such restrictions wouldn’t be essential to what they were; they’d be a choice, one that would- presumably- have to be negotiated like all others.
Questionable future of major differences in property rights: Recently, there has been a debate about whether property rights would be likely to persist in a post-human world. I think everyone agrees that they could end; whether it is overwhelmingly likely that they will is disputed. Suffice it to say that in any world in which all humans maintain substantial power, the continuation of existing property rights claims- especially in any way that compounds exponentially across millennia- would have to be in doubt. Why allow it?
The end of politics
What we are talking about is nothing less than the end of politics (Aufhebung if you want to be fancy) in any form we have ever really understood it. Things needn’t go like that, both because all the assumptions I outline are contestable (and some even seem less likely than not to me) and because even granted the assumptions, the outlines I give are just guesses.
I want to talk about something that seems somewhat more certain.
When immediately faced with something that kills human politics, human politics already dies a little death, at least for those people who grasp it. Superintelligence is the black hole event horizon beyond which we can scarcely imagine anything like politics. Politics, pursued rationally by people aware of incoming AI, is massively distorted, like objects around a black hole.
Let us imagine that AGI is, say, 10 years away, and that the problem of controlling AI is not insurmountable. In these circumstances, the overwhelming significance of all politics becomes how it will shape the kind of AGI that comes into being, and its controllers. The priorities become:
Ensuring popular power over government.
Ensuring popular power, specifically over AI.
Preventing the lock-in of anything that might permanently compromise these.
Ensuring the strength of humanistic and compassionate values generally. We do not want major entities controlled by sociopaths.
Ensuring researchers are shaped by these values.
On the other hand, suppose that we are 10 years away from superintelligence, but controlling it is insurmountable. Normal politics is, once again, off the table. The overwhelming priority becomes stopping superintelligence.
To sum up: if superintelligence is really, truly on the table, then politics as we have previously understood it is already dead- most of us just haven’t noticed yet.
Either way, I think the left is the correct vehicle for approaching AI. The cultural dominance of values like compassion and democracy is key. Essentially, I still support the vast, vast majority of left-wing political positions, just for reasons additional to most people’s. A civilisation that does not embody care and freedom will be far less likely to build a utopia.
We’re starting from behind, then. Best to run to catch up.


