I wanted to chuck out some ideas about the future for your consideration.
No coordination problems
Consider Wikipedia’s summary of the Coase theorem:
In law and economics, the Coase theorem describes the economic efficiency of an economic allocation or outcome in the presence of externalities. The theorem states that if trade in an externality is possible and there are sufficiently low transaction costs, bargaining will lead to a Pareto efficient outcome regardless of the initial allocation of property. In practice, obstacles to bargaining or poorly defined property rights can prevent Coasean bargaining. This 'theorem' is commonly attributed to Nobel Prize laureate Ronald Coase.
Essentially what it shows is that if there are no barriers or costs to bargaining then externalities (say, pollution or smog) do not prevent economic efficiency. If your smog harms me and a group of other people, we can get together and pay you to reduce your polluting. Of course, there might still be equity problems with this arrangement, but that’s another story.
The Coase Theorem gives us a new way to look at what the government does in combatting externalities- it avoids a set of very pricey transaction costs (or the alternative, just suffering the damage of the externalities). This is applicable to almost everything the government does. Government is a mechanism for doing things that would otherwise have prohibitively high transaction costs involving many-party negotiations. Pretty much the only thing it does outside this framework is redistributing wealth and income.
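The way transaction costs can block an otherwise efficient Coasean bargain can be sketched numerically. This is a toy illustration with hypothetical numbers of my own choosing, not figures from the essay:

```python
# Toy Coase-theorem illustration. A factory gains `gain` from polluting;
# each of `n` neighbors suffers `harm_each`. With zero transaction costs,
# the neighbors can profitably pay the factory to stop whenever their total
# harm exceeds the factory's gain. Per-party bargaining costs can eat the
# surplus and block the efficient deal- which is where government steps in.

def bargain_succeeds(gain: float, harm_each: float, n: int,
                     cost_per_party: float) -> bool:
    total_harm = harm_each * n
    surplus = total_harm - gain                    # joint gain from stopping
    transaction_costs = cost_per_party * (n + 1)   # all n victims + the factory
    return surplus > transaction_costs

# Frictionless bargaining: harm (150) exceeds gain (100), so the deal happens.
print(bargain_succeeds(gain=100, harm_each=3, n=50, cost_per_party=0))   # True
# Same externality, but coordinating 51 parties at a cost of 2 each (102)
# swallows the surplus of 50: the efficient deal collapses.
print(bargain_succeeds(gain=100, harm_each=3, n=50, cost_per_party=2))   # False
```

The second call is the many-party negotiation problem in miniature: the externality is unchanged, only the cost of coordinating has grown with the number of parties.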
An interesting fact about a world of superintelligences is that it is also possibly a world in which coordination costs go toward zero. This could enable all sorts of utopias- from the anarcho-capitalist world of Coase-bargaining over every emission of smog rather than regulating it, to an anarcho-communist world with no standing government, but only continuous, shifting, and direct coordination of production, justice, security, and more without the need for mediating institutions.
A world without significant coordination costs is almost beyond our comprehension, but what little we can see or imagine of it tantalizes and shocks us.
The inversion of politics
Years ago, I wrote an essay on the inversion of politics. Small, non-sedentary hunter-gatherer communities are much more egalitarian than comparably sized chimpanzee troops. Why? It’s partly due to the invention of politics. Politics allows coalition formation, which allows for groups of weaker individuals to protect themselves from exploitation by the strong. A powerful chimp’s capacity to overwhelm any single individual in its troop has its analogy among our kind but matters less to us precisely because we are political animals, and can coordinate numbers against would-be bullies. Politics started out as the crowning, egalitarian achievement of our social intelligence.
Only, at some point in our development, politics became inverted from this early use. It was turned from an invention that allowed us to live as relatively egalitarian apes to an invention that allows individuals and small groups to exercise disproportionate power. The coordination mechanisms of politics which gave counterpower to the weak were turned into mechanisms by which leaders organize coalitions and institutions that grant them rule over millions.
Why did it happen like this? Because at small scales, the transaction costs of democratic counterpower are affordable. As societies get bigger, the costs and conceptual technology needed to run democracy become larger. Democracy is costly: pre-modern democratic states A) had such limited franchises that they were more like oligarchies, and B) were often city-states (with partial exceptions). Even modern democracies are limited in their effectiveness, precisely because of the difficulties in coordinating action and keeping everyone informed- difficulties easily exploited by those who do not want action coordinated, or for that matter everyone informed.
But superintelligence might end these transaction costs, giving all of us the capacity to understand what is going on perfectly in every way that matters and to communicate effortlessly. This, in turn, might grant us the ability to coordinate instantly to form authoritative majorities. Even if that’s too idealized a notion of the capacities granted by superintelligence, it might still take us close enough to that ideal to make a kind of liquified hyper-democracy.
One could criticize my argument as follows: “wait, didn’t people promise the internet would do this? What happened there?” There’s an important difference. Yes, the internet gave us unlimited access to information, but it did not give us the capacity to process it or rationally evaluate it in the way networked superintelligences might be able to.
Potentially, superintelligence could provide the conditions for what I call hyper-democracy: a form of government based on negotiation of major decisions between all affected parties- or at least all those willing to join the negotiations in good faith and respect ground rules like an equal weighting of everyone’s needs.
We live in a world of mediating political institutions. We can contemplate overthrowing them, but they are seen as having a solidity, a reality, that means we really would have to overthrow or utterly subvert them in order to change things fundamentally. Yet all these institutions really amount to is patterns of human behavior (including patterns of behavior by people with guns). Political institutions are really an abstraction- a real abstraction, but only an abstraction- from patterns of behavior that we are effectively ‘stuck’ in (to use Graeber’s metaphor). Sometimes we have moments in which we glimpse the absurdity of the whole- remember that saying from the sixties, “what if they held a war and nobody came?”- a dream of ending injustice by enough of us ceasing to engage in the behaviors that reproduce it.
For superintelligences, such abstractions as states might be meaningless. There are only vast, coordinating swarms, planning out their mutual activity in detail through negotiation. No strategy, just endless tactics. No laws, just an endless forum. No reification, political or economic, just activity and possibility. A state of social being that is not so much a form of government (or its absence), as the total transcendence of government.
Why not predation abolition eventually?
I’ve seen a lot of people on Twitter blistering with fury about the concept of predation abolition. For those who don’t know, predation abolition is the idea that we should end the killing of animals by other animals (and occasionally plants) for food. Presumably this includes related practices, like parasitism. My two cents on this is that everyone is getting mad because they imagine the end of predators, many of whom are magnificent animals, or their modification into herbivores- a vast sea of ecological sameness.
But predation abolition doesn’t necessarily mean predator abolition or hunting abolition. We can have predator animals without prey animals.
Imagine vastly advanced technology, able to monitor every single creature with a significant brain on the planet, able to carefully and subtly keep predators away from prey, but feed predators by strategically dropping ‘meat golems’ - drones made of flesh- to be pursued by them, looking like their normal prey, but remote controlled with no brain. No sparrow needs to feed a hawk.
I know it sounds ridiculous. However, if you take our current abilities and really extrapolate them out- if you imagine a future which is not merely our world with a chrome sheen- it could be feasible. Of course, we can’t know that we’ll ever reach this level of technology. We can’t know whether it’s probable, or even possible. Nonetheless, we can’t rule it out, and if we do reach it, we’ll need to think about predation abolition. Even if this kind of predation abolition never becomes possible, it’s still very interesting as a way of clarifying what we believe- finding it in the limit.
You’ve got to think bigger about this stuff. If there’s one underlying message to this essay it’s that humans have shied away from considering the implications of godlike computational power and other qualitative transformations of life. Instead we’ve imagined futures based on FTL travel that will probably never come while neglecting the cosmic power we’re accelerating towards. There are so few stories set in post-scarcity worlds, and even fewer stories set in worlds with genuinely superhuman intelligence- intelligence that exceeds us in every single way. Where these things do exist, the authors make excuses to explain why they are inactive, or not central to the plot. We’ll come back to this point about the limitations of science fiction later.
Once we start thinking seriously about the things we could do with perfected technology, we run into questions like “to keep or abolish predation” very quickly, and I think it’s worth thinking about, even if we’re never going to get there, as an exercise to clarify our ethical beliefs.
Another objection I see to predation abolition is that it involves “imposing our values” on animals. Animals don’t really have ethical values (with some possible rare exceptions). No one has values except humanity. Nor is there any ethical truth external to us that we must respect, no telos against which we might say “perhaps we are wrong and the animals are right”. Potentially, we will get to decide the fate of all things, and while that’s a scary prospect, there is no exterior perspective from which to say we are wrong. We dislike suffering and death, and there is no higher standpoint from which to say ‘actually, suffering and death are good’.
Another argument I see against predation abolition is that we’d fuck it up. The answer, of course, is that we’re not going to do it; rather, if it happens, it will be done via our machines of loving grace- hopefully at our command, or at the command of whatever transhuman species we replace ourselves with. We humans can fuck up something as simple as pest control- witness the continental damage done by the Cane Toad’s introduction. There’s no reason to think future superhumans will be the same.
Dystopia is back, baby. Reflections on how it could go bad
Chapo and cancelled future
Generally speaking, the left is in a kind of quantum superposition between the claims that AI is useless and that it is very dangerous. Strictly speaking, these claims might not be contradictory, but there sure is tension between them.
Leftists have a variety of different reasons for thinking AI is going nowhere. Honestly, I think many of these reasons are not bad reasons, they just happen to be wrong. For example:
AI is made by tech bros. The tech bros are massive bullshitters; they keep making and boosting stuff that goes nowhere, like the dotcom bubble, NFTs and cryptocurrency- why think this is any different?
We live in an age of technological decline and slowdown, accompanying political stagnation. Why would this reverse itself?
Chapo Trap House in particular has been running a little mini-campaign against AI hype, pointing to these considerations.
They aren’t exactly wrong about AI. I do think there are going to be a lot of flops, a lot of disappointed investors, a lot of people who get left high and dry- either because they underestimate the regulatory or technical barriers to AI, or because they confuse cool with a value proposition. This is not even to mention the workers sacked when companies form a (false) impression that they can now work without them. I think that 2 to 3 years from now, there’s a very good chance that the narrative will be that AI has ‘flopped’.
But that will be a false remission. We’ve cracked something very fundamental with deep learning. We’ve made computers that can work with truly complex and ill-defined concepts, and the progress will only keep going. There are serious discussions that need to happen around deep learning, like ‘should we be calling for restrictions on further research?’. Those discussions cannot happen if we don’t understand AI. They also can’t happen if we’ve convinced ourselves nothing will fundamentally change as a result of this technology. The world doesn’t always work out in terms of neat political and technical decline-and-fall narratives, or stories about hopeless techbros.
Should we seek to influence the development of AI, or should we try to slow it down and block it? We need to know, yet that’s the trillion dollar question we can’t work on until we realize AI isn’t just a toy for scaring journalists.
Four laborers and the rewards they receive
“Indeed the wages of the laborers who mowed your fields, which you kept back by fraud, cry out; and the cries of the reapers have reached the ears of the Lord of Sabaoth.”
One disaster scenario I worry about is one in which we abolish the need for labor, but not the institutions that require us to labor.
Consider four people, sitting around after machines have automated all of the jobs.
A- was a writer or an artist, or maybe a manual laborer, and demonstrations of their work were used to train AI.
B- was a worker before the coming of AI, and helped maintain society, and the material base that was necessary to enable AI research.
C- was a machine learning researcher.
D- owned the capital that was used to make artificial intelligences.
If we just project out existing laws and norms, whose contribution gets credited in a world of AI-surplus? Who is legally entitled to a share of the abundance created? Not A, B or C- only D. That is, from the point of view of positive law, we could build a world of abundance without the need for labour, and the only people who would be entitled to (material) credit for that would be the capital owners. A world of abundance in which most of us have no right to that abundance must not be allowed, because the abolition of labour without the abolition of capital is the abolition of humanity- or the vast majority thereof- as a political power.
Step back for a moment and take ‘the view from Mars’. What we’ve got to do here should be simple. It’s just obvious that A, B and C’s contributions to the foundations of utopia are at least as great as D’s, and a system that doesn’t recognize that is a system gone awry. We need to deal with this ideology now, though, because if we leave it in place till after the AI revolution, it might lock in its power with AI workers and guards.
They really hate their seneschals
What follows is really just a hunch. At the moment, I think a certain kind of educated person with a professional-class job is probably way too complacent about the next few years and what AI will bring. I think this complacency is partly based on the idea that there always needs to be a layer of intellectuals beneath capital owners- lawyers, doctors, psychologists, engineers, programmers, academics, and so on. There’s a perception among these types that rich people couldn’t live without them, for all sorts of reasons. My guess, though, is that while there’ll always be a place for courtiers to the wealthy, capital owners would shed 99% of the professional class in a heartbeat.
The thing is, I really think that capital owners fucking hate you types. They hate their court scholars, whom they view as unreliable, scheming and self-superior [that professionals have good reasons to feel superior to these ownership-class parasites doesn’t change that]. They hate that you have your own little ambitions to have an independent influence on history. They hate that you’d find the court scholar description I gave a little trite. I reckon these people really see themselves in aristocratic terms.
So when AI allows for “shedding” of the professional class, I think it will be taken up.
It’s interesting to look at the divergence between the views of the very wealthiest people in society and the views of professionals. Professionals care, even if in a tragicomically misguided, idealist way, about social justice. The very richest, on the other hand, appear to be more likely to vote Republican than just about any other class demographic.
Reflections on mind reading and (working) lie detection
One topic I’m always interested in is possible futures which science fiction has underprepared us for. My theory is that we don’t spend enough time thinking about technologies that are hard to depict in fiction. In some ways this follows from a general idea I have- that we have a blind spot for things and events that are hard to fit in narratives.
However I think our blind spot is particularly acute when it comes to considering future societies, technologies and their interactions. We are so reliant on science fiction for our popular image of the future. Science fiction though, especially popular science fiction, is limited in the imaginative possibilities it can explore to those that make good stories.
One example is superintelligent AI- that is, AI much smarter than humans in every respect. For a variety of reasons it’s hard to have a genuinely superintelligent AI in science fiction. For one thing, it’s hard to write a character vastly smarter than yourself. Even if you can write it, it creates narrative issues by making many types of problems too easy to solve.
Superintelligence makes a poor story if not handled with great skill, and thus it’s not as present as one might like in speculative fiction. Instead we see a lot of computers that are smarter than humans in some regard but lack “creativity” or “initiative” or “flexibility” or “empathy”. Such artificial intelligences merely act as intellectual auxiliaries to the human characters. Sometimes we see books in which there is superintelligence, but for reasons of its own, it avoids becoming too involved in the action.
Of course, there are many exceptions- for example, the Culture novels by Iain M. Banks- but active and genuine superintelligence is surprisingly rare in science fiction, especially very popular science fiction. All too often technology is portrayed as just doing what existing technology can do, but more so. Flying cars, spaceships that can move faster than the speed of light- all this is just us, but more so. We ignore the possibilities of qualitative transformation.
Superintelligence certainly isn’t the only technology that science fiction has underprepared us for. Another is genuine and widely available lie detectors. I think we’re really sleeping on deep forms of mindreading, which some think could become possible in the next few decades. If AI progress drastically slows down, that’s my pick [and this is just a hunch] for the first technology that will utterly break and remake society.
Despite its plausibility, it’s relatively rare to see working lie detectors in fiction. Where they exist, they are typically either rare or subject to easy workarounds that would be unlikely to work in real life, like saying something technically true but deeply misleading (this seems to work on Bene Gesserit Truthsayers, for example). Why are working and widely available lie detectors rare in fiction? Because they take the tension out of the plot!
Non-invasive mind reading- which faces far fewer regulatory barriers than projects like Neuralink- has already produced impressive results in reconstructing what is before the mind’s eye and ear. I’m not qualified to make predictions in this area, but there has been some surprising progress. We’ve developed some capabilities to reconstruct images, thoughts, words and the like, although there is debate over the interpretation of these results and how impressive they are. There’s not (yet) any strong evidence of the capacity to reliably tell truth from lies- I don’t think anyone knows exactly how much more difficult it is to tell that someone is lying than to tell that they are thinking of a rabbit- so we’ll have to wait and see where the technology goes. There is also the possibility of “triangulating” based on a number of different measures- e.g. a machine learning model that takes both fMRI input and behavioral input.
The fairest assessment is that we don’t know when working neural lie detection will be possible (I do think, in the very long run, it’s surely a matter of when, not if, though of course I can’t know this). We do know that there are people scrambling to make it possible though. As thought-input devices trickle from theory to reality, we gain an incentive to become better at guessing what people are thinking at scale. There would certainly be vast profit to be made in the creation of a working lie detector.
Now I don’t know if working lie detectors would be a net social benefit. No one does. I’d hazard few people would even have a firm opinion, and I’d hazard those few are probably fools. Despite this uncertainty, right now people are working on lie detectors. In no sense is this a democratic matter either- it just happens. Isn’t that kind of weird? So far an assumption that technological development must almost always be permissible has served us well, but I wonder if it hasn’t outlived its usefulness.
Here are some things working lie detectors would change:
>An absolutely central feature of political life these days is the assumption that the person you’re arguing with is lying to you. Political discussion with enforced sincerity, and both parties knowing the other party is telling the truth, could lead to a positive revolution in consciousness. Alternatively, it could make polarization much worse as people realize the person they’re talking to actually believes that shit and isn’t just trolling or posturing.
> Deeply sincere people might have their power increased. We can imagine something like “the truth telling hour with X” where an interesting person sits, hooked up to a truth machine, and talks about their views and their experiences.
>Depending on the way the technology works, some people might be disabused of illusions about their own beliefs- made to confront what they really think. They may find this very hard to take.
> Economic transaction and enforcement costs would plummet. Due diligence and contract negotiations could become much easier.
>Employer/employee relations might shift, sadly, to favor employers. One can imagine questions like “did you work your whole shift” being used as sticks to extract maximum labor. I suppose it is possible that honest negotiations between employers and employees might help both sides, but if the labor market remains structured as it is, I suspect only one side would be hooked up to the truth machine during the hiring process.
>Of course, all this could be mediated by regulations restricting when and where these machines could be used. These could vary greatly between countries, or be relatively homogenous, or anything in between.
> When people describe horrible things that have happened to them, their listeners tell all sorts of stories to try to blunt the effects of hearing them: “she’s lying”, “he’s exaggerating”, “they’re being selective to make a point”. If people had to face up to others’ experiences as they really were- without stories such as these to protect themselves- that might lead to a revolution in empathy. Alternatively, it might make people callous their hearts and “toughen up” to the pain of others.
>There would be few false convictions and a much higher conviction rate for criminals (even if criminals have a Fifth Amendment right to refuse the truth machine, their witnesses’ testimony will be more readily believed). The flow-on effects for how people view the criminal justice system are hard to know. On one hand, people might feel safer and more certain that wrongdoers would be punished. This could lead to a more merciful justice system. On the other hand, with uncertainty banished from the criminal justice process, the vengeful might feel that one of the few important reasons to “spare the rod” was gone, and they could indulge their vengeful desires without concern that the convicted might be innocent.
>It is also possible that as people told their stories, and explained how they became the people they were, and as we started to realize how common skeletons in closets are, we might be driven towards greater mercy.
>Public scandals and controversies would have a wholly different character. Graveyards full of skeletons would come tumbling out of a lot of closets. We might become more furious than ever at celebrities, more jaded than ever, or both at once.
>These technologies would allow greater surveillance, yes, but also greater sousveillance- the monitoring of the rulers by the ruled. Questioning politicians, hooked up to truth-telling devices about their innermost beliefs and their sincerity during election campaigns might become commonplace.
>There would be a significant spate of relationship breakups as people are forced to admit they had affairs.
>Abusive partners and parents might utilize the technology to control their spouses or children.
>My great hope is that there would be a new kind of intimacy. The possibility of connecting directly with someone’s thoughts would bring people together in a way they never have been before. The possibility of dishonesty is like a membrane that has stopped us from ever truly touching minds- however thin that membrane becomes, it is still there, even if unconsciously. Directly sharing innermost thoughts without the possibility of deceit could connect us like never before.
My guess about the timeline of the next few years with AI
These are some pretty bold and specific predictions, so don’t hold it too hard against me if they don’t happen.
For the next 6 to 12 months, people will be very impressed by AI.
Then the narrative flips round to “Actually, while AI has done some interesting stuff, we were naïve to be so impressed by it, fundamentally it doesn’t change all that much.”
Then sometime in the next 2.5 to 5 years it flips back round to “Oh shit, no it really is very good actually, this is kind of scary”.
The first phase represents people’s responses as they see new capabilities (e.g. through ChatGPT and DALL-E 2).
The second phase is the period of time after people have gotten used to what it can do, but before widespread economic deployment.
The third phase represents deployment, once it starts affecting us through the job market (possibly through job losses, but also possibly through a shifting menu of available jobs).
RightSideofHistory.Org
I have another proposal for a website- I come up with them periodically. A website called Right Side of History. Consider some issue where we know which way the wind is blowing, like global warming. The premise of Right Side of History would be keeping a record of prominent public figures who engage in global warming denialism, and the forms their denialism takes, along with the timing of any recantations or retractions they make. The purpose of this would be to make sure that public figures know “the eyes of history” are on them with regards to this issue, (hopefully) making some reconsider their denialism. However, it would also keep the future informed about the colossal mistakes these people made, and the ways in which they contributed to the very same predicaments they will doubtless eagerly claim to have solutions for in the future.
Of course such tracking already happens informally, but this would be a detailed, crowd-sourced wiki. In order to encourage “redemption arcs”, it would also record when previous denialists flipped.
But the website could also be used for other issues: wars, the trans panic and so on.
Vastly reduced land use due to new methods of food production.
Agricultural land use could drop a lot. If everyone ate vegan, land use would fall by 75%. I’m not sure how much land use would fall if everyone ate lab-grown meat, but I’m guessing it’s a lot.
You’ve probably heard of lab-grown meat, but there are even more exotic alternatives possible- for example, making food using air, electricity, and bacteria. A fellow called Dorian Leger is quoted here as estimating that such a method could produce the same amount of food as soy farming at one-tenth the land use. Almost everything needed for food is present in the air; perhaps in the future, we will even be able to assemble proteins, sugars, and fats directly.
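The figures above can be combined into a rough back-of-envelope calculation. Note that the ~4 billion hectare estimate of current global agricultural land is my own ballpark assumption for illustration, not a figure from the text:

```python
# Back-of-envelope land-use arithmetic using the reductions cited in the text.
# AG_LAND_HA is an assumed ballpark for current global agricultural land.
AG_LAND_HA = 4e9

# The text cites a ~75% drop in agricultural land use under a universal vegan diet.
vegan_land = AG_LAND_HA * (1 - 0.75)
print(f"Universal vegan diet: roughly {vegan_land / 1e9:.1f} billion ha remain")

# Leger-style microbial food (air + electricity + bacteria) is estimated to match
# soy farming's output at one-tenth the land, i.e. a further 10x shrink for
# whatever slice of cropland it replaces.
microbial_vs_soy = 0.1
print(f"Microbial protein needs {microbial_vs_soy:.0%} of equivalent soy land")
```

The point of the arithmetic is just that the two reductions compound: each is large on its own, and applying both to the same slice of land multiplies the savings.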
A world in which land use plummeted is fascinating to think about. Presumably, it would be an ecological boon. However, the cascading effects on rural communities could wipe out many such communities entirely- though it’s possible that the rise of telecommuting might counteract that.
Interesting thoughts!
On the lie detector part, my view on this (as a person loosely qualified to have opinions on this lol) is that I think we will have transformative changes to society long before we have technology like what is imagined about lie detectors. Specifically, I think there are assumptions about how the mind works which make it seem like this is a more attainable invention than it would be. For example:
- The idea that the mind contains a single running monologue that matches the output of a person's speech, and to find out if they are "really lying" all you need to do is check if their inner mental monologue matches their speech output.
- The idea that statements that a person would agree to are all stored in a person's mind somewhere.
- The idea that the mind contains a list of facts about the world with assigned strengths of beliefs a la "Tractatus" Wittgenstein.
I think the vision of a lie detector is that you could extract these kinds of linguistic information from a person's mind. But I don't think the mind is structured in the sort of linguistic way that would allow this. I think the mind is structured more as a loose mess of associations (A brings up B brings up C), and the linguistic output we observe is a sort of chaotic Rube Goldberg type construction on top of this mess of associations which makes it look like the mind makes way more sense than it does. Similar to "Philosophical Investigations" era Wittgenstein, I think there are many "beliefs" that people would agree to if prompted by someone that are not actually stored anywhere in the mind. They are "just in time compiled" let's say, when someone asks you the question or solicits the behavior.
I think something approximating the imagined lie detector technology might someday exist, but I think our society is likely to look fundamentally completely different by that point.
__
As to the part about the role of scholar-official intellectual elite types in our short-term future society, I think that's a pretty interesting question and close to my heart lol. I think there is probably some sort of symbiotic relationship where the intellectuals, whether arguing for or against the System as it is, justify it. I've been reading some books about Asian history lately, and it brings to mind this thought I had while reading about the importing of Chinese ideologies into Nara/Heian era Japan, how they kind of imported all these ideologies at once--both the ones that prop up the system (Confucianism) and the ones that claim to surpass and transcend the system (Buddhism, Taoism), which really served to prop up the system at a different level. The idea that came to my mind was that this combination of ideologies and counter-ideologies was probably even more potent in perpetuating the hierarchy/social order than a single ideology would be. But anyways, maybe the elite will keep us around to justify their existence even if we do it just by arguing against the system lol.
__
I guess the other general strand of wondering how the future would be in the long term is the possibility that our AI descendants just kind of live their best lives and treat us the way we treat apes or other animals--either ignoring them, or putting them in like an enriched environment that meets their psychological needs while being totally artificial.
Thanks for writing!
I'm not sure that the following is likely to be true.
">There would be few false convictions and a much higher conviction rate for criminals (even if criminals have a Fifth Amendment right to refuse the truth machine, their witnesses’ testimony will be more readily believed). The flow-on effects for how people view the criminal justice system are hard to know. On one hand, people might feel safer and more certain that wrongdoers would be punished. This could lead to a more merciful justice system. On the other hand, with uncertainty banished from the criminal justice process, the vengeful might feel that one of the few important reasons to “spare the rod” was gone, and they could indulge their vengeful desires without concern that the convicted might be innocent."
That testimonies are more likely to be believed is certainly true. But it doesn't then follow that the convictions gravitate towards their 'correct' rate, simply because testimonies may not themselves be correct. As another commenter suggests, highly confident delusions would pass a lie-detector test, which is on the more extreme end of confident falsehoods. On the less extreme end is eyewitness testimony. We know that eyewitness testimony is riddled with non-trivial errors and falsities. A working lie-detector might eliminate a propensity to mislead or deceive on the stand, but it wouldn't eliminate confident beliefs in untruths.
As a result, juries might be more likely to agree with eyewitness testimonies even when they are wrong, which could push the false conviction rate upwards.