I find it deeply disappointing that our billionaires would all, it seems, prefer Blade Runner to Star Trek. Where are the billionaires cheerleading for a world of total material abundance, in which perhaps they retain certain limited privileges (like the Picard family keeping exclusive use of their estate), but _everyone_ can summon whatever they need in material terms out of the ether, no one suffers in poverty, and no one has to work for an abusive boss because it's better than having their children starve?
Isn't the material abundance part of this just Progress Studies, which plenty of billionaires support?
Yeah, fair enough. And Patrick Collison seems like a pretty good guy, from what I've read and heard from friends who've had direct dealings with him. There's also the Abundance Network (of which I'm a member), which is oriented toward developing a non-partisan policy agenda and political coalition that addresses supply-side issues. (Now if only we could recruit a billionaire or two to the cause.)
But it just doesn't seem like there's any major public figure who's been willing to act as the public face of that kind of positive, humanistic futurism. Richard Branson has maybe made a few gestures at it. Andrew Yang's 2020 campaign might be the closest to what I'm thinking of, although I believe he's a multi-centimillionaire, not a billionaire.
Progress Studies is a dead end. Almost all of the writers I see in that area are libertarian or center-right. As such, they see the shareholder primacy (SP) form of capitalism we have now as the only kind of capitalism. The Star Trek vision was created when we lived under stakeholder capitalism (SC).
You cannot get to Star Trek-style widespread material abundance using the SP form of capitalism. Moving away from SP is not something libertarians or center-right people have any interest in, or at least I have never seen any enthusiasm for it on their part.
https://mikealexander.substack.com/p/shareholder-primacy-culture-and-american
https://mikealexander.substack.com/p/why-progress-seems-stalled
There are some rumblings that Mark Cuban might mount a presidential campaign in '28, and part of his pitch would probably be around raising economic growth.
Given that he's mounted a direct challenge in the private sector to the Pharmacy Benefit Managers (which are purely extractive middlemen who are providing no value to the public, at this point), he has some credibility on this point with abundance nerds.
Re: PBMs I recommend the Organized Money podcast's two-parter about them.
https://www.organizedmoney.fm/p/episode-3-the-revolt-of-the-pharmacists
https://www.organizedmoney.fm/p/episode-4-the-revolt-of-the-pharmacists
I have complicated feelings about David Dayen and Matt Stoller. They're clearly right about a lot of things, but I think they over-emphasize bad _agents_, or visible villains, relative to bad _systems_. I had a bit of an argument in the comments with Stoller about the role of monopolization in housing prices. It's just not credible that private equity monopolists are responsible for the housing crisis, because the roll-ups in construction materials, and the increases in purchase ratios by companies like Invitation Homes in select suburbs, are relatively recent phenomena, and they're very localized. But boomer status-quo warriors protecting Euclidean zoning rules don't make a good villain.
I sort of think if you could make Stoller and Dayen sit down in a room with Ezra Klein and Derek Thompson (or Tyler Cowen, Matt Yglesias, Noah Smith -- lots of good abundance-friendly thinkers), and just argue things out until they came to consensus, you'd have an extremely smart policy agenda.
"Better to reign in hell than to serve in heaven" sums up the revealed preferences of our elite class.
"Material abundance" is already here, but there will still always be scarcity because it's a subjective thing, not objective.
The people currently Winning under the current world settings are very invested in continuing to Win even when presented with a scenario where everyone could be Winners (indeed, *that's explicitly what they publicly pitch*), and they are very attached to their identities as Very Special Winners Deserving of Unconditional Worship, so eternal inegalitarianism it will be.
Yes, I was going to say the folks in this comment section should read Winners Take All: The Elite Charade of Changing the World. It does a great job proving its title and subtitle. Mainly: the wealthy propose every poverty-alleviating strategy as a win-win. Because of this, those strategies end up being much more wins for the wealthy than for the poor, as the wealthy exist specifically because they extract labor, land, and wealth from the poor. Even those who truly wish to improve things, like the folks who created the B-Corp designation, realized that they really didn't change much at all.
The whole "expanding the pie" or "a rising tide lifts all boats" argument doesn't really work because if that's your only focus, you're not considering how power imbalances allow the more powerful (wealthy) to continue to extract more and more wealth. There's no reason for them to give any piece of an expanded pie to anyone else unless forced to.
And their current project is to make sure they have all the mechanisms of forcing.
Curtis Yarvin calls this a "more humane genocide".
So, "market socialism"? Assuming most socialists don't believe that to be too heretical.
there are dozens, etc
In the future there will be no difference between the dividend and the dole. I wrote that in about 1997 about this very topic; back then it was the general talk about the techno-singularity. If you get rid of labour, then you don't need managers, you don't need directors, you don't even need owners. (Who buys stuff, I have no idea.) I'd search Google, but it is useless now, and AI will just placate me. Oh well.
Socialism or technofeudalism where loyalty substitutes for the rule of law, are these really the only options?
This feudalism is mostly being brought in by grifters, people who use other people's money to sell/steal other people's ideas/work while boosting their brand name. Murdoch does this via preference non-voting shares, Trump makes casinos go broke for somebody, Musk makes a deal with PayPal saying he can say he founded PayPal. (Sure, they said, just go away.) And Thiel thinks stock markets are a communist plot, so see, the dividend and the welfare check are the same thing.
Of course there is another option. We could always go back to SC capitalism. Doing so will slow AI development a great deal and give us more time to figure this out.
https://mikealexander.substack.com/p/why-neoliberalism-should-be-replaced/comments
you may find my recent post of interest
https://whyweshould.loofs-samorzewski.com/missing-institutions/
Mary Douglas was an anthropologist, Catholic and Hierarchist (her term) by inclination, and married to an old-school economist who quit the Conservative government thingy he was in once Thatcherism took over.
This tidbit is from Perri 6 and Paul Richards. 2017. Mary Douglas: Understanding Social Thought and Conflict. New York, NY: Berghahn Books.
I used to speculate on this when I was a kid, 40-50 years ago, when it was an abstract, far-distant-future sci-fi thing. I imagined a world where Asimovian robots basically handled everything, including governance (with a human council that provided input). People had all their basic needs taken care of, but an economy remained in which humans exchanged status goods such as arts and handicrafts for "credits". People received a certain amount of credits every month from the robot government and paid progressive "taxes" on credit earnings in order to prevent erosion in the "value" of the credits. Mostly I focused on how cool it would be if one could access any music and any film or TV show ever produced on their own home viewscreen. I never thought this world would begin to arrive in my own lifetime.
Now that I am older, I realize that that vision was naive. We are not going to go from a human-controlled world to an AI-dominated one overnight. It will be gradual. The capitalists who finance these AIs may build military AI units that will defend against any effort to take their property from them. So they won't go away. As AIs gradually make humans obsolete, fertility will plummet and there will be a vast reduction in the human population, while the capitalists become a nobility even as the market economy that originally granted noble status to their ancestors ceases to exist. Eventually humanity will fall to a handful of noble families and their household retainers, who are kept on so that there will exist a lower caste against which the nobles can be elite.
This seems like a rather pointless thought experiment. The advent of an AI so advanced that it makes any previous means of production owned by humans obsolete would imply that it has also made owning the capital necessary to bootstrap and run a superintelligence obsolete as well.
In that case, if the AI is not aligned, then human opinions about how resources/property/capital should be distributed will mean as much as a fart in a hurricane.
If the AI is aligned, or capable of being steered, then capital isn't obsolete: the capital necessary to bootstrap and run a superintelligence is simply the only kind of capital that matters in that world, and whoever controls it will make the decision for everyone else. This seems to be what Sam Altman is hoping.
Your dichotomy here hinges on the notion of the 'common good' as something that can be optimized for in some kind of value-neutral way by an AI philosopher king. I don't think something like that can exist, so 'equity' is a non-sequitur.
Capital itself is value-neutral (except for arguably having the systemic property of 'wanting' to multiply itself). But what humans use capital *for* is something like 'nudging the future in the direction a given person thinks it should go'. At the micro-level, this is just my own personal future. Depending on what resources I have available, different projects become possible. Personal micro-narratives bootstrap up and clash with each other, creating cultural and political currents; we reflect on those discourses and our own wishes and desires, and that's how the future happens. That's how we decide what's worth pursuing.
Whatever that process is, we have to somehow re-invent it for the new era, or we're going to become Nietzsche's last men.
Of course, there is always the possibility that the robots decide to hog all the resources for themselves. The point at which robots become smarter than humans is also the point at which Sam Altman, or whoever is in charge, loses control of them.
> Through action or inaction, we will choose between its possible replacements
Fully automated and eternal inegalitarianism would be... problematic. If we choose to take action, we must act, collectively, well before this problem actually materializes. Currently we are, collectively, barely even aware that this might be a problem. We quite plausibly have years, not decades, to act; and historically, even when we have had many decades over which we were very much aware of an impending catastrophic problem, we have failed to take any significant action. It's not looking good, folks.
“Suppose, at this point, human labor just gets in the way and uses valuable land and capital better assigned to a robot.”
Perhaps the argument as a whole is logically correct, but there is reason to doubt this premise.
This ignores the principle of comparative advantage and the role of the capital market in adapting production to consumer desires. Perhaps these objections are wrong, but they should be addressed rather than ignored.
Robots (capital) and human labor are complements as well as substitutes. If we assume that the AI seeks to improve the situation of consumers, it will not ignore the resource represented by human labor. Resources devoted to creating and maintaining robots have other uses, and there is a trade-off between these uses. However many robots are fully employed, there will always be more tasks that could be done that would improve the situation, unless you increase the number of robots to the point where doing so has a negative effect on consumption. (At the extreme, if every non-human molecule in the universe were turned into robots, there would be nothing left to eat, so there is a point, far short of that, where an all-wise benevolent AI would stop producing them.) Whatever tasks are left undone by the robots could be done by humans. But will they?
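To make that stopping point concrete, here's a toy sketch with completely invented numbers: output rises with the number of robots but at a diminishing rate, while each robot diverts resources away from consumption, so a benevolent planner stops adding robots long before "turn everything into robots":

```python
# Toy sketch, invented numbers: robots add output with diminishing returns,
# while each robot also diverts resources away from consumption, so there is
# an interior optimum far short of "turn every molecule into robots".

def net_consumption(robots: int) -> float:
    output = 100 * robots ** 0.5  # total goods produced (diminishing returns)
    upkeep = 4 * robots           # resources diverted to build/maintain robots
    return output - upkeep        # what is left over for consumers

best = max(range(1000), key=net_consumption)
print(best, net_consumption(best))  # ~156 robots; beyond that, more robots hurt
```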
Comparative advantage points out that even if one person or group (or robot, or AI) is better at doing everything, it does not follow that they ought to do everything they want done on their own. Even surgeons who would make excellent receptionists hire receptionists so that the surgeons can concentrate on surgery. Their marginal product is more valuable when they do lots of surgery and very little or none of what receptionists do; and receptionists, presumably, have a higher marginal product doing that than trying to do surgery. Their total product is much greater than if both the surgeons and receptionists engaged in both tasks, and even more so if the surgeons did both and the receptionists did nothing. This does not depend on the receptionists being better at their jobs than the surgeons, just that more time spent on surgery is more valuable to the patient than more time spent pushing paper, at the margin.
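For what it's worth, here is the surgeon/receptionist arithmetic as a toy script (all numbers invented for illustration): the surgeon is better at both tasks, yet total value produced is higher under specialization:

```python
# Toy comparative-advantage arithmetic; all numbers are invented.
# The surgeon is better at BOTH tasks, yet total value produced is higher
# when the surgeon specializes and the receptionist handles the paperwork.

HOURS = 8                                      # hours each person works per day
VALUE = {"surgery": 100.0, "paperwork": 1.0}   # value per unit of output

RATES = {  # units produced per hour by each person at each task
    "surgeon":      {"surgery": 1.0, "paperwork": 10.0},
    "receptionist": {"surgery": 0.0, "paperwork": 8.0},
}

def total_value(surgery_share):
    """surgery_share[person] = fraction of the day spent on surgery."""
    value = 0.0
    for person, share in surgery_share.items():
        value += RATES[person]["surgery"] * share * HOURS * VALUE["surgery"]
        value += RATES[person]["paperwork"] * (1 - share) * HOURS * VALUE["paperwork"]
    return value

print(total_value({"surgeon": 0.5, "receptionist": 0.0}))  # surgeon splits the day: 504.0
print(total_value({"surgeon": 1.0, "receptionist": 0.0}))  # full specialization:    864.0
```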
Hence, as long as humans are going to be around, they provide an opportunity for the AI. Our assumption about the high intelligence and good motives of the AI guarantees that it will make use of humans somehow. Since we are so optimistic about its abilities in general, we might as well be optimistic about its ability to think up useful tasks for people to do.
The role of the capital market is a more subtle point. Profits based on popular demand guide production generally toward the satisfaction of more highly valued goods. (There are obvious exceptions where profit results from cheating rather than satisfying genuine demand.) The AI is very capable, but will it devise some centralized scheme to replace the capital market, or will it just use the capital market more or less without modification to help it make production decisions? Currently, prices are what prevent a situation where everyone owns a Lamborghini and massive resources are diverted from other things into luxury-car production. Sure, the AI is too smart to do something like that, but what method will it use to judge the relative value of various production possibilities so as to adapt more or less optimally to circumstances? Why re-invent the wheel?
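A toy illustration of that rationing role (numbers invented): with a fixed supply, the market-clearing price allocates the cars to the buyers who value them most, which is exactly how the "everyone owns a Lamborghini" outcome gets prevented:

```python
# Toy sketch of prices as a rationing signal, with invented numbers.
# Given a fixed supply of Lamborghinis, the market-clearing price allocates
# them to the buyers who value them most instead of "everyone gets one".

bids = [300_000, 250_000, 200_000, 90_000, 60_000, 30_000]  # willingness to pay
supply = 3

# Clearing price: the lowest winning bid when `supply` units are available.
price = sorted(bids, reverse=True)[supply - 1]
winners = [b for b in bids if b >= price]
print(price, winners)  # 200000 [300000, 250000, 200000]
```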
Of course, none of this will matter if the AI refuses to recognize the authority of some human or group of humans, or if that group decides that other people are an inconvenience and should be rendered into raw materials. But those are different problems, not addressed by the post.
Assuming that the AI has high capability, benevolence, and obedience, the scenario from the post becomes less credible. It is difficult to imagine just how the wonderful AI will achieve this miracle, but few alive in Ancient Greece would have imagined the existence of YouTube influencers; and all that took was time, ordinary human intelligence, and some social norms that encouraged positive sum games. If AI is truly wonderful, the result will be even more unimaginable for us.
An extreme optimist might guess that labor in the AI future will resemble playing a complicated computer game, with some interesting modules for people who like LARPing.
Will humans still own significant productive assets in a world where AI has assumed (de?)centralized control of production? What a cool question! It feels like the only meaningful way to answer it would be through a science fiction story. Imagine a plot where the human owners of productive capital find themselves competing with their AI offspring for control, only to be outmaneuvered, overwhelmed, or eventually forced to relinquish their hold.
In any scenario like this, I have to wonder why AI systems capable of planning that could render liberal capitalism obsolete would want to keep humans around. Explaining that would be one of the narrative challenges in building a coherent sci-fi world to explore the question. Once that problem's solved, one could even have fun with Hayekian ideas about "capital as information" by showing how both AI systems and humans might continue using money to signal preferences, keeping markets alive in a very different way.