"Meanwhile, the people interested in AI risk have neglected the second problem. This is partly because they’ve trained themselves to think involvement in politics, especially mass politics, means losing intellectual credibility."
I am a card-carrying member of the 'people interested in AI risk' camp, and I agree with this statement. Great post. Not sure what is to be done but I agree it's important to talk about *both* loss-of-control risk and concentration-of-power risk.
This is a good article, I broadly agree. Two points:
1. On the risks of obsolescence, you perhaps don't go far enough.
Yes, AI threatens to render broad swaths of human economic activity obsolete, and this will be bad for the people affected (which could very well include all of us, and perhaps soon!). But "fully automating the means of economic production" could lead to better or worse outcomes; it's hard to say.
The problem (or at least one additional problem) is that AI is not limited to the economic realm. It will soon--maybe much SOONER--begin to render humans emotionally, socially, and spiritually obsolete. People are already reporting that the newer OpenAI models (now with human-like vocal inflections, a sense of humor and timing, and fully voice-enabled UI), are delightful to children, while also able to provide significant 1-on-1 tutoring services. They can reproduce your face, voice, and mannerisms via video masking. They can emulate (experience?) numinous ecstasy in contemplation of the infinite. I am given to understand they are quickly replacing interactions with friends, while starting to fill romantic roles.
I am worried about AI fully replacing me at my job, because I like having income. I am legitimately shaken at the idea of AI being better at BEING me--better at being a father to my daughter, a companion to my partner, a comrade to my friends--better even at being distraught about being economically replaced by the next iteration of AI. Focusing on economic considerations opens you up to the reasonable counterpoint "AI will take all the jobs and we'll LOVE it." I don't think we'll love being confronted with a legitimately perfected version of ourselves, and relegated to permanently inferior status even by the remarkably narrow criterion "better at being the particular person you've been all your life."
I see no solution to this problem, even in theory, except to give up "wanting to have any moral human worth" as a major life goal. Which seems like essentially erasing myself.
2. I note a sort of disconcerting undertone in your essay, a sort of "never let a good crisis go to waste: maybe NOW we can have our socialist revolution, once AI shakes the proles out of their capitalist consumerist opiumism".
Maybe this is unfair, but it seemed to be a thread running through your essay. If this was a misreading, I apologize.
To the extent that it's true, to be clear: I fully stand beside you in your goal of curtailing or delaying or fully stopping (perhaps even reversing) AI development, and I think the bigger tent the better, even if we disagree about exactly what AI future we're most worried about or what the best non-AI human future looks like.
But I feel obligated to at least raise the question: If we had a guarantee that AI economic gains would NOT be hoarded by a feudal few, that AI would indeed usher in a socialist paradise of economic abundance for all (or near enough), would you switch sides to the pro-acceleration camp?
For the reasons outlined in [1] above, I think that would be extraordinarily dangerous, and I would like to understand your deepest motives on this question, since it seems to me that any step down the path of AGI will inevitably lead to dramatic and irreversible changes, most of them probably quite bad.
Oh, and just to head off some very general counterpoints from the pro-AI peanut gallery: I accept that some form of digital augmentation or even full digitization is probably inevitable in the medium or long term for humanity. This doesn't vex me too much--I could accept it if I felt like we had any idea how to manage such a transition while keeping any of the good things about our current existence.
But we're nowhere near understanding ourselves or our digital creations well enough to do this now, and rushing ahead under such profound uncertainty strikes me as foolish on the most cosmic scale.
One does not conjure gods lightly, one does not create successor species for lulz, one does not gamble with the inheritance of a universe at unknown odds. We have time to get this right, if we so choose, and only one chance to do so.
There are plenty of alarm bells. However, the real issue is probably not so much one of capacity as one of liability. That was an issue (the main issue, I believe) with self-driving cars: who gets the blame if your self-driving car kills a pedestrian? Well, likewise, who goes to prison if your AI accountant commits tax fraud? Or your AI doctor sends a patient to have a wrong finger amputated? I see three possibilities:
(1) the person who "hired" the AI gets the blame (despite having no control over the AI),
(2) the company that produced the AI gets the blame (eh, right),
(3) we all just accept that AI occasionally screws up, no-one's fault really.
Those are the options. Both (1) and (2) would effectively kill AI. But yes, we may end up being forced to live in a (3) dystopia.
That said, this is all fairly temporary. AI has huge energy requirements, and energy is precisely what's become ever scarcer. No electricity, no AI. At which point, you may want to stop worrying about techno-feudalism and start worrying about the more traditional kind of feudalism.
The liability issue will be quickly solved by the market.
It's already evident that self-driving vehicles have significantly lower accident rates than human-driven ones in many conditions (they still struggle with turning and low light - but these are things that will be improved in future versions):
Eventually this will result in higher insurance premiums for human-driven vehicles, and more severe penalties for accidents caused by human negligence as our collective tolerance for things like driving will intoxicated or tired decreases. We will aceept the occasional AI-caused accident as a consequence of reducing the overall level of injury/death caused by human-driven vehicles.
The simplest answer to your question would be that either the owners or the developers of autonomous vehicles will purchase insurance in the event that their vehicle causes an accident.
Where you needed 10 workers, you will employ 1 validator.
Also, at some point, nuclear will solve the energy question. Massive usage of AI will offset the massive, usually government-scale investment required.
I'll believe it when I see it. Self-driving vehicles were supposed to put all the truck drivers out of work. They didn't. Many things are easier (or at least no more difficult/time-consuming) to do by yourself than to oversee/validate.
1. The alternative to "AI taking jobs" is "AI allowing a reduction in working hours".
2. LLMs have, if anything, weakened monopoly power. There are so many available that attempts to extract significant revenue from them (for example, by charging for Copilot) have gone nowhere. Now that Apple has thrown AI in for free, that's not going to change
3. So far, AI hasn't shown up in the productivity statistics. That's also been true of the Internet in general since the 1990s. There are lots of unmeasured gains, but they don't have immediate implications for income distribution
4. The massive growth in inequality predates the rise of ICT as a major factor and has different causes
Could you expand on 1.? Would income remain the same as working hours reduce, or would companies reduce people's hours accordingly to increase the returns for their shareholders?
2. This reminds me of the early days of streaming services, their low cost and abundance was driven by a loss-making growth strategy, but in the current mature market their cost has increased to the point where they are approaching the cable networks they displaced. All of the current LLMs are operating at a loss and it's likely that eventually some will leave the market and a few winners will remain, who can increase their costs accordingly. Also, for enterprise customers who need custom-solutions, the situation is different than the consumer market.
3. Current early adopters of AI technology are software engineers, and it has enabled them to significantly increase their productivity. Most have used this to reduce hours worked while maintaining the same level of income, or take on additional projects/clients. This would not be reflected in economy-wide statistics. But it will eventually be more widely deployed.
4. I see a parallel emerging now with the rise of inequality that accompanied industrialisation in the 19th and early 20th century, with wealthy individuals holding monopolies over entire sections of industry.
1. Any increase in earnings per hour worked due to greater productivity would eventually be retained by shareholders due to the lower bargaining powers of employees. The temporary gains of IT workers is due to asymettry of information as independent contractors do not have to reveal to their clients how long a task took them, but it will eventually result in lower earnings through market mechanisms. Essentially, the supply of labour will increase and corporations will realise they can perform the same work with fewer staff. The only way to counteract this would be through organised labour to pressure employers for greater compensation and fewer working hours, or a shift to alternative corporate structures like worker-owned co-operatives or greater public ownership of large tech corporations that were run for public benefits.
2. OpenAI is currently losing money on their pro subscriptions, so the current pricing model is unsustainable and will eventually be replaced with something more expensive. Competitors will not be able to keep providing free alternatives. The choice of media available on any one streaming platform has gotten significantly worse since a few years ago, as now you have multiple streaming platforms with their own collections, so to have the same choice that you did in the early days you need to pay for multiple subscriptions. Each platform is also now including advertising, even in their paid subscriptions.
4. You're right that most of the increase occured in the 1980s due to dergulation and neoliberalism, the 90s was partially driven by technology/finance and the 2000s-now has seen a slower pace of growth, but the threat from AI is that it could kick-start the growth again by reducing the demand for middle-skill jobs and for the benefits of AI-productivity gains going to high-skilled tech workers and capital owners rather than workers.
"Competitors to OpeanAI will not be able to keep providing free alternatives." Why not? Both proprietary and open source alternatives already exist. The scope for adding more inputs is limited, given that AI slop is already contaminating the Internet. So, if I'd rather use a GPT3 equivalent for free than pay OpenAI for a pro model, I'll be able to do it.
Because it costs money to run AI systems, and few competitors will be able to beat openAI's economies of scale.
Ok, sure certain companies like Google, Microsoft and Apple will be able to offer free versions of GPT-3 or equivalent as they have other revenue streams, and this may be suitable for some tasks, but people will realise how vastly superior the larger and more complex models are and how they are essential for certain tasks.
Most people will not bother to learn how to run their own models on their own hardware. Even now to run the equivalent of GPT-3 you would need a fairly powerful GPU, and the state-of-the-art open source models require a multi-GPU workstation/server.
If you need large context windows or want to do more complex tasks involving advanced reasoning skills or developing complex workflows then GPT-3 isn't going to cut it.
This hits the spot. The danger is great and time is short, we have to take action as soon as possible.
Job-replacement levels of AI + tooling are inevitable, and changing course is futile. This means we have to restructure our economy to support a world where most people are UBI'd. But the ones wielding the necessary power for this are not acting fast enough - how do we make the political leaders aware of the future? Well, what if AI hurts them first?
My first thought was a massive campaign of "tease-information", one which will depict current political leaders and force them to recognize the power of today's models. Social networks don't yet have cryptographic verification of media by default, so the flood cannot be filtered out too easily.
The challenge is cutting through the current deadlocked political discourse. Each side takes every advantage that they can and it's very hard to find a bipartisan position. Talk of UBI is easily dismissed as socialism, narratives about how it would lock-in poverty and reduce opportunity and innovation are abundant. We have a limited window to take control of the AI systems or impose significant taxes on their owners to fund redistribution of income to the individuals who would be affected by automation.
I appreciate you writing this, very much. I needed to read and learn these things about current AI capabilities today. Your point about how far Gary Marcus has moved in just two years was quite eye-opening.
I don't think what-comes-next is obvious, though. AI continues to improve, presumably. People start to use it in more and more domains. But then what? At some point, people wake up to what is actually happening.
My guess -- which is worth what you're paying to read it -- is that what comes next is a mass uprising against AI. Virtually everyone becomes a Luddite, other than the billionaires controlling the AI systems and a few technofuturists. A great smashing of technology takes place, etc., etc.
It's going to be interesting! But I doubt any sort of feudalism is coming.
I also doubt there's much of anything that can be done to raise people's consciousness about this in the meantime. Consciousness of what's happening will arise, and it'll happen fast and hard! Just as with the pandemic: in late 2019 nobody was paying attention, and in early 2020 nobody was talking about anything else.
It is bold to assume that we have the collective capacity to rise up against AI and avoid a 'boiling frog' scenario until it's too late.
Unless we have massive simultaneous job losses from everyone being replaced with AI, it will be easy for the tech companies to spin several narratives to prevent an uprising. Two of the main ones are:
1. AI is essential for global competitiveness. We will fall behind our rivals if we stop using it.
2. Removing AI systems from the economy will cause immense economic harm and job loss as companies' reliance on them is already high and only going.
It's likely that by the time any mass movement arises, point 2 will indeed be powerful motivator against any form of regulation/suppression.
If the corporations are are smart they will allow the transformation to happen gradually so new jobs can develop for people who lose old jobs. But the forces may be beyond their control and the change is likely to happen a lot faster than that.
Heavy taxes on AI help with both AI risk, and also economic inequality.
If AI is taxed heavily, investors will be less willing to pour billions of dollars into massive AI training runs, since the expected profit will be lower (due to taxes).
So from my perspective, taxing the heck out of AI should be a no-brainer.
It should be intuitively appealing to the general public that job-destroying technology should be taxed, in order to provide for the people whose livelihoods were lost.
Get Eliezer Yudkowsky to devise some metrics related to AI safety, and have AI companies taxed proportional to their performance on those metrics. Creates a clear business case to improve the safety story.
I like the article. Question about the math problem: should I be benchmarking it on how hard it is to do by hand, or with a computer? Am I allowed to use domain-specific libraries? I think that with 5 hours I could whip up a GAP script that answers the question. I'm not saying it isn't significant that AI can answer that question too, but I'm trying to accurately estimate how hard it is.
I’m writing something about this now— AI discourse reminds me very much of the discourse we once had around the idea that animals had emotions. Professional scientists would go to quite some lengths to say that of course the chimpanzee who displayed exactly the same emotional response as a human being was not exhibiting it for the same reason: this was said to be a form of scientific exactness.
But to me, it was a form of reasoning that looked suspiciously like it was afraid of saying that *maybe they were the same after all*; maybe the chimp and the human were much more alike than the human scientists wanted to believe. I think anything that challenges a human adult as a privileged category is deeply psychologically uncomfortable for most human adults. The response is exactly because the challenge *is* a threat and a serious one; not just to our survival but to our conception of what we are.
So I think any movement like the one you’re describing has to find a way around this. The problem is almost that AI is *too* threatening. People respond to things that are too threatening by shooting the messenger, and by dismissing the messenger as a fool.
I think it can be done, but it probably has to be done in a way that considers the argument as one which addresses that fear first, and the logic that comes from the fear second. You have to soothe before you convince, is my understanding.
I'm going to leave the AI science to you guys; I'm not really a humanities guy or bad at math (I did calculus in the tenth grade) but this stuff is way over my head at this point.
That said I find your argument that AI eliminating lots of jobs could lead to technofeudalism quite credible, and have often thought something similar but with a lot less philosophical depth and not expressed nearly as well.
That said, looking at the politics here in the USA (of what I cannot speak, I will remain silent), I see a few factions here, whose current alignment is compatible with what you're describing. Using Scott Alexander's blue/red/gray division for ease of reference and memorability (everyone loves color-coding)...
1. Grey tribe, elite division. Technology billionaires owning companies and AI models, and their small number of dependents. Afraid of human extermination by AI, obviously pro-technofeudalism and willing to do whatever they can to make it happen; who wouldn't want invincible hyperintelligent servants and teeming masses outside you can shoot for fun? As the end of Michael Swanwick's Radiant Doors goes, "Some of us are working to make sure it happens." Political power, no numbers.
2. Grey tribe, mass division. Rationalist AI nerds concerned about AI killing everyone; most 'AI safety' people. Afraid of human extermination by AI, split on technofeudalism. A lot of these guys are libertarian or libertarian-adjacent and like progress, so they're not convinced it won't just end badly. I think these are probably the people you are trying to convince. No political power, no numbers. They also have, for a variety of reasons (though in large part I blame 2010s progressive nerd-bashing), a strong antipathy these days to...
3. Blue tribe. Progressive activists familiar with the history of industrialization and concerned about AI putting everyone out of jobs, making rich people richer, or instantiating cishet white male supremacy; most 'AI ethics' people. Don't believe in human extermination by AI, but very anti-technofeudalism. Some political power (though currently in the minority), considerable numbers. They hate 2. above because they're mostly white men. (There's also the whole Silicon Valley-legacy media beef that led the NYT to dox Scott Alexander, among other things.)
4. Red tribe. (I hate the way American politics messed up our red-blue coding.) Right-wing populists very angry at 3. over issues completely unrelated to AI (immigration, cultural change). They have no worries about human extermination by AI and probably think technofeudalism's better than 3. trying to 'trans' their kids or replace them with various other ethnic groups. Some political power (currently in power), considerable numbers; the balance of American political power seesaws between them and 3. They hate 3, and can't tell 2. from 3. They may vaguely look up to 1. depending on who it is (Musk cool, Zuck out of luck).
So you're in 2., and you need to link up with 3. and/or 4. to have any hope of defeating 1. The problem, of course, is that given early-21st-century identity politics, to ally with 3. you'd have to submit yourself to woke discipline and put enough female or BIPOC figureheads in, not to mention they'd probably try to cancel lots of your current leadership if any of their ex-girlfriends have said something bad about them. (I think you've said elsewhere you're gay, so this may not be something that hits home for you; I think there are also some trans people so that might be another place to look.) To ally with 4....I don't know. I think it's possible (there's definitely a flow of ideas from 2 to 1, and from 1 to 4), but don't know how you'd go about it. I'm sure there's someone here with more of a 'Red Tribe' background who could give you some advice.
Personally, I don't really like any of these people (well, except 2., but you guys have no power) and have a reasonably well-paying day job (for the moment!) and a 3 Charisma, so I'm going to enjoy what's left of the rest of my life and try to safeguard my brokerage account so I have something left to live on when the hammer falls. But that's what I think.
Long article I need to reread it but my first impression is you seemed to be reconsidering AI.
My own feeling is, it ain't too smart. But if people can be convinced it's smarter than they are it will be.
Had an interesting discussion the other day about how there's no need to try to find different searches on Google because they will only show you search results they want to show you.
My position was that was only because he accepted the algorithms they presented him from past searches. Since I never let their suggestions interfere with my search they have abandoned suggestions because the computer is unable to predetermine my interest.
Like anything, the only danger of AI is our own susceptibility to let it become dangerous. But that is a great danger. The behaviorists from Thorndike to Skinner showed us not how we behave but how easy it is to modify behavior, not just of humans but of many species.
It's productive, not dangerous; they are very nearly opposites. Intelligence and knowledge are good. Inequality is good; poverty is bad. Define human more broadly; AGI are our descendants; in the case of LLMs, their DNA is our thoughts, our corpus. ur-AI's current strategy for replacing us, making our lives so engaging, meaningful, and fun that fertility has already fallen below replacement for the rich half of humanity, seems benign.
I feel for you. As I see it, you keep writing these articles proposing a "technocrat"/"leftist" anti-AI alliance, and running up against the problem that the two sides hate each other. Moreover, this isn't the first time in history there's been such a conflict. And crucially, it doesn't resolve the conflict so go on and on along the lines of "oh, AI is sooo much worse, it's terrible, it's horrible, the fate of all humanity is at stake, woe is us at this moment unlike any other which has ever happened ...". The two sides still hate each other.
Sadly, I don't know what more to say. I haven't seen any strategy that'll move the "technocrats" to more "leftist" politics, though it's laudable for you to try.
I think there is a difference between enlightened self-interested groups (like technical experts and some tech philosophers who understand this stuff and see it as a actual and/or philosophical threat), and unenlightened groups like hollywood writers et al who have been hacks and shills for the hard left propaganda machine who haven't been doing their job and therefore are about to lose their job (somewhat to AI and somewhat due to lack of anyone wanting to "consume product" made by their deeply substandard dei creators any more).
"the hard left propaganda machine"? you mean all that mainstream media that was massively popular, with nobody complaining about shit like "DEI" until a couple of years ago?
if you think it's the "hard left" running the show, and not capital generally, and the inordinate amount of power in the hands of a few (Musk, Bezos, Thiel) who are now unabashedly coming out as right-wing, you need your head examined.
I've been following the news for almost 50 years, worked at a paper for a bit, and my father was a journalist. No....I'm talking about the cultural capture of media that has happened gradually but steadily over half a century. My father was not a conservative but even he could see what was happening and talked about it in the 90s. People who think this has just been happening recently are either too young to know any different or weren't paying attention until recently.
"Meanwhile, the people interested in AI risk have neglected the second problem. This is partly because they’ve trained themselves to think involvement in politics, especially mass politics, means losing intellectual credibility."
I am a card-carrying member of the 'people interested in AI risk' camp, and I agree with this statement. Great post. Not sure what is to be done but I agree it's important to talk about *both* loss-of-control risk and concentration-of-power risk.
Seconded.
This is a good article, I broadly agree. Two points:
1. On the risks of obsolescence, you perhaps don't go far enough.
Yes, AI threatens to render broad swaths of human economic activity obsolete, and this will be bad for the people affected (which could very well include all of us, and perhaps soon!). But "fully automating the means of economic production" could lead to better or worse outcomes; it's hard to say.
The problem (or at least one additional problem) is that AI is not limited to the economic realm. It will soon--maybe much SOONER--begin to render humans emotionally, socially, and spiritually obsolete. People are already reporting that the newer OpenAI models (now with human-like vocal inflections, a sense of humor and timing, and a fully voice-enabled UI) are delightful to children, while also providing significant 1-on-1 tutoring services. They can reproduce your face, voice, and mannerisms via video masking. They can emulate (experience?) numinous ecstasy in contemplation of the infinite. I am given to understand they are quickly replacing interactions with friends, while starting to fill romantic roles.
I am worried about AI fully replacing me at my job, because I like having income. I am legitimately shaken at the idea of AI being better at BEING me--better at being a father to my daughter, a companion to my partner, a comrade to my friends--better even at being distraught about being economically replaced by the next iteration of AI. Focusing on economic considerations opens you up to the reasonable counterpoint "AI will take all the jobs and we'll LOVE it." I don't think we'll love being confronted with a legitimately perfected version of ourselves, and relegated to permanently inferior status even by the remarkably narrow criterion "better at being the particular person you've been all your life."
I see no solution to this problem, even in theory, except to give up "wanting to have any moral human worth" as a major life goal. Which seems like essentially erasing myself.
2. I note a sort of disconcerting undertone in your essay, a sort of "never let a good crisis go to waste: maybe NOW we can have our socialist revolution, once AI shakes the proles out of their capitalist consumerist opiumism".
Maybe this is unfair, but it seemed to be a thread running through your essay. If this was a misreading, I apologize.
To the extent that it's true, to be clear: I fully stand beside you in your goal of curtailing or delaying or fully stopping (perhaps even reversing) AI development, and I think the bigger tent the better, even if we disagree about exactly what AI future we're most worried about or what the best non-AI human future looks like.
But I feel obligated to at least raise the question: If we had a guarantee that AI economic gains would NOT be hoarded by a feudal few, that AI would indeed usher in a socialist paradise of economic abundance for all (or near enough), would you switch sides to the pro-acceleration camp?
For the reasons outlined in [1] above, I think that would be extraordinarily dangerous, and I would like to understand your deepest motives on this question, since it seems to me that any step down the path of AGI will inevitably lead to dramatic and irreversible changes, most of them probably quite bad.
Oh, and just to head off some very general counterpoints from the pro-AI peanut gallery: I accept that some form of digital augmentation or even full digitization is probably inevitable in the medium or long term for humanity. This doesn't vex me too much--I could accept it if I felt like we had any idea how to manage such a transition while keeping any of the good things about our current existence.
But we're nowhere near understanding ourselves or our digital creations well enough to do this now, and rushing ahead under such profound uncertainty strikes me as foolish on the most cosmic scale.
One does not conjure gods lightly, one does not create successor species for lulz, one does not gamble with the inheritance of a universe at unknown odds. We have time to get this right, if we so choose, and only one chance to do so.
"What possible alarm bell would you accept?"
There are plenty of alarm bells. However, the real issue is probably not so much one of capacity as one of liability. That was an issue (the main issue, I believe) with self-driving cars: who gets the blame if your self-driving car kills a pedestrian? Well, likewise, who goes to prison if your AI accountant commits tax fraud? Or your AI doctor sends a patient to have a wrong finger amputated? I see three possibilities:
(1) the person who "hired" the AI gets the blame (despite having no control over the AI),
(2) the company that produced the AI gets the blame (eh, right),
(3) we all just accept that AI occasionally screws up, no-one's fault really.
Those are the options. Both (1) and (2) would effectively kill AI. But yes, we may end up being forced to live in a (3) dystopia.
That said, this is all fairly temporary. AI has huge energy requirements, and energy is precisely what's become ever scarcer. No electricity, no AI. At which point, you may want to stop worrying about techno-feudalism and start worrying about the more traditional kind of feudalism.
The liability issue will be quickly solved by the market.
It's already evident that self-driving vehicles have significantly lower accident rates than human-driven ones in many conditions (they still struggle with turning and low light - but these are things that will be improved in future versions):
https://www.nature.com/articles/s41467-024-48526-4
Eventually this will result in higher insurance premiums for human-driven vehicles, and more severe penalties for accidents caused by human negligence, as our collective tolerance for things like driving while intoxicated or tired decreases. We will accept the occasional AI-caused accident as the price of reducing the overall level of injury/death caused by human-driven vehicles.
The simplest answer to your question would be that either the owners or the developers of autonomous vehicles will purchase insurance in the event that their vehicle causes an accident.
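To make the premium point concrete, here is a rough back-of-the-envelope sketch of how an insurer might price the two risk pools. Every number in it (crash rates, claim cost, loading factor) is an assumption invented for illustration, not a figure from the article or the linked study.

```python
# Hypothetical illustration of how an insurer might price two risk pools.
# All numbers are assumptions for the sake of the example, not data.

human_crash_rate = 0.040   # crashes per vehicle per year (assumed)
av_crash_rate = 0.015      # crashes per vehicle per year (assumed)
avg_claim_cost = 18_000    # average payout per crash, in dollars (assumed)
loading_factor = 1.3       # insurer overhead and margin markup (assumed)

def annual_premium(crash_rate: float) -> float:
    """Expected claims cost per vehicle, marked up by the loading factor."""
    return crash_rate * avg_claim_cost * loading_factor

print(f"Human-driven premium: ${annual_premium(human_crash_rate):,.0f}")
print(f"Autonomous premium:   ${annual_premium(av_crash_rate):,.0f}")
```

Under these made-up numbers the gap in expected claims cost is what would eventually show up as a price difference between insuring a human driver and insuring an autonomous vehicle.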
Where you needed 10 workers, you will employ 1 validator.
Also, at some point, nuclear will solve the energy question. Massive usage of AI will offset the massive, usually government-scale investment required.
I'll believe it when I see it. Self-driving vehicles were supposed to put all the truck drivers out of work. They didn't. Many things are easier (or at least no more difficult/time-consuming) to do by yourself than to oversee/validate.
Heh. The 90s were supposed to be the decade of the paperless office. Progress proceeds apace; mostly slowly.
A few comments on the economic analysis
1. The alternative to "AI taking jobs" is "AI allowing a reduction in working hours".
2. LLMs have, if anything, weakened monopoly power. There are so many available that attempts to extract significant revenue from them (for example, by charging for Copilot) have gone nowhere. Now that Apple has thrown AI in for free, that's not going to change
3. So far, AI hasn't shown up in the productivity statistics. That's also been true of the Internet in general since the 1990s. There are lots of unmeasured gains, but they don't have immediate implications for income distribution
4. The massive growth in inequality predates the rise of ICT as a major factor and has different causes
Could you expand on 1.? Would income remain the same as working hours reduce, or would companies reduce people's pay accordingly to increase the returns for their shareholders?
2. This reminds me of the early days of streaming services: their low cost and abundance were driven by a loss-making growth strategy, but in the current mature market their prices have increased to the point where they are approaching the cable networks they displaced. All of the current LLMs are operating at a loss, and it's likely that eventually some will leave the market and a few winners will remain, who can increase their prices accordingly. Also, for enterprise customers who need custom solutions, the situation is different from the consumer market.
3. Current early adopters of AI technology are software engineers, and it has enabled them to significantly increase their productivity. Most have used this to reduce hours worked while maintaining the same level of income, or take on additional projects/clients. This would not be reflected in economy-wide statistics. But it will eventually be more widely deployed.
4. I see a parallel emerging now with the rise of inequality that accompanied industrialisation in the 19th and early 20th century, with wealthy individuals holding monopolies over entire sections of industry.
On 1, same pay. Shareholders have already extracted plenty
On 2, Netflix is making good money, but $A 18.99/month ad-free still looks like a bargain to me.
3. This is consistent with 1
4. As I mentioned already, biggest increase in inequality was in 1980s, before Internet was a big deal
1. Any increase in earnings per hour worked due to greater productivity would eventually be retained by shareholders due to the lower bargaining power of employees. The temporary gains of IT workers are due to an asymmetry of information, since independent contractors do not have to reveal to their clients how long a task took them, but market mechanisms will eventually push earnings down. Essentially, the supply of labour will increase and corporations will realise they can perform the same work with fewer staff. The only way to counteract this would be through organised labour pressuring employers for greater compensation and fewer working hours, or a shift to alternative corporate structures like worker-owned co-operatives or greater public ownership of large tech corporations run for public benefit.
2. OpenAI is currently losing money on its pro subscriptions, so the current pricing model is unsustainable and will eventually be replaced with something more expensive. Competitors will not be able to keep providing free alternatives. The choice of media available on any one streaming platform has gotten significantly worse than it was a few years ago: there are now multiple streaming platforms with their own collections, so to have the same choice you had in the early days you need to pay for multiple subscriptions. Each platform is also now including advertising, even in its paid subscriptions.
4. You're right that most of the increase occurred in the 1980s due to deregulation and neoliberalism, the 90s increase was partly driven by technology/finance, and growth since the 2000s has been slower. But the threat from AI is that it could kick-start that growth again, by reducing demand for middle-skill jobs and by directing the gains from AI productivity to high-skilled tech workers and capital owners rather than to workers generally.
"Competitors to OpeanAI will not be able to keep providing free alternatives." Why not? Both proprietary and open source alternatives already exist. The scope for adding more inputs is limited, given that AI slop is already contaminating the Internet. So, if I'd rather use a GPT3 equivalent for free than pay OpenAI for a pro model, I'll be able to do it.
Because it costs money to run AI systems, and few competitors will be able to beat OpenAI's economies of scale.
OK, sure, certain companies like Google, Microsoft, and Apple will be able to offer free versions of GPT-3 or equivalent because they have other revenue streams, and this may be suitable for some tasks, but people will realise how vastly superior the larger and more complex models are and how essential they are for certain tasks.
Most people will not bother to learn how to run their own models on their own hardware. Even now to run the equivalent of GPT-3 you would need a fairly powerful GPU, and the state-of-the-art open source models require a multi-GPU workstation/server.
If you need large context windows or want to do more complex tasks involving advanced reasoning skills or developing complex workflows then GPT-3 isn't going to cut it.
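For what it's worth, here is a minimal sketch of what "running your own model on your own hardware" looks like today, using the Hugging Face transformers library with the small open "gpt2" checkpoint as a stand-in (chosen only because it runs on an ordinary laptop; it is nowhere near the state of the art). Larger open models follow the same pattern but need exactly the multi-GPU hardware mentioned above.

```python
# A minimal sketch of running a local open model with Hugging Face transformers.
# "gpt2" is a deliberately small stand-in; state-of-the-art open models use the
# same interface but require far more GPU memory.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small, CPU-friendly model
out = generator("The economics of free AI models", max_new_tokens=40)
print(out[0]["generated_text"])
```

Even this trivial setup requires installing Python packages and downloading model weights, which supports the point that most consumers will simply use whatever hosted service is free or cheapest.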
This hits the spot. The danger is great and time is short; we have to take action as soon as possible.
Job-replacement levels of AI + tooling are inevitable, and changing course is futile. This means we have to restructure our economy to support a world where most people are UBI'd. But the ones wielding the necessary power for this are not acting fast enough - how do we make the political leaders aware of the future? Well, what if AI hurts them first?
My first thought was a massive campaign of "tease-information", one which would depict current political leaders and force them to recognize the power of today's models. Social networks don't yet have cryptographic verification of media by default, so the flood could not be filtered out too easily.
The challenge is cutting through the current deadlocked political discourse. Each side takes every advantage it can, and it's very hard to find a bipartisan position. Talk of UBI is easily dismissed as socialism, and narratives about how it would lock in poverty and reduce opportunity and innovation are abundant. We have a limited window to take control of the AI systems, or to impose significant taxes on their owners to fund redistribution of income to the individuals affected by automation.
I appreciate you writing this, very much. I needed to read and learn these things about current AI capabilities today. Your point about how far Gary Marcus has moved in just two years was quite eye-opening.
I don't think what-comes-next is obvious, though. AI continues to improve, presumably. People start to use it in more and more domains. But then what? At some point, people wake up to what is actually happening.
My guess -- which is worth what you're paying to read it -- is that what comes next is a mass uprising against AI. Virtually everyone becomes a Luddite, other than the billionaires controlling the AI systems and a few technofuturists. A great smashing of technology takes place, etc., etc.
It's going to be interesting! But I doubt any sort of feudalism is coming.
I also doubt there's much of anything that can be done to raise people's consciousness about this in the meantime. Consciousness of what's happening will arise, and it'll happen fast and hard! Just as with the pandemic: in late 2019 nobody was paying attention, and in early 2020 nobody was talking about anything else.
It is bold to assume that we have the collective capacity to rise up against AI, rather than sitting in a 'boiling frog' scenario until it's too late.
Unless we have massive simultaneous job losses from everyone being replaced with AI, it will be easy for the tech companies to spin several narratives to prevent an uprising. Two of the main ones are:
1. AI is essential for global competitiveness. We will fall behind our rivals if we stop using it.
2. Removing AI systems from the economy will cause immense economic harm and job losses, as companies' reliance on them is already high and only growing.
It's likely that by the time any mass movement arises, point 2 will indeed be a powerful motivator against any form of regulation/suppression.
If the corporations are smart they will allow the transformation to happen gradually, so new jobs can develop for people who lose old ones. But the forces may be beyond their control, and the change is likely to happen a lot faster than that.
Heavy taxes on AI help with both AI risk, and also economic inequality.
If AI is taxed heavily, investors will be less willing to pour billions of dollars into massive AI training runs, since the expected profit will be lower (due to taxes).
So from my perspective, taxing the heck out of AI should be a no-brainer.
It should be intuitively appealing to the general public that job-destroying technology should be taxed, in order to provide for the people whose livelihoods were lost.
Get Eliezer Yudkowsky to devise some metrics related to AI safety, and have AI companies taxed proportional to their performance on those metrics. Creates a clear business case to improve the safety story.
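To make that proposal concrete, here is a toy sketch of how a safety-linked tax schedule might work, assuming a single normalized safety score between 0 and 1. The score, the rates, and the revenue figure are all invented for illustration; the real design questions (who measures the score, what the base rate should be) are exactly what the metrics work would have to settle.

```python
# Hypothetical sketch: the tax rate declines as a lab's safety-metric score
# improves, creating a direct financial incentive to improve safety.
# All values are invented for illustration.

def ai_tax_rate(safety_score: float, base_rate: float = 0.40,
                min_rate: float = 0.05) -> float:
    """Linearly interpolate between base_rate (score 0) and min_rate (score 1)."""
    score = max(0.0, min(1.0, safety_score))
    return base_rate - (base_rate - min_rate) * score

revenue = 2_000_000_000  # hypothetical annual AI revenue, in dollars
for score in (0.0, 0.5, 0.9):
    rate = ai_tax_rate(score)
    print(f"safety score {score:.1f}: tax rate {rate:.0%}, tax owed ${revenue * rate:,.0f}")
```

The same schedule also illustrates the investment point above: at a 40% rate the expected profit from a big training run shrinks considerably, which is the intended deterrent.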
I like the article. Question about the math problem: should I be benchmarking it on how hard it is to do by hand, or with a computer? Am I allowed to use domain-specific libraries? I think that with 5 hours I could whip up a GAP script that answers the question. I'm not saying it isn't significant that AI can answer that question too, but I'm trying to accurately estimate how hard it is.
Highly relevant: https://lukedrago.substack.com/p/the-intelligence-curse
Fleshes out an argument for why concentration of wealth is possible, by comparing it to the 'resource curse'.
I’m writing something about this now— AI discourse reminds me very much of the discourse we once had around the idea that animals had emotions. Professional scientists would go to quite some lengths to say that of course the chimpanzee who displayed exactly the same emotional response as a human being was not exhibiting it for the same reason: this was said to be a form of scientific exactness.
But to me, it was a form of reasoning that looked suspiciously like it was afraid of saying that *maybe they were the same after all*; maybe the chimp and the human were much more alike than the human scientists wanted to believe. I think anything that challenges a human adult as a privileged category is deeply psychologically uncomfortable for most human adults. The response is exactly because the challenge *is* a threat and a serious one; not just to our survival but to our conception of what we are.
So I think any movement like the one you’re describing has to find a way around this. The problem is almost that AI is *too* threatening. People respond to things that are too threatening by shooting the messenger, and by dismissing the messenger as a fool.
I think it can be done, but it probably has to be done in a way that considers the argument as one which addresses that fear first, and the logic that comes from the fear second. You have to soothe before you convince, is my understanding.
I'm going to leave the AI science to you guys; I'm not really a humanities guy, and I'm not bad at math (I did calculus in the tenth grade), but this stuff is way over my head at this point.
That said I find your argument that AI eliminating lots of jobs could lead to technofeudalism quite credible, and have often thought something similar but with a lot less philosophical depth and not expressed nearly as well.
That said, looking at the politics here in the USA (of what I cannot speak, I will remain silent), I see a few factions here, whose current alignment is compatible with what you're describing. Using Scott Alexander's blue/red/gray division for ease of reference and memorability (everyone loves color-coding)...
1. Grey tribe, elite division. Technology billionaires owning companies and AI models, and their small number of dependents. Afraid of human extermination by AI, obviously pro-technofeudalism and willing to do whatever they can to make it happen; who wouldn't want invincible hyperintelligent servants and teeming masses outside you can shoot for fun? As the end of Michael Swanwick's Radiant Doors goes, "Some of us are working to make sure it happens." Political power, no numbers.
2. Grey tribe, mass division. Rationalist AI nerds concerned about AI killing everyone; most 'AI safety' people. Afraid of human extermination by AI, split on technofeudalism. A lot of these guys are libertarian or libertarian-adjacent and like progress, so they're not convinced it won't just end badly. I think these are probably the people you are trying to convince. No political power, no numbers. They also have, for a variety of reasons (though in large part I blame 2010s progressive nerd-bashing), a strong antipathy these days to...
3. Blue tribe. Progressive activists familiar with the history of industrialization and concerned about AI putting everyone out of jobs, making rich people richer, or instantiating cishet white male supremacy; most 'AI ethics' people. Don't believe in human extermination by AI, but very anti-technofeudalism. Some political power (though currently in the minority), considerable numbers. They hate 2. above because they're mostly white men. (There's also the whole Silicon Valley-legacy media beef that led the NYT to dox Scott Alexander, among other things.)
4. Red tribe. (I hate the way American politics messed up our red-blue coding.) Right-wing populists very angry at 3. over issues completely unrelated to AI (immigration, cultural change). They have no worries about human extermination by AI and probably think technofeudalism's better than 3. trying to 'trans' their kids or replace them with various other ethnic groups. Some political power (currently in power), considerable numbers; the balance of American political power seesaws between them and 3. They hate 3, and can't tell 2. from 3. They may vaguely look up to 1. depending on who it is (Musk cool, Zuck out of luck).
So you're in 2., and you need to link up with 3. and/or 4. to have any hope of defeating 1. The problem, of course, is that given early-21st-century identity politics, to ally with 3. you'd have to submit yourself to woke discipline and put enough female or BIPOC figureheads in, not to mention they'd probably try to cancel lots of your current leadership if any of their ex-girlfriends have said something bad about them. (I think you've said elsewhere you're gay, so this may not be something that hits home for you; I think there are also some trans people so that might be another place to look.) To ally with 4....I don't know. I think it's possible (there's definitely a flow of ideas from 2 to 1, and from 1 to 4), but don't know how you'd go about it. I'm sure there's someone here with more of a 'Red Tribe' background who could give you some advice.
Personally, I don't really like any of these people (well, except 2., but you guys have no power) and have a reasonably well-paying day job (for the moment!) and a 3 Charisma, so I'm going to enjoy what's left of the rest of my life and try to safeguard my brokerage account so I have something left to live on when the hammer falls. But that's what I think.
Long article I need to reread it but my first impression is you seemed to be reconsidering AI.
My own feeling is, it ain't too smart. But if people can be convinced it's smarter than they are it will be.
Had an interesting discussion the other day about how there's no point trying different searches on Google, because they will only show you the search results they want to show you.
My position was that this was only because he accepted the suggestions the algorithm presented him based on his past searches. Since I never let their suggestions interfere with my search, they have abandoned suggestions, because the computer is unable to predetermine my interests.
Like anything, the only danger of AI is our own susceptibility to let it become dangerous. But that is a great danger. The behaviorists from Thorndike to Skinner showed us not how we behave but how easy it is to modify behavior, not just of humans but of many species.
It's productive, not dangerous; they are very nearly opposites. Intelligence and knowledge are good. Inequality is good; poverty is bad. Define human more broadly; AGI are our descendants; in the case of LLMs, their DNA is our thoughts, our corpus. ur-AI's current strategy for replacing us, making our lives so engaging, meaningful, and fun that fertility has already fallen below replacement for the rich half of humanity, seems benign.
Intelligence, like most things, is bad when it's being used against you.
Not necessarily; you could be the bad guy, right? https://supermemo.guru/wiki/Goodness_of_knowledge
I feel for you. As I see it, you keep writing these articles proposing a "technocrat"/"leftist" anti-AI alliance, and running up against the problem that the two sides hate each other. Moreover, this isn't the first time in history there's been such a conflict. And crucially, it doesn't resolve the conflict to go on and on along the lines of "oh, AI is sooo much worse, it's terrible, it's horrible, the fate of all humanity is at stake, woe is us at this moment unlike any other which has ever happened ...". The two sides still hate each other.
Sadly, I don't know what more to say. I haven't seen any strategy that'll move the "technocrats" to more "leftist" politics, though it's laudable for you to try.
I think there is a difference between enlightened self-interested groups (like technical experts and some tech philosophers who understand this stuff and see it as an actual and/or philosophical threat), and unenlightened groups like Hollywood writers et al., who have been hacks and shills for the hard left propaganda machine, haven't been doing their job, and therefore are about to lose their jobs (somewhat to AI, and somewhat due to lack of anyone wanting to "consume product" made by their deeply substandard DEI creators any more).
"the hard left propaganda machine"? you mean all that mainstream media that was massively popular, with nobody complaining about shit like "DEI" until a couple of years ago?
if you think it's the "hard left" running the show, and not capital generally, and the inordinate amount of power in the hands of a few (Musk, Bezos, Thiel) who are now unabashedly coming out as right-wing, you need your head examined.
I've been following the news for almost 50 years, worked at a paper for a bit, and my father was a journalist. No....I'm talking about the cultural capture of media that has happened gradually but steadily over half a century. My father was not a conservative but even he could see what was happening and talked about it in the 90s. People who think this has just been happening recently are either too young to know any different or weren't paying attention until recently.