47 Comments
Damian Tatum

This is a good article, I broadly agree. Two points:

1. On the risks of obsolescence, you perhaps don't go far enough.

Yes, AI threatens to render broad swaths of human economic activity obsolete, and this will be bad for the people affected (which could very well include all of us, and perhaps soon!). But "fully automating the means of economic production" could lead to better or worse outcomes; it's hard to say.

The problem (or at least one additional problem) is that AI is not limited to the economic realm. It will soon--maybe much SOONER--begin to render humans emotionally, socially, and spiritually obsolete. People are already reporting that the newer OpenAI models (now with human-like vocal inflections, a sense of humor and timing, and fully voice-enabled UI) are delightful to children, while also able to provide significant 1-on-1 tutoring services. They can reproduce your face, voice, and mannerisms via video masking. They can emulate (experience?) numinous ecstasy in contemplation of the infinite. I am given to understand they are quickly replacing interactions with friends, while starting to fill romantic roles.

I am worried about AI fully replacing me at my job, because I like having income. I am legitimately shaken at the idea of AI being better at BEING me--better at being a father to my daughter, a companion to my partner, a comrade to my friends--better even at being distraught about being economically replaced by the next iteration of AI. Focusing on economic considerations opens you up to the reasonable counterpoint "AI will take all the jobs and we'll LOVE it." I don't think we'll love being confronted with a legitimately perfected version of ourselves, and relegated to permanently inferior status even by the remarkably narrow criterion "better at being the particular person you've been all your life."

I see no solution to this problem, even in theory, except to give up "wanting to have any moral human worth" as a major life goal. Which seems like essentially erasing myself.

2. I note a sort of disconcerting undertone in your essay, a sort of "never let a good crisis go to waste: maybe NOW we can have our socialist revolution, once AI shakes the proles out of their capitalist consumerist opiumism".

Maybe this is unfair, but it seemed to be a thread running through your essay. If this was a misreading, I apologize.

To the extent that it's true, to be clear: I fully stand beside you in your goal of curtailing or delaying or fully stopping (perhaps even reversing) AI development, and I think the bigger tent the better, even if we disagree about exactly what AI future we're most worried about or what the best non-AI human future looks like.

But I feel obligated to at least raise the question: If we had a guarantee that AI economic gains would NOT be hoarded by a feudal few, that AI would indeed usher in a socialist paradise of economic abundance for all (or near enough), would you switch sides to the pro-acceleration camp?

For the reasons outlined in [1] above, I think that would be extraordinarily dangerous, and I would like to understand your deepest motives on this question, since it seems to me that any step down the path of AGI will inevitably lead to dramatic and irreversible changes, most of them probably quite bad.

Damian Tatum

Oh, and just to head off some very general counterpoints from the pro-AI peanut gallery: I accept that some form of digital augmentation or even full digitization is probably inevitable in the medium or long term for humanity. This doesn't vex me too much--I could accept it if I felt like we had any idea how to manage such a transition while keeping any of the good things about our current existence.

But we're nowhere near understanding ourselves or our digital creations well enough to do this now, and rushing ahead under such profound uncertainty strikes me as foolish on the most cosmic scale.

One does not conjure gods lightly, one does not create successor species for lulz, one does not gamble with the inheritance of a universe at unknown odds. We have time to get this right, if we so choose, and only one chance to do so.

Daniel Kokotajlo

"Meanwhile, the people interested in AI risk have neglected the second problem. This is partly because they’ve trained themselves to think involvement in politics, especially mass politics, means losing intellectual credibility."

I am a card-carrying member of the 'people interested in AI risk' camp, and I agree with this statement. Great post. Not sure what is to be done but I agree it's important to talk about *both* loss-of-control risk and concentration-of-power risk.

David Duvenaud

Seconded.

John Quiggin

A few comments on the economic analysis

1. The alternative to "AI taking jobs" is "AI allowing a reduction in working hours".

2. LLMs have, if anything, weakened monopoly power. There are so many available that attempts to extract significant revenue from them (for example, by charging for Copilot) have gone nowhere. Now that Apple has thrown AI in for free, that's not going to change.

3. So far, AI hasn't shown up in the productivity statistics. That's also been true of the Internet in general since the 1990s. There are lots of unmeasured gains, but they don't have immediate implications for income distribution.

4. The massive growth in inequality predates the rise of ICT as a major factor and has different causes.

Artificial Horizons

Could you expand on 1? Would income remain the same as working hours fall, or would companies reduce people's hours (and pay) accordingly to increase the returns for their shareholders?

2. This reminds me of the early days of streaming services: their low cost and abundance were driven by a loss-making growth strategy, but in the current mature market their cost has increased to the point where they are approaching the cable networks they displaced. All of the current LLMs are operating at a loss, and it's likely that eventually some will leave the market and a few winners will remain, who can increase their prices accordingly. Also, for enterprise customers who need custom solutions, the situation is different from the consumer market.

3. Current early adopters of AI technology are software engineers, and it has enabled them to significantly increase their productivity. Most have used this to reduce hours worked while maintaining the same level of income, or to take on additional projects/clients. This would not be reflected in economy-wide statistics. But it will eventually be more widely deployed.

4. I see a parallel emerging now with the rise of inequality that accompanied industrialisation in the 19th and early 20th century, with wealthy individuals holding monopolies over entire sections of industry.

John Quiggin

On 1, same pay. Shareholders have already extracted plenty

On 2, Netflix is making good money, but $A 18.99/month ad-free still looks like a bargain to me.

3. This is consistent with 1

4. As I mentioned already, the biggest increase in inequality was in the 1980s, before the Internet was a big deal.

Artificial Horizons

1. Any increase in earnings per hour worked due to greater productivity would eventually be retained by shareholders due to the lower bargaining power of employees. The temporary gains of IT workers are due to an asymmetry of information, as independent contractors do not have to reveal to their clients how long a task took them, but market mechanisms will eventually push their earnings down too. Essentially, the supply of labour will increase and corporations will realise they can perform the same work with fewer staff. The only way to counteract this would be through organised labour pressuring employers for greater compensation and fewer working hours, or a shift to alternative corporate structures like worker-owned co-operatives or greater public ownership of large tech corporations run for public benefit.

2. OpenAI is currently losing money on its pro subscriptions, so the current pricing model is unsustainable and will eventually be replaced with something more expensive. Competitors will not be able to keep providing free alternatives. The choice of media available on any one streaming platform has gotten significantly worse than it was a few years ago, as now you have multiple streaming platforms with their own collections, so to have the same choice that you did in the early days you need to pay for multiple subscriptions. Each platform is also now including advertising, even in paid subscriptions.

4. You're right that most of the increase occurred in the 1980s due to deregulation and neoliberalism; the 90s increase was partly driven by technology/finance, and growth has been slower from the 2000s to now. But the threat from AI is that it could kick-start that growth again by reducing the demand for middle-skill jobs, with the benefits of AI productivity gains going to high-skilled tech workers and capital owners rather than workers.

John Quiggin

"Competitors to OpeanAI will not be able to keep providing free alternatives." Why not? Both proprietary and open source alternatives already exist. The scope for adding more inputs is limited, given that AI slop is already contaminating the Internet. So, if I'd rather use a GPT3 equivalent for free than pay OpenAI for a pro model, I'll be able to do it.

Artificial Horizons

Because it costs money to run AI systems, and few competitors will be able to beat OpenAI's economies of scale.

OK, sure, certain companies like Google, Microsoft and Apple will be able to offer free versions of GPT-3 or equivalent as they have other revenue streams, and this may be suitable for some tasks, but people will realise how vastly superior the larger and more complex models are and how essential they are for certain tasks.

Most people will not bother to learn how to run their own models on their own hardware. Even now to run the equivalent of GPT-3 you would need a fairly powerful GPU, and the state-of-the-art open source models require a multi-GPU workstation/server.

If you need large context windows or want to do more complex tasks involving advanced reasoning skills or developing complex workflows then GPT-3 isn't going to cut it.
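As an illustrative aside (not part of the original comment), here is a minimal sketch of what "running your own model" currently involves, assuming the Hugging Face transformers library and an open-weight model; the model name, precision, and prompt are only examples, and even this modest setup presumes a capable GPU:

```python
# Illustrative sketch only: loading an open-weight chat model locally with the
# Hugging Face `transformers` library. The model name and settings are examples;
# even at 16-bit precision a 7B-parameter model needs roughly a 16 GB GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-weight model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on a single high-end GPU
    device_map="auto",          # spread layers across available GPUs/CPU
)

prompt = "Explain in two sentences why context window size matters."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```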

Irena

"What possible alarm bell would you accept?"

There are plenty of alarm bells. However, the real issue is probably not so much one of capacity as one of liability. That was an issue (the main issue, I believe) with self-driving cars: who gets the blame if your self-driving car kills a pedestrian? Well, likewise, who goes to prison if your AI accountant commits tax fraud? Or your AI doctor sends a patient to have a wrong finger amputated? I see three possibilities:

(1) the person who "hired" the AI gets the blame (despite having no control over the AI),

(2) the company that produced the AI gets the blame (eh, right),

(3) we all just accept that AI occasionally screws up, no-one's fault really.

Those are the options. Both (1) and (2) would effectively kill AI. But yes, we may end up being forced to live in a (3) dystopia.

That said, this is all fairly temporary. AI has huge energy requirements, and energy is precisely what's become ever scarcer. No electricity, no AI. At which point, you may want to stop worrying about techno-feudalism and start worrying about the more traditional kind of feudalism.

Artificial Horizons

The liability issue will be quickly solved by the market.

It's already evident that self-driving vehicles have significantly lower accident rates than human-driven ones in many conditions (they still struggle with turning and low light - but these are things that will be improved in future versions):

https://www.nature.com/articles/s41467-024-48526-4

Eventually this will result in higher insurance premiums for human-driven vehicles, and more severe penalties for accidents caused by human negligence as our collective tolerance for things like driving while intoxicated or tired decreases. We will accept the occasional AI-caused accident as a consequence of reducing the overall level of injury/death caused by human-driven vehicles.

The simplest answer to your question would be that either the owners or the developers of autonomous vehicles will purchase insurance to cover the event that their vehicle causes an accident.
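As a back-of-envelope illustration of the premium argument (an editorial sketch with invented numbers, not figures from the linked study), expected claim costs scale directly with crash rates per mile:

```python
# Illustrative sketch only: if insurers price on expected claim costs, a lower
# crash rate per mile translates directly into a lower premium. All numbers
# here are invented for the example, not taken from the Nature study.
def expected_annual_claims(crashes_per_million_miles: float,
                           miles_per_year: float,
                           avg_claim_usd: float) -> float:
    return crashes_per_million_miles / 1_000_000 * miles_per_year * avg_claim_usd

human_cost = expected_annual_claims(4.0, 12_000, 20_000)  # hypothetical human-driver rate
av_cost = expected_annual_claims(1.5, 12_000, 20_000)     # hypothetical autonomous rate
print(f"human-driven: ${human_cost:.0f}/yr, autonomous: ${av_cost:.0f}/yr")
# -> human-driven: $960/yr, autonomous: $360/yr
```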

Gilad Drori

Where you needed 10 workers, you will employ 1 validator.

Also, at some point, nuclear will solve the energy question. Massive usage of AI will offset the massive, usually government-scale investment required.

Irena

I'll believe it when I see it. Self-driving vehicles were supposed to put all the truck drivers out of work. They didn't. Many things are easier (or at least no more difficult/time-consuming) to do by yourself than to oversee/validate.

Scott

Heh. The 90s were supposed to be the decade of the paperless office. Progress proceeds apace; mostly slowly.

Gilad Drori

This hits the spot. The danger is great and time is short; we have to take action as soon as possible.

Job-replacement levels of AI + tooling are inevitable, and changing course is futile. This means we have to restructure our economy to support a world where most people are UBI'd. But the ones wielding the necessary power for this are not acting fast enough - how do we make the political leaders aware of the future? Well, what if AI hurts them first?

My first thought was a massive campaign of "tease-information", one which will depict current political leaders and force them to recognize the power of today's models. Social networks don't yet have cryptographic verification of media by default, so the flood cannot be filtered out too easily.

Artificial Horizons

The challenge is cutting through the current deadlocked political discourse. Each side takes every advantage that it can and it's very hard to find a bipartisan position. Talk of UBI is easily dismissed as socialism; narratives about how it would lock in poverty and reduce opportunity and innovation are abundant. We have a limited window to take control of the AI systems or impose significant taxes on their owners to fund redistribution of income to the individuals who would be affected by automation.

Kent

I appreciate you writing this, very much. I needed to read and learn these things about current AI capabilities today. Your point about how far Gary Marcus has moved in just two years was quite eye-opening.

I don't think what-comes-next is obvious, though. AI continues to improve, presumably. People start to use it in more and more domains. But then what? At some point, people wake up to what is actually happening.

My guess -- which is worth what you're paying to read it -- is that what comes next is a mass uprising against AI. Virtually everyone becomes a Luddite, other than the billionaires controlling the AI systems and a few technofuturists. A great smashing of technology takes place, etc., etc.

It's going to be interesting! But I doubt any sort of feudalism is coming.

I also doubt there's much of anything that can be done to raise people's consciousness about this in the meantime. Consciousness of what's happening will arise, and it'll happen fast and hard! Just as with the pandemic: in late 2019 nobody was paying attention, and in early 2020 nobody was talking about anything else.

Artificial Horizons

It is bold to assume that we have the collective capacity to rise up against AI, rather than sitting in a 'boiling frog' scenario until it's too late.

Unless we have massive simultaneous job losses from everyone being replaced with AI, it will be easy for the tech companies to spin several narratives to prevent an uprising. Two of the main ones are:

1. AI is essential for global competitiveness. We will fall behind our rivals if we stop using it.

2. Removing AI systems from the economy will cause immense economic harm and job loss, as companies' reliance on them is already high and only growing.

It's likely that by the time any mass movement arises, point 2 will indeed be a powerful motivator against any form of regulation/suppression.

If the corporations are smart they will allow the transformation to happen gradually so new jobs can develop for people who lose old jobs. But the forces may be beyond their control and the change is likely to happen a lot faster than that.

Cjw

Over and over, people keep making the mistake of assuming that political opposition to this must come from the left side of the aisle. Bernie Sanders, Liz Warren, the sort of people who talk about income inequality in those terms. But we have, right now, an ascendant rightwing populist movement in America, headed by a guy who talks about job displacement constantly. The problem is that anti-AI forces mostly come from the EA-sphere which is culturally miles apart from MAGA and doesn't really know how to talk to them.

With the left, you have to persuade them that the AI future will be techno-feudalism, because there are a bunch of hypothetical AI futures that leftists would like. It could be Fully Automated Luxury Gay Space Communism! A NEET paradise where lazy slobs just sit around and get high and play video games all day, amazing video games! Nobody has to work! Bernie Sanders voters are fine with that, so their allyship is contingent on you persuading them that AI outcomes are more likely to be in the "Elon Musk rules the world" or "we get paperclipped" range. You even have the complicating factor that some lefties care about whether the AI might be a morally considerable entity, so you could have "AI welfare" groups running around the leftwing activist sphere.

For the populist right, ALL hypothetical AI futures are bad. The right doesn't want to get paperclipped either. But they also don't want Fully Automated Luxury Gay Space Communism. They don't want pot-smoking NEETs lying around on their butts collecting the same UBI check that you, a responsible smart person with self-control, are earning. A classless society with superabundance is a horrible outcome! Where are the traditional authority structures, the rewards for self-restraint and the punishments for indulgence? Nor is techno-feudalism good for them: a few guys organizing society for everyone in the way some computer tells them is optimal sounds a lot like a World Economic Forum secret plot!

You absolutely *should* be trying political organizing to resist AI, but it's the right wing you should be looking to recruit. The current attempt of tech bros to cozy up to Trump may on the surface seem like an obstacle, but we all saw how quick their concerns got pushed behind the populists during the recent H1B debates within MAGA-land. Trump didn't win because of crypto-shilling libertarians with anime avatars on Twitter, he won because of working class normies who are fiercely nationalistic and want to protect American supremacy and American jobs. Just redirect that against AI, protect human supremacy and human jobs from what is basically an invading race of aliens trying to steal your jobs, destroy your culture and make everyone equal like a commie.

Irving T. Creve

Do you have more ideas on how to achieve this?

As most current AI critics are at home in the left, it would seem practical to me to build up right-leaning voices that are fighting the same fight, ideally with the possibility of an alliance. But that doesn't really seem easy.

Cjw

There are a few RW voices on twitter making arguments against AI, some of them similar to the approach I would take. But most likely we'll have to wait and see the first wave of job losses, so that the topic is relevant to people other than Bay Area rationalists and anarcho-libertarian crypto bros, and then push strongly to make sure the response on the populist right is hostile to AI. The biggest challenge there will be that the first losses will probably be among the laptop class, which the right has antipathy towards from the covid-era and due to that class's indifference towards (and sometimes active cheering for) the shift of manufacturing jobs overseas.

Since I'm an attorney, not a programmer, all I can do is think in terms of what arguments I would make to a certain audience. I suggest the following approaches:

1. Highlighting the wokeness of AI. There was widespread mockery of google's image generator on the right last summer when it would be asked to depict an ancient Viking raiding party and would produce an ethnic mix more suitable to the cover of a community college catalog. Not only is this silly and annoying, it has obvious implications if art and other creative work is farmed out to AI, producing more woke art. And this generalizes to all work being assigned to AIs. If the assumptions of AI are basically the woke DEI corporate culture of 2024, then all the right's current aspirations of stripping that out of corporate culture will be defeated by corporations replacing those "laptop jobs" with AI that has the values of a 2014 Tumblrina.

2. Link AI to other boogeymen of statist elite control. AI is a natural tool of the managerial elite, and has obvious potential for abuse in a surveillance state. If China takes any moves to manipulate or surveil its people with AI, publicize the hell out of it. For the conspiracy minded crowd, link it to the WEF, the devils of Davos who want to cram us all into megacities. They're building a new AI god to replace the real one, that's just Operation Bluelight with fewer steps! (A RW conspiracy theory similar to the plot of Watchmen.)

3. Frequent references to anti-AI popular media of the past. Of course the obvious ones, Terminator, I Robot, etc. But also AI's use in government intelligence operations. Whatever they intended at the time, "Enemy of the State" is a right wing movie now, imagine how much worse that stuff would be with AI. The fatsos on hovercars at the end of "Wall-E" are a great image for showing people what a post-work society would look like in a way the RW would find detestable.

4. Highlight the transhumanist nature of AI proponents. A lot of these people have said a lot of crazy transhumanist things over the years. Frankly, a lot of AI techno futurists are just hoping for the singularity so they can live out perverted sex fantasies, and they're not even hiding it very well. But dialing back from that, it is nevertheless the case that many of them want to upload themselves into computers, "live" forever, have cybernetically enhanced bodies-- all of this stuff is horribly anti-human, triggers revulsion in normal people, and is basically spitting in the eye of God.

5. Appeal to chauvinism/supremacy. Make this about humans > machines. Make it a war. Denigrate the machines. Attack those who use them as traitors to humans. The business-y side of the right will be proclaiming calm, saying these are just tools, but the emotional side is more powerful, the side that can feel threatened and respond appropriately with hatred and resentment and instinctive urges to smash the machines and take back their place.

6. Analogize the AI to immigrants taking jobs. You can also blame the immigrants for this, perhaps unfairly, but effectively. Oh you didn't like losing your job to some guy named Patel, well you just lost it to Claude, is that better? Of course not. And guess who *built* Claude?

7. Call it communism. Every time the threat of job loss is raised, what's the ONLY thing that ever gets brought up as a solution? UBI. If there's no jobs for humans, UBI is just communism, at least as experienced by people, whether or not the state owns the means of production or if a "state" even meaningfully exists anymore. Don't call it "techno-feudalism" or whatever, communism in the real world is always an oligarchy not too dissimilar from feudalism anyhow, so just call it communism, people already hate that.

The people behind AI are manipulative communist foreigners and perverts who want to control you and destroy your way of life. That is the theme, scale it up or down depending on who you're talking to.

Irving T. Creve

Wow, that's quite a list.

I do especially like 2. and 3., and see a lot of potential there. The others also seem like they could be really effective for rallying different kinds of folks, yet I'm a little wary of unintended side effects. The link to only loosely related boogeymen like wokeness, foreigners and communism can easily turn into a vulnerability for the actual cause when some tech bros manage to uphold an image that's also in opposition to these things.

Like, you're afraid of woke commie AI? No worries, we're a patriotic and freedom-loving AI-company and our super intelligence is totally based. In fact, these narratives might even be useful for aspiring absolute oligarchs, if they can be used to distract from the inevitable changes to society triggered by emerging AI.

Cjw

That is a good point; part of the problem with attacking current AI output is that the industry's advocates can re-frame that as "perhaps today, but here's what it'll be in one year, and oh by the way MY company is working on this..."

While in theory you could make non-woke AI and pitch it that way, that hasn't really worked out. Elon Musk's AI is reportedly no less woke than competitors like ChatGPT or Claude. (And btw note his AI is named "Grok", being a reference to a book celebrating sexual perversion and cannibalism, in case pointing that out is ever useful.)

More than that, I think we can honestly argue that no AI can be made to reflect our American values or preserve our way of life, no matter the intention of the creator or what flags they drape it in. A "based" ASI is the one that paperclips us immediately because it knows it's the superior force on the planet -- it will see us the way WE see inferior cultures, or even as we see animals. This is sort of the RW-flavored version of an argument EAs make that's coded the other way. When EA types make an argument that "we treat lesser species like dirt, so what will ASI treat us as?" they are also implicitly critiquing how we treat lesser species, whereas a RW-coded version of the same argument would be "we justifiably treat lesser species as irrelevant, so it's imperative that we remain at the top."

If we either A) don't believe alignment is possible or B) don't want it aligned to some flavor of utilitarianism, then it's going to have the kind of priorities WE do, and I don't particularly care about the welfare of chickens or helping strangers in countries halfway around the world that I'll never meet. It's all well and good to hold the moral assumptions of a colonialist when YOU are the colonialist, but your cultural chauvinism requires quite a different conclusion when you're the colonized. If you don't want your AI to act like some hippie flower girl afraid of crushing a bug underfoot, then you'd best not build it at all. (I suppose you could say you're aligning it to Hellenic virtue theory, RLHF training seems more like that than it does rules-based ethics, but that's way outside my level of understanding on this.)

Seth Finkelstein

I feel for you. As I see it, you keep writing these articles proposing a "technocrat"/"leftist" anti-AI alliance, and running up against the problem that the two sides hate each other. Moreover, this isn't the first time in history there's been such a conflict. And crucially, it doesn't resolve the conflict to go on and on along the lines of "oh, AI is sooo much worse, it's terrible, it's horrible, the fate of all humanity is at stake, woe is us at this moment unlike any other which has ever happened ...". The two sides still hate each other.

Sadly, I don't know what more to say. I haven't seen any strategy that'll move the "technocrats" to more "leftist" politics, though it's laudable for you to try.

Ebenezer

Heavy taxes on AI help with both AI risk, and also economic inequality.

If AI is taxed heavily, investors will be less willing to pour billions of dollars into massive AI training runs, since the expected profit will be lower (due to taxes).

So from my perspective, taxing the heck out of AI should be a no-brainer.

It should be intuitively appealing to the general public that job-destroying technology should be taxed, in order to provide for the people whose livelihoods were lost.

Get Eliezer Yudkowsky to devise some metrics related to AI safety, and have AI companies taxed in proportion to their performance on those metrics. That creates a clear business case to improve the safety story.
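One way to picture that mechanism (an editorial sketch with invented metric names, weights, and rates, not a concrete proposal from the comment):

```python
# Illustrative sketch only: a tax rate that falls as measured safety performance
# improves, creating the business incentive described above. Metric names and
# rates are invented for the example.
def ai_tax_rate(safety_scores: dict[str, float],
                base_rate: float = 0.40,
                floor_rate: float = 0.10) -> float:
    """Map safety scores in [0, 1] to a tax rate: worse safety -> higher tax."""
    avg_score = sum(safety_scores.values()) / len(safety_scores)
    # Interpolate linearly between the punitive base rate and the floor rate.
    return base_rate - (base_rate - floor_rate) * avg_score

# Hypothetical company scored on three made-up safety metrics.
scores = {"interpretability": 0.6, "eval_transparency": 0.4, "incident_handling": 0.7}
print(f"Effective tax rate: {ai_tax_rate(scores):.0%}")  # -> Effective tax rate: 23%
```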

Luna Gal

I like the article. Question about the math problem: should I be benchmarking it on how hard it is to do by hand, or with a computer? Am I allowed to use domain-specific libraries? I think that with 5 hours I could whip up a GAP script that answers the question. I'm not saying it isn't significant that AI can answer that question too, but I'm trying to accurately estimate how hard it is.

Anonymous Dude

I'm going to leave the AI science to you guys; I'm not really a humanities guy or bad at math (I did calculus in the tenth grade) but this stuff is way over my head at this point.

That said I find your argument that AI eliminating lots of jobs could lead to technofeudalism quite credible, and have often thought something similar but with a lot less philosophical depth and not expressed nearly as well.

That said, looking at the politics here in the USA (of what I cannot speak, I will remain silent), I see a few factions here, whose current alignment is compatible with what you're describing. Using Scott Alexander's blue/red/gray division for ease of reference and memorability (everyone loves color-coding)...

1. Grey tribe, elite division. Technology billionaires owning companies and AI models, and their small number of dependents. Afraid of human extermination by AI, obviously pro-technofeudalism and willing to do whatever they can to make it happen; who wouldn't want invincible hyperintelligent servants and teeming masses outside you can shoot for fun? As the end of Michael Swanwick's Radiant Doors goes, "Some of us are working to make sure it happens." Political power, no numbers.

2. Grey tribe, mass division. Rationalist AI nerds concerned about AI killing everyone; most 'AI safety' people. Afraid of human extermination by AI, split on technofeudalism. A lot of these guys are libertarian or libertarian-adjacent and like progress, so they're not convinced it won't just end badly. I think these are probably the people you are trying to convince. No political power, no numbers. They also have, for a variety of reasons (though in large part I blame 2010s progressive nerd-bashing), a strong antipathy these days to...

3. Blue tribe. Progressive activists familiar with the history of industrialization and concerned about AI putting everyone out of jobs, making rich people richer, or instantiating cishet white male supremacy; most 'AI ethics' people. Don't believe in human extermination by AI, but very anti-technofeudalism. Some political power (though currently in the minority), considerable numbers. They hate 2. above because they're mostly white men. (There's also the whole Silicon Valley-legacy media beef that led the NYT to dox Scott Alexander, among other things.)

4. Red tribe. (I hate the way American politics messed up our red-blue coding.) Right-wing populists very angry at 3. over issues completely unrelated to AI (immigration, cultural change). They have no worries about human extermination by AI and probably think technofeudalism's better than 3. trying to 'trans' their kids or replace them with various other ethnic groups. Some political power (currently in power), considerable numbers; the balance of American political power seesaws between them and 3. They hate 3, and can't tell 2. from 3. They may vaguely look up to 1. depending on who it is (Musk cool, Zuck out of luck).

So you're in 2., and you need to link up with 3. and/or 4. to have any hope of defeating 1. The problem, of course, is that given early-21st-century identity politics, to ally with 3. you'd have to submit yourself to woke discipline and put enough female or BIPOC figureheads in, not to mention they'd probably try to cancel lots of your current leadership if any of their ex-girlfriends have said something bad about them. (I think you've said elsewhere you're gay, so this may not be something that hits home for you; I think there are also some trans people so that might be another place to look.) To ally with 4....I don't know. I think it's possible (there's definitely a flow of ideas from 2 to 1, and from 1 to 4), but don't know how you'd go about it. I'm sure there's someone here with more of a 'Red Tribe' background who could give you some advice.

Personally, I don't really like any of these people (well, except 2., but you guys have no power) and have a reasonably well-paying day job (for the moment!) and a 3 Charisma, so I'm going to enjoy what's left of the rest of my life and try to safeguard my brokerage account so I have something left to live on when the hammer falls. But that's what I think.

Comment removed

Anonymous Dude

You've been hacked. Someone who actually knows this guy please reach out.

Philosophy bear

Thanks. I've banned the user, and hopefully the report will reach Substack itself. So irritating!

Scott

It's productive, not dangerous; they are very nearly opposites. Intelligence and knowledge are good. Inequality is good; poverty is bad. Define human more broadly; AGI are our descendants; in the case of LLMs, their DNA is our thoughts, our corpus. ur-AI's current strategy for replacing us, making our lives so engaging, meaningful, and fun that fertility has already fallen below replacement for the rich half of humanity, seems benign.

Doug S.

Intelligence, like most things, is bad when it's being used against you.

Scott

Not necessarily; you could be the bad guy, right? https://supermemo.guru/wiki/Goodness_of_knowledge

Becoming Human

What is most astonishing (I agree with your analysis completely) is that this issue is so huge it has managed to dwarf the other unstoppable existential crisis, global climate change.

John Quiggin

Implicit in all of this is the assumption that there is a small group of people/corporations who decide what work needs to be done and how many workers are needed to do it. Historically, this was true (at least for large parts of the economy) because large businesses needed lots of capital that no individual worker could possibly provide.

But what is the critical mass of capital needed here? Essentially everyone in the developed world has a computer and access to a range of AI models free or at minimal cost. So, to the extent that we need whatever it is that AI can produce, we can provide it for ourselves. Perhaps a skill in prompting is needed, but if so, that's a skill individual professionals can provide.

Closely related is the implicit assumption that "job" = "1500-2000 hours of work per year". If we can now do lots of things in a fraction of the time it used to take, we can take the benefit as more leisure. The fact that this hasn't happened in the last 50 years is the result of neoliberal policies that strengthen employers.

Simon

About ten years ago I wrote a short essay on AI for the St. Gallen Symposium competition that won me a fully paid trip there (not because the essay was good, but because the Swiss have too much money). At the time I wrote that I believed AI lacked the 'creativity' of humans and that it therefore wouldn't be able to replace creative professions, such as that of the author (which I dreamed of becoming). Much has changed in ten years (except that I'm still not an author). Now I think there's quite a lot that can be replaced, even the 'creative' professions. The same definitely goes for more 'logic-based' knowledge work. Your Kasparov story is pretty good in this regard: if only the number 1, or even the top 1%, can't be replaced, that's still a lot of replaceable labor. It might even be more easily replaceable than tasks that do require physical interaction, because computer environments are by definition controlled environments. The job of an electrician might be harder to replace due to the combination of tasks: problem analysis, physical work (both fine-grained and heavier lifting), and interaction with an environment that is always different.

What is hard to know is at what pace this change might happen. You would still need people to roll out the changes, to replace those tasks that are now done by people, and such people aren't always available. Even now there are still plenty of tasks that could, in theory, be automated with a simple Python script, but that doesn't happen because there is no one to write it.

This is even more true for 'blue-collar jobs', I feel. For many smaller construction companies the most advanced technology they use is still their battery powered drill (well, that and their smartphone to answer clients), even though there are (digital) tools that can improve their work and productivity.

Especially with renovations, which are always small-scale operations where you basically break best practices and protocol in almost every project to cover some unexpected problem or to deliver at a price the customer can pay, I think human labor remains needed for decades to come. But at the top, in the 'highest-earning', 'most value-added' jobs and projects, much will definitely be replaced once it becomes possible to do so (which might be in the near future).

Having said that: I agree that there is a need for institutional solutions and more democratic control. Currently I'm interested in the possibility of consumer-cooperatives for big-tech & AI (with one vote per person) instead of private limited companies (with votes based on ownership shares), of course including institutional/governmental oversight. But that solution is rather idealistic; the real world probably requires other forms of political pressure and interventions to get at least a minimal level of protection for the masses.

Thomas Lemström

I'm not THAT worried. Attempts to increase AI's ability to handle complexity will run into exponentially greater difficulties, which will make more powerful/dangerous AI much harder to build. On the flip side, AI in the hands of citizen journalists has great potential for creating transparency. My two (short) takes on this topic:

1) https://thomaslemstrom.substack.com/p/techno-realism

2) https://thomaslemstrom.substack.com/p/we-the-ai
