The idea of a race to the singularity is well on the way to becoming part of ruling class US ideology
With Larry Summers and probably Trump buying in, this is becoming part of the ruling class mainstream
The singularity is something only Silicon Valley nerds who have done a few too many microdoses talk about, right? Nope. I believe that a technological singularity is possible, and is something we should be worried about. However, regardless of whether or not you agree, and even if I am wrong, the idea of a singularity is, sadly, well on its way to becoming your problem.
Larry Summers is the consummate ruling class insider. Former secretary of the treasury and president of Harvard. Scandal-proof (or at least scandal-resistant), a high priest of the ruling clique of America. He’s also now on the board of OpenAI along with a number of other well-lauded individuals, but I would draw particular attention to Paul M. Nakasone, former head of the NSA (National Security Agency).
Larry Summers said in an interview with Bloomberg:
1. Artificial intelligence is a technology of ‘transcendent’ possibilities because it has the potential to trigger a cascade of self-improvement. This cascade of self-improvement makes it more significant than the technological changes of his lifetime.
2. These changes may lead to ‘last mile problems’, and it is vital that AI not be left to AI developers alone.
3. However, what seemed for Summers to be the critical point: in the race for this transcendent technology, we must not surrender our competitive edge to America’s enemies.
In summary: a singularity is possible and would cause tremendous changes. There are risks involved, and it is appropriate that society and government play some (underdefined) role in the process. But Summers’s main concern, in the context of the interview, is that wrongheaded regulation might allow the bad guys to reach the ‘transcendent’ possibilities of AI- a self-improvement loop- before we do.
In short, he’s endorsing a race for the singularity.
If Larry Summers, not previously known for talking about the possibility of an AI-based end of history, is talking about a singularity race, we can be sure that these views are making inroads in the American ruling class. The only question is how far these inroads will go. Influential actors will at least try to make racing for the singularity an important idea in structuring US policy- although policymakers may ultimately reject them. I concede, of course, that there’s a bias and a half here- Summers is on the OpenAI board- but the main point stands: he thinks this rhetoric flies.
What about the right wing of politics? Trump recently claimed that, due to AI, the US must double its electricity consumption, and is in a race with Japan and China to reach a possible golden age enabled by AI:
https://twitter.com/i/status/1814158633095111082
Trump’s comments are characteristically vaguer than Prof Summers’s, but the most natural interpretation is something like singularity chasing. The scale of AI projects that would need the equivalent of the United States’ entire current power output is almost incomprehensible, and nearly nothing but a grab at the singularity would justify a race with China and Japan to massively increase power output over the next 20 years. AI certainly uses a lot of energy, but running current models requires nothing on that scale.
If racing for the singularity comes to guide policy, then the race can affect you whether or not you believe a technological singularity is possible, let alone coming soon(ish).
If you are worried- like me- about prematurely built Artificial Superintelligence killing everyone, then the reasons to be worried about the race are obvious. The race will turn the ruling class against AI safety measures and may lead to the premature creation of things we cannot control.
On the other hand, suppose you believe the narrative that all talk of the singularity is ideological, part of the TESCREAL complex, and that this complex serves only to cover up the current dangers of AI: racism, excess energy consumption, disinformation, job loss, etc. If you believe this, you should be just as concerned about a race-for-the-singularity ideology gripping the American ruling class. It will help shield AI from the forms of scrutiny it needs as a dangerous technology prone to errors, mistakes, and social disruption.
Now you might say, “Ah, but they probably don’t really believe it, do they? It’s just ruling class verbiage- a race for the singularity makes good theatre.” To which I would reply:
Don’t be so hasty!
An ideology pushed by broad sections of the ruling class can structure behavior, whether it is believed or not.
If the singularity is on the horizon, this could kill us all. If the singularity is not on the horizon, it will justify various unethical and destructive practices using AI, and perhaps increase the probability of something like a dotcom bubble forming around AI. Thus racing for the singularity is now everyone’s problem.
Bah!
I think there's a bit of a flaw in this post: it overestimates how much the "singularity" idea needs to be taken seriously, as opposed to being used as a smokescreen.
The "ethics" faction is consistent. They believe that idea is a distraction from racism, sexism, etc. The more it's talked about by "right-wingers", the more the "ethics" side just says it's blowing smoke.
What might be called the "drill, baby, drill" faction is basically saying that AI is going to be very powerful, so the US needs to develop it before China does. That's consistent too. There's no indication to me that they take any of the fantasies about self-willed rogue AIs destroying humanity *seriously*. This is the flaw I see: trying to cram a rhetorical pivot into the kind of games intellectuals play. That is, if a politician says "Big AI is going to be civilization-changing, (pivot) so we must get it before China", the game is "AHA! You just said it'll be civilization-changing, so by your own words it could destroy civilization. Thus you must agree we should all be terrified of humanity being destroyed by big AI." Formally, the fallacy is that there's a lot of ground between the political statement and the AI fear-mongering, and trying to elide that ground doesn't work in terms of standard language.
Once something is deemed possible, it is unstoppable. We have to focus our energy on finding ways to live post-singularity instead of on stopping it, since it will get here eventually (for our descendants if not for us).
How we do that, besides advertising the WALL-E model, is unclear to me.