Some of this stuff drives me up the wall, tbh. I was listening to Chapo the other day, and they were talking about Sora (the new video generator from OpenAI). Will said something like: “People are always promising the technology is going to get better. WHEN! Where is the evidence of this?” (or words to that effect).
FFS, here was “The Lion and the Lamb were Restored” used as a prompt 4 years ago for an image generator:
Here it is now:
Similarly, when I am told over and over that LLMs are “auto-complete on steroids,” and that this imposes some fundamental yet unstated limitation on what they’ll be able to do, it drives me nuts. This line:
A) Is a misleading description of the model once it has undergone reinforcement learning from human feedback.
B) Completely misses the key insight behind this era of LLMs. Predicting the next word can be used, with minimal setup, to reproduce useful human behaviors like correctly answering questions (even difficult reasoning questions) and producing requested text.
C) Gives people a false confidence that they understand what is going on inside the model. The fact that something is a model of the most likely next word does not negate the possibility that it also has, within its parameters, a model of how the world works. Past a certain point, understanding the world, at least in a functional sense, is necessary for predicting the next word.
D) Won’t be very comforting when everyone whose job can be done by receiving text and responding with text has been replaced with “autocomplete on steroids.”
What numbers must I show you to make you take this seriously?
I cannot describe how enormously frustrating this is for Philosophy Bear. I have been following this technology for the better part of ten years now. I started following it not long after AlphaGo beat Fan Hui. Do you know how it feels to be patronizingly lectured at a conference on how AI will never achieve a certain goal (that it already achieved)? By a published researcher on the ethics of large language models who didn’t know what reinforcement learning from human feedback was until I told him, no less! A fellow who started paying (very limited) attention to the area two years ago when grant money became available.
Somehow, among anti-AI types, the line has become “AI doesn’t work, it’ll never work, and we must stop it now”. I can’t be the only one who thinks there’s a certain fundamental tension in this line!
We seem to have gotten here through a path-dependent agglomeration of the main forces opposing AI:
1. Artists, who are primarily leading the anti-AI movement online, hope that by emphasizing its weaknesses, they’ll slow or prevent it from replacing them by maintaining a point of sales differentiation. This is a very short-term strategy! Do you really think it’s going to be unable to draw hands forever?
2. People in the social and human sciences who are discontented with the crude, associationist, non-nativist approach used by contemporary AI: an approach without much inbuilt structure or modularity that nevertheless ‘works’. These researchers want to emphasize its faults for basically Kuhnian reasons. They see AI as a threat to their paradigm. Every time they’ve made concrete and precisely specified predictions about things language models will never be able to do, their predictions haven’t held up. That won’t stop them from making new ones.
3. People like Chapo who have built up a narrative of tech figures as moonshine sellers (often justified) and want AI to be a continuation of their narrative about cryptocurrency, NFTs, etc. What these people get wrong is that you cannot project the trajectory of a technology based on vibes, and you shouldn’t confuse the addled brains of Twitter boosters with the technology itself.
4. People who believe nothing ever changes. One would think that, after their sneering op-eds during the early days of COVID insisting that, of course, there wasn’t going to be an epidemic, these people might have learned some humility.
Seriously, I’m not making that last bit up. We’ve all memory-holed it now, but there was a time when the liberal commentariat was convinced we shouldn’t take COVID-19 seriously because doing so would lead to anti-Asian racism. This was published early on: “While addressing the outbreak will take a global public health effort, the U.S. Centers for Disease Control and Prevention has declared the current risk to the American public is low. If 21st-century outbreaks like SARS, MERS and Ebola virus are any indication, it is likely American fear of contracting coronavirus — and the xenophobic, racist assumptions that drive it — carries a risk far greater to most people in this country than the virus itself.” While anti-Asian racism during COVID was a real problem, counterposing this with taking the coronavirus seriously was a mistake, much like counterposing AI bias with AI devouring the world is a mistake.
We need to change tack. AI is good, AI is extremely good, and that is why it needs to be stopped. “The technology is not going anywhere, therefore [by eleven-dimensional ping pong] it needs to be stopped” isn’t as persuasive a line as “The technology is going somewhere real and extremely scary, and we need to do something about it.” Most importantly, this is the line the evidence supports. We need to bring together different strands of opposition: concern about racism, concern about unemployment, concern about world-devourers. We need to think concretely and strategically, not on the basis of vibes. Lazy Twitter discourse won’t cut it, sorry.
Yeah, this is extremely annoying. It's the new geocentrism; people really want to believe that they're metaphysically special in some way, and no amount of evidence seems to be able to convince them otherwise.
If you'll permit a shameless plug, I recently wrote about a particularly bad strain of this in philosophy that uses a bunch of vague jargon to "prove" that human behavior can't be replicated by *any* type of AI, neural network or not. https://outsidetheasylum.blog/zombie-philosophy/
You're right of course. They are even being taught to smile; maybe they'll be able to have sex, maybe produce babies. There's no limit to what they can do.
The threat is the fear: them replacing us cannot happen unless we let that fear control us and believe we can be replaced. But they are not us. They might be a species, or might become one. The problem is that man has allowed himself to be replaced. We have been taught we are in competition with each other and with our environment, and so naturally we feel in competition with computers.
The problem is that we are replaced by the computer if we remain in competition with it, in the same way our ridiculous competition with our environment is being lost, and our competition with each other turns us against each other.
Competition is fine, but if competition becomes about winning, the victors someday must lose. Competition should not be against our competitor, but with our competitor. If we learned that, maybe we wouldn't feel so goddamned threatened by Aella, by hurricanes destroying houses in a hurricane zone, or by A.I.
Use what we have; don't conquer it and try to make it subject to us.