
Yeah, this is extremely annoying. It's the new geocentrism; people really want to believe that they're metaphysically special in some way, and no amount of evidence seems to be able to convince them otherwise.

If you'll permit a shameless plug, I recently wrote about a particularly bad strain of this in philosophy that uses a bunch of vague jargon to "prove" that human behavior can't be replicated by *any* type of AI, neural network or not. https://outsidetheasylum.blog/zombie-philosophy/


> It's the new geocentrism; people really want to believe that they're metaphysically special in some way

Well, take you for example: you are implicitly asserting that you are omniscient. Is that not more than a little "metaphysically special"?

> and no amount of evidence seems to be able to convince them otherwise.

Well, what convinced you of your delusion? Might it be that *it seems to be true*?


Huh?


Humans typically cannot realize when they are hallucinating.


You're right of course. They are even being taught to smile; maybe they'll be able to have sex, maybe produce babies. There's no limit to what they can do.

The threat of them replacing us cannot materialize unless we let that fear control us and believe we can be replaced. But they are not us. They might be a species, or might become one. The problem is that man has allowed himself to be replaced. We have been taught we are in competition with each other, with our environment, and so naturally we feel in competition with computers.

The problem is that we are replaced by the computer only if we remain in competition with it, in the same way that our ridiculous competition with our environment is being lost, and our competition with each other turns us against each other.

Competition is fine, but if competition becomes about winning, the victors someday must lose. Competition should not be against our competitor, but with our competitor. If we learned that, maybe we wouldn't feel so goddamned threatened by Aella, by hurricanes destroying houses in a hurricane zone, or by A.I.

Use what we have, not conquer it and try to make it subject to us.


Once you realize that Humans are essentially biological LLMs, except FAR less intelligent, this whole simulation not only makes sense, it becomes downright hilarious.

This is easy to test (well, once you understand some essentials of philosophy): next time you encounter a Human telling you the facts about something that is unknowable (assuming you are able to make such a distinction accurately yourself), ask them how they know it to be true, and watch the magic happen.

This planet drives me up the wall; I hope I am not alone here, but it sure seems like it.


I actually think there's something even more fundamental going on, related to what you're saying about the social sciences. People have a lot of tacit assumptions about reality, many quasi-Cartesian, and the fact that connectionist AI succeeded is going to force a rethink of them. I was reading a textbook the other day that treated connectionism and embodied cognition as separate paradigms. Now that we know connectionism works, some effort to integrate these paradigms should take place.


If I had to describe it (the "something even more fundamental"), it's that the majority of people (and institutions) have developed very narrow ideas about "what matters" and, subsequently, narrow epistemologies and intuitions. Academia slowly became dominated by people who cared more about "being good at/in school" than "I don't care about money, I just want to talk about my obsessions with other people who share them" (obviously not really distinct categories, just ends of a spectrum). Tech companies decided the only metric that matters is "engagement" so they can sell you more ads, manipulating those metrics so much that now Google sucks at web search. Public-health advocates said the "real" danger of COVID at the beginning was anti-Asian racism, only to switch to overplaying the dangers of COVID after vaccines were available in order to keep "lockdowns" going. Those are just a few examples of the general "credibility gap" that formerly respected groups now have when interacting with groups outside their "epistemic silo."

To respond to your specific example, it seems to me (a complete layperson) that connectionism and embodied cognition would have a ton of overlap and not be mutually exclusive approaches (whether you want an AGI to be a disembodied spirit answering questions or an android like Data doesn't change the dangers I see from superhuman AI). That's why I have been consistently more impressed by the reasoning of people who are fans of both philosophy and science fiction than by the litany of people who use a recent success or failure of a particular AI model to declare their previous takes proven or disproven (almost always the takes are about a particularly provincial use of AI, or so broad as to be meaningless).


I think fears of superhuman AI must account for both Shannon entropy and Wolfram's computational irreducibility. Both of these make the difficulty of pattern recognition vary by data type. In other words, some problems are naturally easy to solve and some are naturally difficult. Something like the n-body problem won't become more solvable for a superintelligent AI.
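
To make that last point concrete, here's a minimal sketch of the irreducibility intuition (my own illustration; the comment doesn't specify any code, and the masses, positions, and step count below are arbitrary toy values): two three-body simulations whose initial conditions differ by one part in a billion in a single coordinate still diverge visibly, so there's no shortcut around paying the step-by-step computational cost, however intelligent the solver.

```python
import numpy as np

def simulate(pos, vel, mass, dt=1e-3, steps=20000, G=1.0):
    """Leapfrog-integrate a planar n-body system; return final positions."""
    pos, vel = pos.copy(), vel.copy()

    def accel(p):
        a = np.zeros_like(p)
        for i in range(len(mass)):
            for j in range(len(mass)):
                if i != j:
                    d = p[j] - p[i]
                    # Softened Newtonian gravity to avoid division by zero.
                    a[i] += G * mass[j] * d / (np.linalg.norm(d) ** 3 + 1e-12)
        return a

    a = accel(pos)
    for _ in range(steps):
        vel += 0.5 * dt * a   # half kick
        pos += dt * vel       # drift
        a = accel(pos)
        vel += 0.5 * dt * a   # half kick
    return pos

mass = np.array([1.0, 1.0, 1.0])
p0 = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
v0 = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])

p1 = p0.copy()
p1[2, 0] += 1e-9  # a one-part-in-a-billion nudge to one coordinate

final_a = simulate(p0, v0, mass)
final_b = simulate(p1, v0, mass)
print("divergence after 20,000 steps:", np.linalg.norm(final_a - final_b))
```

The leapfrog integrator is used because it keeps energy error roughly bounded, so the divergence printed is (mostly) the chaos itself rather than integration noise.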


I think it’s possible in principle to calculate absolute limits on intelligence by applying Shannon entropy to scaling. I don’t have an estimate yet though.


What methodology would you use to error check your calculations?


Something like the noisy channel coding theorem is a potential method. https://en.m.wikipedia.org/wiki/Noisy-channel_coding_theorem
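
For the curious, here's a minimal sketch of the kind of hard ceiling that theorem gives (my own illustration, not the commenter's calculation; the binary symmetric channel is just the textbook special case): the capacity of a binary symmetric channel with crossover probability p is C = 1 - H(p), and no code, however clever, can reliably push information through at a rate above C.

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) in bits, with H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel: C = 1 - H(p) bits per use.

    The noisy-channel coding theorem says no scheme can reliably
    communicate through this channel at a rate above C.
    """
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.05, 0.11, 0.5):
    print(f"crossover p={p:.2f} -> capacity {bsc_capacity(p):.3f} bits/use")
```

At p = 0.5 the capacity is exactly zero: the channel output carries no information about the input, no matter how much intelligence is applied at either end.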


This smells to me like materialism, but you know, applying this *conceptually* to consciousness and culture as mediums is an extremely attractive idea!


I refer to this type of variation in entropy across data types as "belief possibility spaces."


If you really want to "bring together different strands of opposition ...", then I'd strongly recommend not pummeling a weakman of "The Dumb Leftist Who Is A Total Moron And Has No Valid Objections Showing Their Complete And Utter Stupidity Which I Shall Proceed To Demolish".

To wit: "Somehow, among anti-AI types, the line has become "AI doesn't work, it'll never work, and we must stop it now". I can't be the only one who thinks there's a certain fundamental tension in this line!"

The steelman version of this is something like: "AI does not produce the objective super-smart results we will be told it does. That's a con-job by the same type of plutocrats, racists, sexists, etc, who always do this stuff historically, to use technology as cover for shiny exploitative crap. They are proceeding to do so again, and we must stop it now."

Feel free to disagree with that view of course, but it's an argument which isn't self-contradictory in any way. Why is it so hard to even hear it, separate from believing it's not right?

Also, quite a bit of the initial COVID reaction did have a very direct racist aspect to it - remember "Kung Flu"? And many people of all politics thought a once-in-a-century event was not in progress, since those only happen around once in a century. Attacking people for being wrong in that case reeks of survivorship bias.


> Feel free to disagree with that view of course, but it's an argument which isn't self-contradictory in any way.

If it were, would you necessarily(!) have the ability to know? If so: how?

> And many people of all politics thought a once-in-a-century event was not in progress

All events are unique; you just need an adequately sophisticated algorithm.

Do you even have an algorithm to perform your comparison, or might you be using the simple calculation: "The Reality"?
