5 Comments

You keep treating possibilities as actualities. LaMDA might be simulating people without being programmed or prompted to, the CR might have full semantics without a single symbol being grounded... but they might not.


There isn't any fundamental doubt about what computers are doing, because they are computers. Computers can't have strongly emergent properties. You can peek inside the box and see what's going on.

The weakness of the systems reply to the CR is that it forces you to accept that a system that is nothing but a lookup table has consciousness, or that a system without a single grounded symbol has semantics. (Searle can close the loophole about encoding images by stipulation.)

Likewise, there is no reason to suppose that LaMDA is simulating a person every time it answers a request -- it's not designed to do that, and it's not going to do so inexplicably, because it's a computer and you can examine what it's doing.

author

You can't really examine what it's doing. Famously, we have very little understanding of how transformers and other NNs reason; this is the interpretability problem, and for models with billions of parameters it's basically insurmountable at present. The closest we can get is sometimes establishing that a particular neuron fires especially strongly in certain circumstances (e.g. a positive-sentiment neuron). The available results from this area, if anything, support my contention by showing that different neurons tend to track different properties, sometimes highly abstract and semantic properties like the degree of a particular emotion in a text extract.
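
To give a concrete sense of what that kind of single-neuron analysis involves, here is a minimal sketch (my own illustration, not any particular paper's method; GPT-2, the pooling choice, and the handful of labelled texts are arbitrary stand-ins): run labelled texts through the model, collect a layer's activations, and look for the unit whose activation correlates most strongly with the label.

```python
# Minimal sketch of hunting for a "sentiment neuron": find the hidden unit whose
# activation correlates most strongly with a sentiment label.
# GPT-2, the layer choice, and the four hand-written examples are arbitrary
# illustrations; a real analysis would use thousands of labelled texts.
import numpy as np
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

texts = [
    "I loved this film, it was wonderful.",
    "Absolutely terrible, a complete waste of time.",
    "One of the best meals I have ever had.",
    "The service was rude and the food was cold.",
]
labels = np.array([1, 0, 1, 0])  # 1 = positive, 0 = negative

features = []
with torch.no_grad():
    for text in texts:
        ids = tokenizer(text, return_tensors="pt")
        hidden = model(**ids).last_hidden_state[0]     # (seq_len, 768)
        features.append(hidden.mean(dim=0).numpy())    # mean-pool over tokens
features = np.stack(features)                          # (n_texts, 768)

# Correlate each of the 768 units with the label and report the strongest one.
corrs = np.array([np.corrcoef(features[:, i], labels)[0, 1]
                  for i in range(features.shape[1])])
best = int(np.nanargmax(np.abs(corrs)))
print(f"unit {best} tracks sentiment most strongly (r = {corrs[best]:.2f})")
```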

There's no reason whatsoever to think these models are reasoning via lookup table. This would be totally infeasible given the number of possible inputs there are: the lookup table would have to be larger than the observable universe. I don't think anyone in the debate is saying they reason by lookup table.
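
To put rough numbers on that claim (the vocabulary and context sizes below are illustrative figures of mine, in the ballpark of GPT-class models):

```python
# Back-of-the-envelope size of a lookup table covering every possible input to a
# GPT-class model. Vocabulary and context length are illustrative round numbers.
import math

vocab_size = 50_000       # distinct tokens
context_length = 2_048    # tokens per input

# Number of distinct input sequences the table would need an entry for.
log10_entries = context_length * math.log10(vocab_size)
print(f"distinct inputs: about 10^{log10_entries:.0f}")   # about 10^9624

# For comparison, the observable universe contains roughly 10^80 atoms.
```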

We know that, for example, when GPT-2 is taught to play chess by receiving the moves of games (in text form) as input, it knows where all the pieces are; that is to say, it contains a model of the board state at any given time. https://arxiv.org/abs/2102.13249 As the authors of that paper suggest, this is a toy setting that gives us strong evidence the machine works by world modelling.
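
For a flavour of how that sort of claim gets tested, here is a toy probing sketch in the same spirit (my own illustration, not the paper's actual setup; the opening, the chosen square, and the linear probe are arbitrary assumptions): encode each move prefix with GPT-2, compute the true board state with python-chess, and check whether a linear classifier can read a square's occupancy off the hidden state.

```python
# Toy board-state probe: after each move of a game, can a linear classifier read
# "is square f6 occupied?" off GPT-2's hidden state, given only the move text?
# The opening, the chosen square, and the probe are arbitrary choices of mine,
# and a real study would train and test across many games, not one.
import chess
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import GPT2Model, GPT2Tokenizer

moves = ("e4 e5 Nf3 Nc6 Bb5 a6 Ba4 Nf6 O-O Be7 "
         "Re1 b5 Bb3 d6 c3 O-O h3 Nb8 d4 Nbd7").split()

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

square = chess.F6          # Black's knight lands here on move 4 and stays
board = chess.Board()
X, y = [], []
with torch.no_grad():
    for i, move in enumerate(moves):
        board.push_san(move)                             # ground-truth board state
        prefix = " ".join(moves[: i + 1])                # move text seen so far
        ids = tokenizer(prefix, return_tensors="pt")
        hidden = model(**ids).last_hidden_state[0, -1]   # last token's hidden state
        X.append(hidden.numpy())
        y.append(int(board.piece_at(square) is not None))

# If a linear probe like this classifies held-out positions well, the hidden
# states encode board occupancy, i.e. a (partial) world model.
probe = LogisticRegression(max_iter=1000).fit(np.stack(X), y)
print("accuracy on this single toy game:", probe.score(np.stack(X), y))
```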

The point about vision is larger than that. Even if you banned inputs of the form "the image is XxX and at position 1,1 there is...", there's no reason to think there is some special information or grounding that only this can provide, which a vast sequence of words describing all kinds of objects and scenes can't contain. Text can be just another sensory modality: just as you could learn how the world was laid out if you only had a sense of hearing, so you could learn how the world is laid out if you only have a sense of text, or so it seems to me, and this seems to best make sense of the results we see.


We know that the Chinese Room is a lookup table because it is stipulated to be.

Computers can't do anything that's unpredictable to a sufficiently advanced, Laplace's-demon-like predictor. AIs can have emergent features that are unexpected to a finite, realistic investigator... but only within limits. A system that always boots up in the same state isn't going to retain knowledge between sessions, or tell you what it is like being switched off; investigating that would be looking for a miracle. Building an ungrounded world model is likelier than building a grounded world model, which in turn is likelier than accidentally simulating a person.

Text is different from sensation because it needs interpretation. So you can't assume that text gives semantic grounding without presuming that semantic grounding is already present.


To me, the essence of consciousness is its experiential aspect, what you call its qualia. You say that a P-zombie would still be worthy of ethical consideration--why? If it has no experience whatsoever, then it cannot feel pain or suffering, have hopes and dreams, etc. Without those capabilities, it need not figure in our ethical deliberations. If I truly knew that a "person" I was talking to was a P-zombie, on what grounds would it be wrong for me to cut off its arm or even destroy it? In what meaningful sense is this any different from, say, destroying a Furby?

Given my premise that consciousness requires experience/awareness/qualia, to answer that "the room itself" is conscious requires a further assumption--that some form of panpsychism is possible. You have to believe that consciousness/awareness can inhere in, or be correlated with, complex systems that are not brain-bound. This leads us into philosophical territory that many people are not willing to enter. It would seem to indicate that any complex system is in some way conscious. See, for example, the argument that if the kind of functionalist/materialist theory of mind you are here proposing is true, then the United States is "conscious": http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconscious-140721.pdf. I am willing to go there, but that is because I am not a materialist, so I am perfectly happy with a view of the world that sees consciousness/spirit inhering in some mysterious way in all things. I somewhat doubt that this is as comfortable a position for materialists/functionalists like yourself.
