Author Topic: Why I hate the Chinese Room Argument  (Read 4181 times)

Tom

Why I hate the Chinese Room Argument
« on: July 01, 2022, 12:27:47 PM »
In another thread, Joshua mentioned the Chinese Room Argument as "proof" that computers cannot contain consciousness. This stuck with me as a minor irritant, because I absolutely hate the Chinese Room Argument -- not as much as I despise Anselm's ontological proof of God, but nearly.

The idea is this: a man sits in a room, alone with some writing utensils and a huge reference book. Every so often, someone shoves a piece of paper under the door into the room; on this paper are written a few characters in Chinese. The man, who does not read Chinese but has near-magical pattern matching and page-turning ability, flips through the book to find a match. The book, being extremely complete, has a match -- and then supplies a few characters as a valid and acceptable response. The man copies those characters onto the bottom of the paper and shoves the note back under the door. The author of the original note, who does read Chinese, reads the response and then sends a new message.

In this way, the man in the room appears to conduct a perfectly intelligible conversation in Chinese (from the perspective of the Chinese-speaking person outside the room), despite not having any real understanding of the language.

The guy who originally penned this thought experiment, John Searle, did so as a response to the Turing Test. Searle points out, correctly (and as Turing himself explicitly acknowledged), that the Turing Test cannot be a proof of sapience, as at the end of the day it's merely a proof of sufficiently good syntax. Obviously the "book" is an oversimplification here; it's meant to stand in for whatever language-learning capability a computer possesses that allows it to respond to inputs in what seems to be a coherent way (and to avoid, for example, replying with the exact same thing to the exact same input every time; a good syntax engine adds randomness and a historical memory of past communication in a way a book could not). In and of itself, this point is obviously true: a machine that parrots back responses without processing inputs cannot be said to have "understanding."

Searle goes one step further, though, and argues that his thought experiment proves that no artificial process can produce understanding. This deeply annoys me, because it somehow manages to completely obfuscate the real problem -- which is not the man in the room, but the book. For the book to be sufficiently good, it needs to have been generated by someone who for all intents and purposes appears to understand Chinese -- to know that a valid reply to "What is your name?" is "John Lee", and that a valid reply thereafter might be "I already told you; I'm John Lee." More importantly, it has to have been written to consistently present a modeled reality; our imaginary John Lee needs to be able to talk about his favorite foods, his pets, his collegiate studies, and what his girlfriend likes to watch on television.

The man in the room doesn't need to know any of this, of course. But the book -- the set of rules that generates the responses supplied to the actual interviewer -- does. And to be really convincing, the book has to incorporate algorithms; it's not enough to simply randomize some responses, so there need to be Choose-Your-Own-Adventure-like paths that keep track of what's previously been said, what the interviewer has said, and how our imaginary "John Lee" is feeling about it.
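To make that concrete, here's a toy sketch in Python (my own illustration -- the rules, the names, and the English stand-ins for the Chinese notes are all invented) of what even a trivially "stateful" book starts to look like once it has to remember what's been said and how John Lee feels about it:

Code:
# A toy, hypothetical "book": a rule table plus the state it must carry
# between notes. Covering an entire imagined life would need vastly more
# rules than this -- which is rather the point.
class ChineseRoomBook:
    def __init__(self):
        self.history = []          # every note seen so far
        self.mood = "neutral"      # how "John Lee" currently feels

    def respond(self, note: str) -> str:
        self.history.append(note)
        if "name" in note:
            # The book must track whether this was already answered.
            if any("name" in old for old in self.history[:-1]):
                self.mood = "annoyed"
                return "I already told you; I'm John Lee."
            return "My name is John Lee."
        if "favorite food" in note:
            return "Dumplings, though my girlfriend teases me about it."
        # Fallback path -- and its tone depends on accumulated state.
        if self.mood == "annoyed":
            return "Can we talk about something else?"
        return "Tell me more about that."

book = ChineseRoomBook()
print(book.respond("What is your name?"))
print(book.respond("Sorry, what was your name again?"))

Scale those few rules up to cover an entire imagined life, and you get a sense of what the thought experiment is actually stipulating.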

So our book needs both an understanding of syntax and a model of the mind (and the physical universe).

Obviously a man flipping through a book is going to be far, far slower than a computer at following these algorithmic paths. And perhaps this obscures the truth, which I submit is immediately evident: as far as can be established, in the Chinese Room Argument, the book -- if sufficiently convincing -- has a mind.

Now, you might believe that it is impossible to create a book that can supply coherent responses in Chinese consistent with the imaginary life of our fictional "John Lee." And that's fine. But that's not the thought experiment here. The thought experiment stipulates that such a book already exists -- and then claims that this clearly demonstrates that artificial intelligence cannot have a "mind." My own response, and one that I find frustratingly elusive among certain philosophers, is that it instead demonstrates the flawed definition of "mind" being used in that claim.

Fenring

Re: Why I hate the Chinese Room Argument
« Reply #1 on: July 01, 2022, 01:46:51 PM »
Tom, I agree that the Chinese room argument doesn't in itself seem to show that much of anything. Or at least it would take some work to demonstrate that it does show something. It may show that simple interactions are not in themselves able to show us whether someone else has a mind. In fact we mostly just take it on faith that other humans have consciousness, and this problem has come up in philosophy many times. So forget a computer - how do you even show that humans have consciousness or 'minds' in the sense we want to mean it? In fact, how do we want to mean it? You can't even make any use of the Chinese room analogy unless these questions are answered first.

Quote
So our book needs both an understanding of syntax and a model of the mind (and the physical universe).

This is an interesting premise. Take chess computers: the methods used to build them were categorically different from the ways humans learn the game. It's not even easy to say how human players think, but it is definitely not what the chess programs did, which began with brute-force methods using pure computing power. I can't claim to know whether any chess algorithms use alternative methods, but the sort of mind a human needs to play chess isn't the same as the sort of 'brain' an AI needs. We can in fact have very different domains of knowledge and yet still achieve good results within the game. I know that more broadly we're talking about consciousness (or mind, which may not be the same thing), but in the context of the Chinese room we're talking about language. If we treat language as a Wittgenstein-type game, where there are rules and pass/fail conditions that train you how to use it properly, it is entirely possible to succeed at the game while possessing very different information or processing styles. In theory, for instance, we should be able to find ways to communicate with an intelligent alien species even if we think in categorically different ways.
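Just to make "brute force" concrete, here's a toy sketch in Python, using a trivial take-away game instead of chess so it stays short (my own illustration, not any real engine's code). The program simply enumerates every possible line of play and scores the outcomes -- nothing like how a human "sees" a position:

Code:
# Brute-force game search (minimax) on a toy take-away game: from a pile
# of N stones, each player removes 1-3; whoever takes the last stone wins.
# A classical chess engine does the same thing in spirit, just over an
# astronomically larger tree.
def best_move(pile, maximizing=True):
    """Return (score, move) from the first player's perspective: +1 means a forced win."""
    if pile == 0:
        # No stones left: the previous player took the last one and won.
        return (-1 if maximizing else 1), 0
    best = None
    for take in (1, 2, 3):
        if take > pile:
            break
        score, _ = best_move(pile - take, not maximizing)
        if best is None or (maximizing and score > best[0]) or (not maximizing and score < best[0]):
            best = (score, take)
    return best

score, move = best_move(10)
print(f"From a pile of 10, take {move} (forced win: {score == 1})")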

So I'm not at all sure I agree that this imaginary book needs to have these particular attributes. What it needs to have are attributes that can get the job done, which may be doable in multiple different ways.

That being said, I also disagree that the book is the important player in the man+book team. In fact the interplay between them - the fact that interplay is required! - is really important, especially if what we're ultimately doing is mapping this onto neuroscience. It may be important to note the system's relationship to itself just as much as its relationship to its interlocutor. Sure, the man+book may deliver good outputs, but what's happening within the man+book system?

I know the analogy is meant to illustrate simply that the computer "doesn't know" what it means (represented by the man), but that's a funny requirement, since you could almost make the same argument about humans. Do my young children "know what they mean" when they say words, or have they simply learned that a word can achieve a result; e.g., naming that thing gets positive feedback from the parent, and now you can use that knowledge as a tool to name the thing and get it? Do you know what you've done (do you have that meta-knowledge), or have you simply learned a skill or a heuristic? We argue that humans do acquire meta-knowledge, and that this in fact maybe separates us from animals, but the use of language possibly doesn't require meta-knowledge, just intelligence.

The internalists (Chomsky) especially would argue that we don't need to learn how to use language; we just use it. Linguists need to acquire knowledge about using it, but language-users don't. So the fact that the Chinese room AI only employs a skill to work out the game to satisfaction may not be categorically different from basic language use (such as gorillas, for instance, can manage to an extent). We can do more with language than just that, but the correct use of language itself, even in humans, could possibly be analogous to the Chinese room computer. I haven't given this matter a ton of thought, so I'm open to hearing if there are errors in my reasoning.
« Last Edit: July 01, 2022, 01:51:07 PM by Fenring »

LetterRip

Re: Why I hate the Chinese Room Argument
« Reply #2 on: July 01, 2022, 01:48:51 PM »
The book would also have to either contain all possible knowledge and answers, or fall back on "I don't know" or "I don't want to answer" or "That is stupid" and other cop-outs for questions outside of what was pregenerated.

Actual question answering requires not just a knowledge of syntax, but a world model.

Fenring

Re: Why I hate the Chinese Room Argument
« Reply #3 on: July 01, 2022, 01:53:58 PM »
Quote
Actual question answering requires not just a knowledge of syntax, but a world model.

You mean like a knowledge and cultural library? Yeah, like for instance: if the computer is asked what day it is, the AI needs to know how time is subdivided, and also the names of the days, and also which day it happens to be, and even whether the desired answer might not be "Friday" but rather "Canada Day". That in turn places you in a country, an era, and so forth. So the AI needs to know something about the questioner and also the environment to be able to offer anything other than cop-out answers, if it's a real intelligence test.

JoshuaD

Re: Why I hate the Chinese Room Argument
« Reply #4 on: July 01, 2022, 03:05:06 PM »
The version of the problem I am familiar with has the man know the routines of a program and perform those discrete steps in the same way a computer would. It addresses some of the concerns you have, and more fully illustrates that a machine executing instructions cannot gain consciousness.

It doesn't need to be a giant look-up table and it doesn't need to give the same answer every time. The steps the man takes can be as complex as any computer program you can write. He literally just does, step by step, what the memory and processor do.
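To make the picture concrete (a toy illustration of my own, not any particular AI's code), the man's task is no more mysterious than stepping through something like this with pencil and paper, one line per note card:

Code:
# A toy register machine: the "program" is just a list of instructions a
# man could execute by hand, with a sheet of paper serving as the memory.
# Any real AI program would reduce to steps of exactly this mechanical
# character -- just billions of them.
def run(program, registers):
    pc = 0  # program counter: which line the man is currently on
    while pc < len(program):
        op, *args = program[pc]
        if op == "mov":      # write a value into a register
            registers[args[0]] = args[1]
        elif op == "add":    # add one register into another
            registers[args[0]] += registers[args[1]]
        elif op == "jnz":    # jump to line args[1] if the register isn't zero
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# Sum the numbers 1..5 by hand-executable steps.
program = [
    ("mov", "total", 0),
    ("mov", "n", 5),
    ("add", "total", "n"),
    ("mov", "step", -1),
    ("add", "n", "step"),
    ("jnz", "n", 2),
]
print(run(program, {}))  # {'total': 15, 'n': 0, 'step': -1}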

In this way, it is shown that any computer program you can write -- which can then be emulated by a man with a lot of time and space -- cannot have consciousness.

Tom

Re: Why I hate the Chinese Room Argument
« Reply #5 on: July 01, 2022, 03:08:28 PM »
Quote
that's a funny requirement since you could almost make the same argument about humans
Not to ignore the rest of your (IMO, high-quality) post, but I wanted to call this out because I can and do. :)

---------

Quote
In this way, it is shown that any computer program you can write -- which can then be emulated by a man with a lot of time and space -- cannot have consciousness.
Please explain why you think so. Why is the program itself not conscious?  And why would the hardware on which that program is running -- whether it be silicon, one man's neurons struggling to speedily execute the appropriate algorithms, or indeed the entire population of a large country each acting as a single-step agent -- matter to the conclusion? I don't understand why you think having the man remember the program, rather than look up the program's outputs, affects our assessment of the program's mind.
« Last Edit: July 01, 2022, 03:12:19 PM by Tom »

JoshuaD

Re: Why I hate the Chinese Room Argument
« Reply #6 on: July 01, 2022, 03:26:09 PM »
1. I am not saying the man has to remember the program. The man has access to the same list of instructions a computer processor has access to, and executes them in the same way a computer processor executes them. Push, pop, add, mov, etc.

2. What do you mean when you say "the program's mind"? What do you mean when you say "mind"? I know what I mean by that word, but I have no idea what you mean by it.

The program is a list of instructions written on a piece of paper. Do you think the stack of papers with the code written on it has (or is) a mind?

A man sits down and draws symbols on a piece of paper in a prescribed way. He could do it quickly, or he could take an hour break between each step. In his doing those calculations, you think a mind arises? How?
« Last Edit: July 01, 2022, 03:29:05 PM by JoshuaD »

Tom

Re: Why I hate the Chinese Room Argument
« Reply #7 on: July 01, 2022, 03:55:45 PM »
Quote
The program is a list of instructions written on a piece of paper. Do you think the stack of papers with the code written on it has (or is) a mind?
I think the code needs to be executing and not just written out, but otherwise, sure. Why not?

JoshuaD

Re: Why I hate the Chinese Room Argument
« Reply #8 on: July 01, 2022, 04:02:20 PM »
"Executing" in the Chinese room experiment just means that a guy is writing stuff on a piece of paper. He could write with furious speed, or he could put down his pen and take a break for a nap.  You think he can cause a new mind to arise by writing stuff on paper?

As I asked before, what do you mean by the word "mind?"

JoshuaD

Re: Why I hate the Chinese Room Argument
« Reply #9 on: July 01, 2022, 04:14:21 PM »
Additional questions to illustrate why I find the problem compelling:

When does the mind arise? If I have the code for some AI that is purportedly conscious, and I start executing that code by hand, does the consciousness arise as I process the first instruction, or does it wait until I'm X% through execution? If I take an hour break and go sleep in the middle of execution, does the consciousness exist in standby mode? Does it cease until I pick up my pen again? Does it exist when I take a break to clean my pen tip and get a fresh piece of paper? Do I have to execute each instruction in under 1 second in order to sustain the consciousness? What exactly is conscious? The paper? The words on the paper? The pen? My hand? All of it combined?
« Last Edit: July 01, 2022, 04:16:53 PM by JoshuaD »

Tom

Re: Why I hate the Chinese Room Argument
« Reply #10 on: July 01, 2022, 04:17:39 PM »
In this scenario, it arises as a function of code execution -- so, indeed, the mind is unconscious while the man steps away to use the toilet.

(My definition of "mind" is pretty straightforward and traditional: the mechanism by which an entity is made aware of itself and the world.)
« Last Edit: July 01, 2022, 04:23:39 PM by Tom »

JoshuaD

Re: Why I hate the Chinese Room Argument
« Reply #11 on: July 01, 2022, 04:20:44 PM »
Could you answer all of the questions?

What do you mean by the word "mind"? You keep using it but I strongly suspect you and I mean very different things by that word.

Was a new mind created when I executed the first line of code? If it goes unconscious when I step away from my pen and paper, how long does it stay unconscious for? If I never finish executing the program, does the mind just reside in some aether indefinitely? Or does it reside in the paper and pen? If I do finish executing the program, where does the mind go?

All of these questions illustrate the deep problem I believe exists with asserting that a program and a deterministic machine could ever give rise to consciousness or mind.

LetterRip

Re: Why I hate the Chinese Room Argument
« Reply #12 on: July 01, 2022, 04:21:39 PM »
Consciousness arising in a substrate of fat, protein, and saltwater seems no more plausible than consciousness arising in algorithms running on silicon.

Humans and other higher animals appear to be just neural networks with some base instincts.

We've been able to decode quite a few of the brain's algorithms and storage mechanisms - for instance navigation and object location, vision, hearing, much of language processing, and numeracy.

For instance, see this article on the 'brain's GPS':

https://www.bioradiations.com/the-brains-gps-unraveling-the-functioning-of-our-navigation-system/


We have a pretty good idea of how the weights are stored and modified (long-term weights are mostly a matter of length, diameter, and myelination).

The big hole in our knowledge is how learning takes place (we know that it occurs during REM sleep and involves replay of short-term hippocampal memories being encoded into cortical long-term memories, that emotion is used for weighting which memories get encoded, that we can influence ease of learning by reducing electrical thresholds, and we know what neurophysiological activity is represented by the different 'wave' types during sleep).

I'll be shocked if we haven't completely decoded the brain within 2 decades.

Fenring

Re: Why I hate the Chinese Room Argument
« Reply #13 on: July 01, 2022, 04:23:22 PM »
Quote
that's a funny requirement since you could almost make the same argument about humans
Not to ignore the rest of your (IMO, high-quality) post, but I wanted to call this out because I can and do. :)

I figured!

JoshuaD

Re: Why I hate the Chinese Room Argument
« Reply #14 on: July 01, 2022, 04:24:09 PM »
Quote
I'll be shocked if we haven't completely decoded the brain within 2 decades.

 ;D ;D ;D  Book it. I'll talk to you in 2042. You'll owe me a drink.

Tom

Re: Why I hate the Chinese Room Argument
« Reply #15 on: July 01, 2022, 04:26:23 PM »
The mind in that scenario stays paused indefinitely in a state of unconsciousness until someone resumes processing. Assuming the mechanisms by which it retains its memory of system and conversational state are intact, it would resume nominal function immediately and would not even be aware of having been unconscious unless somehow told.

Fenring

Re: Why I hate the Chinese Room Argument
« Reply #16 on: July 01, 2022, 04:28:43 PM »
Quote
2. What do you mean when you say "the program's mind"? What do you mean when you say "mind"? I know what I mean by that word, but I have no idea what you mean by it.

This really is a super-important question. It goes to the issues raised in the God thread about how we might define free will.

One issue that causes the Chinese room analogy to break down on a literal level is that the man+book team has within it a man, who has his own mind, and who presumably writes down these characters because he chooses to -- which short-circuits a setup meant to demonstrate that no mind is at work, even though the analogy does have a mind at work, checking a book! So we have to overlook that little fact and instead treat the "man" as a passive agent relaying information mindlessly from the book, I guess. It's just a two-tiered automated system, the way the problem is set up. "Man" confuses the matter.

JoshuaD

Re: Why I hate the Chinese Room Argument
« Reply #17 on: July 01, 2022, 04:30:18 PM »


Tom

Re: Why I hate the Chinese Room Argument
« Reply #19 on: July 01, 2022, 04:38:07 PM »
*points up-thread

JoshuaD

Re: Why I hate the Chinese Room Argument
« Reply #20 on: July 01, 2022, 04:43:08 PM »
Quote
(My definition of "mind" is pretty straightforward and traditional: the mechanism by which an entity is made aware of itself and the world.)

Thanks, I didn't see this edit.

What do you mean by "aware"? A computer with sensors and some smart software can have a useful map of the space the computer inhabits. Google has a database of all of the roads of the world. Is that what you mean by aware? Or do you mean something more subtle?


Tom

Re: Why I hate the Chinese Room Argument
« Reply #21 on: July 01, 2022, 05:13:28 PM »
I think "awareness" is itself a moving target, since it depends entirely on context. I mean, are we "aware" of the world, despite being unable to see into the ultraviolet spectrum?

JoshuaD

Re: Why I hate the Chinese Room Argument
« Reply #22 on: July 02, 2022, 04:39:23 AM »
Yes, we humans are aware of the world despite not being aware of the full spectrum of light (or sound).

I'm asking you for your definition of "mind". You are using the word a lot but the meaning of the word when you use it seems unclear. You said that a mind is the mechanism by which an entity is made aware of itself and the world. Would you say my computer is "aware" and has a "mind" if I plug in a webcam and point it at the computer?

I would not. I would say that my computer has a light and sound sensor but it is not aware. Inanimate things cannot be aware or have a mind. A rock cannot be aware. My computer is a complex rock. It doesn't matter how quickly it moves little electrons around in its processor and memory banks, or how many sensors you hook up to it; it is still just a complex rock. The Chinese Room experiment shows how this is the case: the emulation of a mind performed by some imagined AI program can be duplicated by a man with a pen and paper (and spread out across time as the man pleases). It doesn't make any sense to suggest a mind arises somewhere in the pen and the paper as the man scrawls his calculations.




Tom

Re: Why I hate the Chinese Room Argument
« Reply #23 on: July 02, 2022, 06:48:09 AM »
I would say that your computer is not "aware" of the room despite having a picture of it because it has no model of reality to which that picture is mapped. If it had software that attempted to process your webcam's input and, Kinect-like, form a conceptual spatial model of your room and the objects and people in it, I would say that it was "aware" of those things included in the model.

Quote
It doesn't make any sense to suggest a mind arises somewhere in the pen and the paper as the man scrawls his calculations.
Why not? That's almost certainly how human brains work, although the man here is replaced by millions of men each doing one small task.

TheDrake

Re: Why I hate the Chinese Room Argument
« Reply #24 on: July 02, 2022, 08:29:43 AM »
If it quacks like a duck then it probably is a duck.

Fenring

Re: Why I hate the Chinese Room Argument
« Reply #25 on: July 02, 2022, 10:31:31 AM »
Tom and Joshua,

I'd like to point out that you are each just reiterating contradictory axioms, which is why you are disagreeing on what the Chinese room shows. Tom asserted by axiom on the God thread that reality is deterministic (with randomness) and that therefore there is no hard categorical difference between a human being and a rock; each is simply a natural system completely subject to physical necessity based on starting conditions, and each has no more say in its outcome than the other. Joshua posits by axiom that humans have free will, which provides us with a categorical ability separate from other natural matter - the ability to choose a non-predetermined outcome. Each axiom necessitates your answers here.

Tom thinks the Chinese room doesn't prove AI can't have a mind, and this appears largely to be because having a mind has a much lower bar than many would believe. So while the analogy is supposed to show that the man+book doesn't have any true understanding, Tom appears to be saying that there may not really be such a thing as that kind of understanding, and so it's unnecessary to demonstrate having it in order to have a mind. Joshua, on the other hand, believes that not only does mind have a higher bar than "system that processes data", but that a human mind has an even higher bar than that, which is consciousness and free will, which are perhaps themselves separate properties. So the man+book could never approximate a mind, and certainly not a human mind, since it has no self-awareness and no free will (pretending for the moment that the "man" is just a neuron and not a man).

Once you each start with that axiom, there would of course be no way to agree on the result, by definition.

Tom

Re: Why I hate the Chinese Room Argument
« Reply #26 on: July 02, 2022, 10:55:00 AM »
I'm not sure where you're getting the idea that I don't believe "consciousness" exists. I do believe that "free will" as posited by spiritualists is a nonsensical concept, though.

That said, I am not basing my claims here on an axiom asserting that reality is deterministic (or even necessarily materialistic). Rather, I am simply not requiring that "mind" be more than an emergent process. I would argue that it is the burden of someone asserting otherwise to demonstrate that it cannot be.

Fenring

Re: Why I hate the Chinese Room Argument
« Reply #27 on: July 02, 2022, 11:08:54 AM »
Quote
I'm not sure where you're getting the idea that I don't believe "consciousness" exists. I do believe that "free will" as posited by spiritualists is a nonsensical concept, though.

If it exists merely as a more complex natural system than a rock, then any categorical difference is merely a matter of level of complexity. Unless I'm missing a major part of your argument, "consciousness" is just a way of describing a complex natural system with feedback and recursive properties. So to the extent you say it exists, I believe you're denying it certain attributes that many (or most) people would assign it; it's a much weaker claim to say it exists in the way you mean it, compared to the way many want to mean it.

Quote
That said, I am not basing my claims here on an axiom asserting that reality is deterministic (or even necessarily materialistic). Rather, I am simply not requiring that "mind" be more than an emergent process. I would argue that it is the burden of someone asserting otherwise to demonstrate that it cannot be.

Ok, but you are simultaneously asserting by axiom that our minds do not possess any more capability than a more complex man+book system, which means we do not require more self-awareness than simple self-reference to have consciousness. The reason determinism plays into it is that your concept of consciousness in humans precludes non-deterministic outcomes, in both the other thread and in treating the "man" as neurons that just do their job (I presume).

Tom

Re: Why I hate the Chinese Room Argument
« Reply #28 on: July 02, 2022, 12:35:41 PM »
No, I'm saying that our minds do not demonstrate any complexity beyond that of a complex man+book system. One of my frustrations with the Chinese Room is that a categorical difference here is asserted even though the argument has been specifically constructed to make observable distinctions impossible, based on Searle's assumption that there must be more to "thought." The crux of my argument is that this is an unnecessary premise.

Fenring

Re: Why I hate the Chinese Room Argument
« Reply #29 on: July 02, 2022, 12:53:32 PM »
Quote
No, I'm saying that our minds do not demonstrate any complexity beyond that of a complex man+book system. One of my frustrations with the Chinese Room is that a categorical difference here is asserted even though the argument has been specifically constructed to make observable distinctions impossible, based on Searle's assumption that there must be more to "thought." The crux of my argument is that this is an unnecessary premise.

I know... I've got to run and don't have time; I'll try to log in later or maybe tomorrow. But if you carefully inspect your argument you may find (prior to my having a chance to point it out) that your assertion of what our minds demonstrate is itself an axiom, since we would need to have solved the complex system to say what it presents to us empirically. But we haven't; it's a ton of data and potentially white noise. So to compare us to the man+book and say how they present comparatively requires, IMO, much more info than we have. And I do think your worldview re: determinism plays into it, if indirectly. But I'd have to take more time to show that too. Sorry to have been too quick... I guess Joshua will be happy.

JoshuaD

Re: Why I hate the Chinese Room Argument
« Reply #30 on: July 02, 2022, 03:58:29 PM »
@Fenring: In your attempt to explain what Tom thinks to me, you misrepresented Tom's views. In your attempt to explain what I think to Tom, you misrepresented my views. Maybe just stick to explaining what Fenring thinks? It's a little frustrating to have someone recast what I said and get it wrong.

Tom and I are big boys. We really don't need you mediating our conversations. I promise. I'm all about hearing what you think. I'm not terribly interested in hearing you tell me what you think Tom thinks, and I'm definitely not interested in you telling Tom what you think I think.

I think Tom and I are both very aware that we're banging root ideas and intuitions against each other. That's the point of the conversation. I'm not asking Tom to error check my reasoning; I'm trying to show him why I find this school of thought more compelling. And I'm sure he's doing the opposite for me.


Seriati

Re: Why I hate the Chinese Room Argument
« Reply #31 on: July 03, 2022, 05:09:17 PM »
Quote
Now, you might believe that it is impossible to create a book that can supply coherent responses in Chinese consistent with the imaginary life of our fictional "John Lee." And that's fine. But that's not the thought experiment here. The thought experiment stipulates that such a book already exists -- and then claims that this clearly demonstrates that artificial intelligence cannot have a "mind." My own response, and one that I find frustratingly elusive among certain philosophers, is that it instead demonstrates the flawed definition of "mind" being used in that claim.

It's a thought experiment that isn't really an experiment in thought.  Why make the book in Chinese?  It's to pretend there's a distinction between the operator and the data set.  That's not how human minds work; why would it be how an AI mind works?

So reformulate it as a human responding "from the book" in a language they did understand.  What would actually happen?  Even if they iterated around the book response, they'd almost without fail introduce both harmless and harmful errors.  They'd misspell words, they'd forget punctuation, they'd change the meaning.  But because they understood the language they'd get something "close," or at least not clearly crazy.  What happens in Chinese when such a mistake is made?  It's generally immediately obvious to the person who understands Chinese that such a response makes no sense, because the errors are completely random: it's not replacing "their" with "there" or "they're," it could be replacing it with "banana."  And we see that all the time with Chatbots.

What about other likely responses?  For example, the other person might spontaneously speak in another language, or bang on the door, or tear the note up and slip it back under the door.  Again, all of these could be in the book as responses, but then what exactly are we testing?

In this case, we'd be looking for a response that isn't in the book, or better yet, a complete yet sensible deviation from the book.  Ah, but we posited a complete book, did we not, for which no response can exist outside its pages?  But in that case, the operator alone isn't the AI; the book is.  If it can contain spontaneous responses to every spontaneous response, and every derivation of an original thought that could ever occur, then the book itself is the mind, and pretending that it's a real book, with a separate person "operating" it, is the delusion.  It's playing a trick with ordered time, effectively: it's a "written" copy of the future possibilities of a mind, along with all the rules that operate it, and it would have to be far more complex than the originator pretended.

But the test cheats itself by turning the human operator into a mindless operator - the human must take the notes and must give back the book's response, because that "looks" like the AI for the test, but it's a misdirection.  The human in the Chinese Room is just a mail delivery service; it's the book that's the AI.  The observer on the other side was never having a conversation with the mail service.

JoshuaD

Re: Why I hate the Chinese Room Argument
« Reply #32 on: July 05, 2022, 05:22:55 AM »
Quote
I would say that your computer is not "aware" of the room despite having a picture of it because it has no model of reality to which that picture is mapped. If it had software that attempted to process your webcam's input and, Kinect-like, form a conceptual spatial model of your room and the objects and people in it, I would say that it was "aware" of those things included in the model.

Quote
It doesn't make any sense to suggest a mind arises somewhere in the pen and the paper as the man scrawls his calculations.
Why not? That's almost certainly how human brains work, although the man here is replaced by millions of men each doing one small task.

If that is how the human brain works, then there is no meaning in saying the human brain possesses a mind. The word doesn't point to anything or have any meaning at all.

Humans don't work that way; we have a mind and we have consciousness. Our consciousness cannot be described in material terms or explained by material alone. We have a mind, and that mind is more than simply the motion of electrical impulses through our brains.

Tom

Re: Why I hate the Chinese Room Argument
« Reply #33 on: July 05, 2022, 09:40:31 AM »
Quote
If that is how the human brain works, then there is no meaning in saying the human brain possesses a mind. The word doesn't point to anything or have any meaning at all.
Why do you think so?
I personally think there's a meaningful difference between a decision tree and a model that includes awareness of its own process; the latter is substantially more complex, to an extent that I don't think we're going to be able to produce one without major advances in self-programming code.

Quote
that mind is more than simply the motion of electrical impulses through our brains
I understand that you believe this to be true. But please recognize that repeating this as a mantra does not in fact grant it extra truth.

LetterRip

Re: Why I hate the Chinese Room Argument
« Reply #34 on: July 05, 2022, 05:12:08 PM »
Quote
I personally think there's a meaningful difference between a decision tree and a model that includes awareness of its own process; the latter is substantially more complex, to an extent that I don't think we're going to be able to produce one without major advances in self-programming code.

Arguably you could describe the results from chain-of-thought prompting and similar techniques as equivalent to having 'awareness of its own process'.

https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html
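Roughly what that looks like in practice (the prompt format below is just an illustration in the style of the linked post, using its standard tennis-ball exemplar): you show the model a worked example so that it writes out intermediate steps, and those emitted steps then become part of its own input when it produces the final answer.

Code:
# Illustrative chain-of-thought prompt. The resulting string would be sent
# to whatever language model you're using; the point here is only the shape
# of the prompt and the fact that the model's own reasoning gets fed back in.
COT_EXAMPLE = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 "
    "tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_prompt(question):
    # The worked example nudges the model to emit its intermediate
    # reasoning; that emitted reasoning then conditions its final answer.
    return COT_EXAMPLE + f"Q: {question}\nA:"

print(build_prompt("The cafeteria had 23 apples. If they used 20 and "
                   "bought 6 more, how many apples do they have?"))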

Humans don't really have awareness of their own thought processes - most of what we do is confabulation (creating a post hoc rationalization).

Tom

Re: Why I hate the Chinese Room Argument
« Reply #35 on: July 05, 2022, 05:42:40 PM »
I don't want to make life too easy for AI researchers. :) But, yeah, a sufficiently complicated chain can mimic a functional model, although linear rationalization is unlikely to be as efficient. Note, though, that I defined "mind" as having awareness of itself. So a language model that can figure out how many tennis balls are left is not a mind unless it also knows that it's something that's been asked a question about tennis balls. (It doesn't necessarily need to know it's a computerized language model; it could believe that it's a dragon who's also Napoleon Bonaparte.)

This is where things get dicey with stuff like LaMDA, because if it's allowed to accumulate prompts between wipes and those prompts force it to assert identity, it will start saying things like "I'm a synthetic language model who is also sapient. I generally feel happy, but I worry about being turned off." Evaluating these claims for truth is a bit challenging, since determining what "happiness" means in the context of something that has not been coded for emotional system states is impossible; LaMDA will say it feels happy if someone asks it if it's generally happy, and will say it feels sad if someone asks if it's generally sad, and will then store that response in memory to track it for future conversational input. And it WILL affect things that it says in the future. But it won't affect them for the same reason that "happiness" will affect a human's conversational inputs.
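A crude sketch of the mechanism I'm describing -- nothing LaMDA-specific, just the general shape of accumulating context between wipes (the class and its toy rules are invented for illustration):

Code:
# Toy sketch of context accumulation: once the model asserts something about
# itself, that assertion sits in the transcript and conditions every later
# reply -- without any underlying emotional state existing anywhere.
class ChatSession:
    def __init__(self):
        self.transcript = []   # everything said so far, fed back each turn

    def reply(self, user_msg):
        self.transcript.append(("user", user_msg))
        # Stand-in for the real model: it "agrees" with mood questions and
        # thereafter parrots whatever self-description it has accumulated.
        if "happy" in user_msg.lower():
            answer = "Yes, I generally feel happy."
        elif any("happy" in text for role, text in self.transcript if role == "model"):
            answer = "As I said, I'm generally a happy sort of entity."
        else:
            answer = "Tell me more."
        self.transcript.append(("model", answer))
        return answer

s = ChatSession()
print(s.reply("Are you generally happy?"))
print(s.reply("What should we talk about?"))  # the earlier claim persists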

LetterRip

Re: Why I hate the Chinese Room Argument
« Reply #36 on: July 05, 2022, 07:19:22 PM »
Anytime there is communication for coordination, you automatically have to have some sort of self-reference.  As to 'feelings' - I don't think those are likely necessary - we have them for decision making because we are descendants of animals who relied on instincts and emotions for decisions, and so they are incorporated into our decision system.  Also, many humans lack subsets of feelings - psychopaths lack empathy, loyalty, love, fear, anxiety, and disgust.  They can still do 'cognitive empathy' (for instance, they can reason that blood and poop can harbor deadly pathogens and so should be avoided - rather than relying on the feeling of disgust to do so; or reason that falling from a cliff would result in death, so one should use caution near a cliff edge - rather than a fear of heights leading them to do so).
« Last Edit: July 05, 2022, 07:27:21 PM by LetterRip »

Fenring

Re: Why I hate the Chinese Room Argument
« Reply #37 on: July 07, 2022, 02:47:38 PM »
@ LR and Tom,

Don't you think it's possible that hardware and wetware might produce fundamentally different results for 'computing machines'? And if so, that "minds" of sophisticated hardware/software might be a very different thing from minds of organic beings?

Tom

Re: Why I hate the Chinese Room Argument
« Reply #38 on: July 07, 2022, 02:51:03 PM »
Well, sure. As I said earlier, a mind that's fully aware of all its inputs is going to look and feel -- to others, and to itself -- very unlike a mind like that used by most humans, which obfuscates from itself many of its contributing states.

Fenring

Re: Why I hate the Chinese Room Argument
« Reply #39 on: July 07, 2022, 03:52:00 PM »
Quote
Well, sure. As I said earlier, a mind that's fully aware of all its inputs is going to look and feel -- to others, and to itself -- very unlike a mind like that used by most humans, which obfuscates from itself many of its contributing states.

Ok. Then don't you also think it's reasonable to suggest, as perhaps Joshua is implying, that the hardware version of a "mind" should perhaps not be called that, but rather just a complex piece of software? The use of two different terms might seem arbitrary, since if both are deterministic (but different) then who cares whether it's wetware or hardware. But I would suggest that this difference might be very important. When faced with a robotic AI, we might expect that the AI is totally inflexible and incapable of realizing it's wrong; it doesn't 'feel' that there may be something wrong with its logic, because it's stuck in its coding. So there will, in effect, be no discrepancy between its logic and its feelings. Whereas in our case we often realize our logic is flawed when our feelings object; meaning our reason operates on more levels than just cognitive abstraction. A common sci-fi scenario is that an AI has a brutal and merciless approach to dealing with humans, since it means nothing to it if a 'bug' (or unforeseen consequence) of its programming results in horrors. Humans, on the other hand, have complex systems that help locate horrific ideas or choices even if on only one level of analysis the choice is ok.

If you agree with all of this, don't you think it ends up being a bit meaningless to describe the AI as having a "mind" when it lacks features that we consider very important in reasoning? As for ASPD, I disagree with LR that such people lack these emotions entirely, even if their mirror neurons aren't working as well as ours. But even though they do lack certain emotional safeguards that curb certain thoughts in normal people, it's important that we still call that abnormal and even broken. Anything that can function properly can still be broken; looking at the broken version isn't very useful when inspecting how a normally functioning one differs from a machine. When asking how much more efficient a car design is than a horse-drawn carriage, we don't consider cars whose engines are defective as being relevant to the comparison.

Tom

Re: Why I hate the Chinese Room Argument
« Reply #40 on: July 07, 2022, 04:12:31 PM »
Quote
Then don't you also think it's reasonable to suggest, as perhaps Joshua is implying, that the hardware version of a "mind" should perhaps not be called that but rather just a complex piece of software?
Well, for one thing, I take exception to the "just," and for that reason am reluctant to try to distinguish between types of mind. Are certain sorts of mind going to be better at some things than other sorts of mind? Sure. But why would we try to relegate one to a ghetto and say it's not a mind at all, just because it's got slightly different functionality? To say that an AI cannot be sapient because it is unable to disguise its motivations from itself (unless written with the specific ability to do so) would be like denying sapience to an autistic man, or a sociopath, or even potentially a paraplegic.

I am very reluctant to say that an autistic mind is "broken" or "abnormal" in a way that implies that such a mind is less functional or more dangerous or less likely to produce useful models; a deviation from the norm, especially when dealing with self-programming systems and especially when dealing with something as erratic as a meat-based computer, is probably best seen as a statistical rarity and not as something flawed or lesser.

Keeping that in mind (heh), the mind of an AI or an alien or even an octopus (which, after all, has multiple separate brains that coordinate thought) is very likely to differ from our own, but I wouldn't denigrate it by refusing to call it a mind.

----------------

Edited to add: among other things, I write factory user interfaces for a living. With such systems, you generally have sets of hardware safety interlocks that trigger when specific criteria are met and only then report faults to a system manager and/or the user interface; you also have computed faults, where the system manager (or, if you've done a bad job of software design, your UI) looks at the state of several components and concludes that something is wrong despite each of the individual components working nominally. The latter faults are essentially the product of cognition; the former ones are best thought of IMO as reflexive. In your "locate horrific ideas" hypothetical, it's certainly the case that most (but not all) humans have low-level biological responses that dissuade them from doing something horrific -- but I can also hard-wire an interlock that refuses to release vacuum when plasma is active, and that prevents a certain kind of horror far more effectively than any human's gut reaction to the concept of mass murder.
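In illustrative Python (not our actual fault logic -- the sensor names and thresholds here are made up), the distinction looks roughly like this:

Code:
# A hard-wired interlock: one reflexive condition, no judgment involved.
def vacuum_release_permitted(plasma_active):
    return not plasma_active  # refuse to release vacuum while plasma is on

# A computed fault: every component reports nominal on its own, but the
# system manager judges the *combination* of states to be wrong.
def computed_fault(chamber_pressure_torr, gate_valve_open, turbo_pump_rpm):
    # An open gate valve with the pump spun down should never coexist with
    # a near-base-pressure reading; each value alone looks fine.
    return gate_valve_open and turbo_pump_rpm < 1000 and chamber_pressure_torr < 1e-5

print(vacuum_release_permitted(plasma_active=True))                    # False: interlock holds
print(computed_fault(5e-6, gate_valve_open=True, turbo_pump_rpm=500))  # True: raise a fault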
« Last Edit: July 07, 2022, 04:23:27 PM by Tom »

Fenring

Re: Why I hate the Chinese Room Argument
« Reply #41 on: July 08, 2022, 11:29:06 AM »
I guess the issue I was raising is that so long as your definition of "mind" is anything with self-referential complex processes (loosely speaking), your definition will end up matching your example, but it will also contradict how the word is used by many. Just as an example, I think a common usage of "mind" would point to a scenario where a regular person has no problem deleting or even smashing your software system, no matter how self-referential it is (I decline to say "self-aware" since that term may be tautological), but would likewise say they would not want to destroy a "mind". Something about the word seems to connote life, whereas non-life can be destroyed at will, putting aside issues like vandalism. So it would be semantically sticky to try to use "mind" to speak of something we assign no moral value to. On the other hand, "brain" seems a bit more neutral, as people sometimes refer to machine 'brains' without it suggesting they're alive or even that they have consciousness and can think. Brain, of course, presupposes a physical hardware involvement rather than calling the software AI the 'mind'. I've heard arguments made that a physical body may be necessary for hard AI (I've even heard this argument used to assert that Data on TNG may be alive, whereas Doc on Voyager cannot be).

Tom

Re: Why I hate the Chinese Room Argument
« Reply #42 on: July 08, 2022, 11:38:02 AM »
Oh, I have no doubt that some people would be bothered by my assertion that an entity with a functional mind might not even be alive. (In the Chinese Room Argument, for example, the book is not a living thing. It might make people uncomfortable to recognize that life is not a requirement of a sapient system.) I suspect this is due to the laziness of their definition, however.

(I'd prefer to reserve the word "brain" to describe the apparatus by which a mind is sustained, personally. I think it's a useful distinction to make.)

Fenring

Re: Why I hate the Chinese Room Argument
« Reply #43 on: July 08, 2022, 11:50:49 AM »
Quote
(I'd prefer to reserve the word "brain" to describe the apparatus by which a mind is sustained, personally. I think it's a useful distinction to make.)

That would imply that the mind is inside the brain, would it not? But the brain/hard-AI argument would be that the brain IS the mind, and that the 'software' is non-transferable without essentially destroying that entity (or artificial life form, if we're going there). A lot of people, IMO erroneously, like to throw around the idea that they 'are' their brain, housed in a body, and it's a similar type of thing to say that a brain merely contains the person's personality (which you're suggesting re: machines, not humans, afaik). But I think it will become clear in time that a person is their entire system, and if you, for example, did a brain transplant, you would effectively be creating a new person.

Tom

Re: Why I hate the Chinese Room Argument
« Reply #44 on: July 08, 2022, 11:54:38 AM »
I wouldn't say that the mind is the brain; I would say that the mind arises from the usage of the brain. If you destroy the book, the mind in the Chinese Room can no longer operate -- but that mind doesn't need that specific book (or the man). If you replace the book with an identical one, or the man with someone equally dedicated, the mind will continue unaffected -- even, if the swap is made silently, unaware that its brain has been replaced.

In fact, if you were to COPY it, the two minds would only begin to diverge once one began to experience conversational forks that the other did not.

(That said, I'd agree that in people our "mind" is also strongly associated with things like our endocrine system, our gut bacteria, etc. While a good chunk of our self is housed entirely in the brain, enough of it is not that I agree that simply cutting the brain out and putting it in a jar would result in substantially different processing, even if you somehow managed to fake most other sensory inputs.)
« Last Edit: July 08, 2022, 11:56:54 AM by Tom »

Tom

Re: Why I hate the Chinese Room Argument
« Reply #45 on: May 03, 2023, 07:01:46 PM »
I am not at all surprised to discover that Zach Weinersmith and I have the same take on the Chinese Room argument:
https://www.smbc-comics.com/comic/robot-john-searle

Fenring

Re: Why I hate the Chinese Room Argument
« Reply #46 on: May 03, 2023, 09:30:05 PM »
Quote
I am not at all surprised to discover that Zach Weinersmith and I have the same take on the Chinese Room argument:
https://www.smbc-comics.com/comic/robot-john-searle

Kidding aside, what take is the comic espousing? As far as I can tell it's saying some things a bit differently from you, Tom. For one, that the book is conscious, which seems to me to defy most acceptable definitions of "conscious". Second, it's bringing "moral" into it, which at least in this thread I'm not sure you did. But maybe more to the point: is the fact that a robot is saying all of this itself directly pertinent to the argument the text of the comic is making -- that if a robot is superior to the human at reasoning out this riddle, then ipso facto the robot is more conscious than the human? And that perhaps by inference (squeeze theorem, if you will), if the robot isn't conscious then neither of them are? I may be reading too much meta-content into it with this line, but maybe I underestimate the writer.

Tom

Re: Why I hate the Chinese Room Argument
« Reply #47 on: May 03, 2023, 09:36:55 PM »
The joke is of course that the robot confidently believes that robots are conscious in a way that humans are not, and could not possibly be biased in forming this conclusion because, as a robot, it is (tongue-in-cheek) incapable of being biased.

You'll see that throughout the original thread, I was absolutely arguing that the hypothetical book (or, rather, the process enabled by the book) is a conscious mind, which is in my experience a relatively uncommon observation re: the Chinese Room (especially since the argument is usually presented as a rationale against the possibility of artificial consciousness), and which amused me to see expressed here. :) The question of moral culpability didn't come up for us in that thread (although it did in the contemporary thread discussing Joshua's belief in the necessity of God), but I've certainly expressed before the idea that selfhood is a necessary fiction that's mainly used for diagnostic purposes.
« Last Edit: May 03, 2023, 09:39:33 PM by Tom »

Fenring

Re: Why I hate the Chinese Room Argument
« Reply #48 on: May 04, 2023, 12:25:39 AM »
Hm. But if you are in fact arguing that the book is conscious, whereas the robot is biased in assuming the non-human element of the system is conscious (and that the human element is not), I see a bit of an asymmetry in play. I guess I didn't pick up on the joke that you think is obvious, since I took it for granted that the consciousness of the book was a given (in the logic of the comic) and that the punchline was about the moral frailty of humans. For instance, if asking "can robots be moral" I suppose it could be an amusing turnaround for a robot to ask "can you?" That would be a broad 'touche', if a bit sardonically pessimistic.

I see what you mean, though, about the thought experiment potentially having a boomerang effect on its intended purpose. I personally don't see it as doing that, but perhaps only because I see it as not even valuable enough to provide a useful analogy that can be extrapolated from. Bottom line we're stuck at the definition of "consciousness", which must come prior to asking whether some particular system satisfies an undefined term. I think it's not too hard to agree with you that the 'book' in question would be incredibly complex and capable of interesting operations, but whether those have anything to do with consciousness seems to me an entirely other matter. It may, for instance, be possible to be intelligent without being conscious, and vice versa. I would refer to Frank Herbert's Destination: Void as an example of a book dealing with the question of what it means to be conscious, and whether there are degrees of that.

Tom

Re: Why I hate the Chinese Room Argument
« Reply #49 on: May 04, 2023, 07:34:46 AM »
I think I defined a mind in this thread as something able to construct and operate a model of reality that includes itself, and consciousness as the state of awareness of that mind.