In another thread, Joshua mentioned the Chinese Room Argument as "proof" that computers cannot contain consciousness. This stuck with me as a minor irritant, because I absolutely hate the Chinese Room Argument -- not as much as I despise Anselm's ontological proof of God, but nearly.
The idea is this: a man sits in a room, alone with some writing utensils and a huge reference book. Every so often, someone shoves a piece of paper under the door into the room; on this paper are written a few characters in Chinese. The man, who does not read Chinese but has near-magical pattern matching and page-turning ability, flips through the book to find a match. The book, being extremely complete, has a match -- and then supplies a few characters as a valid and acceptable response. The man copies those characters onto the bottom of the paper and shoves the note back under the door. The author of the original note, who does read Chinese, reads the response and then sends a new message.
In this way, the man in the room appears to conduct a perfectly intelligible conversation in Chinese (from the perspective of the Chinese-speaking person outside the room), despite not having any real understanding of the language.
The guy who originally penned this thought experiment, John Searle, did so as a response to the Turing Test. Searle points out, correctly (and as Turing himself explicitly acknowledged), that the Turing Test cannot be a proof of sapience; at the end of the day it's merely a proof of sufficiently good syntax. Obviously the "book" is an oversimplification here; it's meant to stand in for whatever language processing a computer possesses that allows it to respond to inputs in a seemingly coherent way (and to avoid, for example, replying with the exact same thing to the exact same input every time; a good syntax engine adds randomness and a memory of the conversation so far in a way a book could not). In and of itself, Searle's point is obviously true: a machine that parrots back responses without processing inputs cannot be said to have "understanding."
Searle goes one step further, though, and argues that his thought experiment proves that no artificial process can produce understanding. This deeply annoys me, because it somehow manages to obscure the real problem entirely -- which is not the man in the room, but the book. For the book to be sufficiently good, it needs to be generated by someone who, for all intents and purposes, appears to understand Chinese -- to know that a valid reply to "What is your name?" is "John Lee", and a valid reply thereafter might be "I already told you; I'm John Lee." More importantly, it has to have been written to consistently present a modeled reality: our imaginary John Lee needs to be able to talk about his favorite foods, his pets, his collegiate studies, and what his girlfriend likes to watch on television.
The man in the room doesn't need to know any of this, of course. But the book -- the set of rules that generates the responses supplied to the actual interviewer -- does. And to be really convincing, the book has to incorporate algorithms; it's not enough to simply randomize some responses, so there need to be Choose-Your-Own-Adventure-like paths that keep track of what's already been said, what the interviewer has asked, and how our imaginary "John Lee" feels about it.
So our book needs both an understanding of syntax and a model of the mind (and the physical universe).
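To make that concrete, here is a minimal, purely hypothetical sketch (in Python) of what one scrap of such a book might look like if it were a program rather than a bound volume: a few hard-coded persona facts, a record of the conversation so far, and branching rules that consult both. Everything here -- the ConversationBook class, the persona entries, the respond() method -- is invented for illustration; the real book would have to do this at the scale of an entire life.

```python
import random


class ConversationBook:
    """A toy stand-in for Searle's book: canned persona facts plus
    Choose-Your-Own-Adventure-style tracking of the conversation."""

    def __init__(self):
        # A tiny fragment of the modeled reality: "John Lee" and his life.
        self.persona = {
            "name": "John Lee",
            "favorite food": "dumplings",
            "pet": "a cat named Mochi",
        }
        # Historical memory of everything said so far.
        self.history = []

    def respond(self, message: str) -> str:
        self.history.append(message)
        if "name" in message:
            if any("name" in past for past in self.history[:-1]):
                # The path branches on what has already been discussed.
                reply = f"I already told you; I'm {self.persona['name']}."
            else:
                reply = f"My name is {self.persona['name']}."
        elif "food" in message:
            # A little randomness so identical inputs don't always
            # produce identical outputs.
            reply = random.choice([
                f"I love {self.persona['favorite food']}.",
                f"Mostly {self.persona['favorite food']}, honestly.",
            ])
        elif "pet" in message:
            reply = f"I have {self.persona['pet']}."
        else:
            reply = "Sorry, could you say that another way?"
        self.history.append(reply)
        return reply


if __name__ == "__main__":
    book = ConversationBook()
    print(book.respond("What is your name?"))
    print(book.respond("What is your name?"))  # follows the "already told you" path
    print(book.respond("What's your favorite food?"))
```

Even this toy version shows where the work lives: not in whoever (or whatever) mechanically executes the rules, but in the rules themselves and the modeled life behind them.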
Obviously a man flipping through a book is going to be far, far slower than a computer at following these algorithmic paths. And perhaps that slowness obscures a truth which I submit would otherwise be immediately evident: as far as can be established, in the Chinese Room Argument, the book -- if sufficiently convincing -- has a mind.
Now, you might believe that it is impossible to create a book that can supply coherent responses in Chinese consistent with the imaginary life of our fictional "John Lee." And that's fine. But that's not the thought experiment here. The thought experiment stipulates that such a book already exists, and then concludes that artificial intelligence cannot have a "mind." My own response, and one I find frustratingly rare among certain philosophers, is that the experiment demonstrates only that the definition of "mind" it started with was flawed.