My essay deals with the issue of real human minds versus machine minds. One of the classic attempts to show that AI protocols don't produce real thinking and consciousness is John Searle's "Chinese Room." In it, an operator has a mass of notes describing how to output "reasonable-sounding" (Turing Test-passing) answers in Chinese to questions in Chinese. Searle and "mysterians" such as myself say no, the CR does not really understand Chinese. (It's a separate issue, which I tackle in my own essay, whether simulating the neurons directly - rather than the overall process - can actually create real consciousness. I say no to that as well.) I thought of some further refinements of the Chinese Room which readers should find interesting. Anyone heard of anything like the "Addition Room" below?
Searle says the CR doesn't really "understand" Chinese because, of course, he means something more than just producing the output - why else emphasize the alternative way the CR does its job? (Again, it is "the system," the virtual mind, that is claimed not to understand. We knew the human operator doesn't know Chinese - let's put that straw man to rest.) His critics say yes it does, because they in essence define "understanding" as performance. Searle's objections have little chance against a near tautology built for convenience rather than insight. Any challenge of the form "it produces X, but ..." will be taken as "understanding," while the "but" is ignored. Even Ned Block's similar "Blockhead" thought experiment is absorbed. But being able to do things is ... being able to do them. You can't force coverage or co-option of other intended meanings or phenomena by announcement or circular definition. Instead, let's reconsider afresh whether such behavior should always be taken as "understanding."
First, it is too easy to let the CR give only generic answers. Ask the CR personal questions about itself, and seek elaboration, such as (in translation): "What is your favorite color? Are your feelings easily hurt? Do you approve of GMOs? Do you have a religious faith? What were your unspoken thoughts a minute ago? Imagine an animal - what does it look like?" Ah, now what? The answers would be lies in effect, or at least empty falsehoods. The process designer needs to construct a plausible but imaginary subjective "self," with a history, to go with "understanding Chinese." Isn't that more to do, with deeper implications? Who decides what is credible - the educated public, or canny psychologists? Then what about describing noises, or external things in the room's surroundings - those can't be programmed into the CR in advance. Understanding Chinese means being able to talk about what you're looking at, or a theorem you just thought of.
The functionalist critique is wearing thin. What if we defenders of the CR say: really understanding Chinese means the capability to give honest answers to all questions? (To keep the "game" a game, we can exclude questions that would directly unmask the respondent, etc.) Yes, "how can we tell" - but it's also fair game to pose this conceptual distinction and to show where one is coming from. Perhaps the following is the ultimate refinement: can we teach French to the CR? It seems the programmer would have to include all possible languages in advance, since the CR cannot pick them up "naturally." That final task looks truly undoable, at last. And a real mind that can understand Chinese can learn French, or even a newly invented language. This gets stickier and stickier for functionalists the harder we make their job.
Now consider something simpler and perhaps decisive: the "Addition Room." The AR stores all answers to integer addition questions like "7 + 5 = ?" (within some range). It does not do any computation. So, someone inputs "A + B." The AR operator (a Chinese peasant who never learned Arabic numerals) just looks up the question in a table, finds the answer stored there, and sends that out. "Look, I put in 7 plus 5 and got 12. This thing can add." Really? Sure it gives you the answer - we stipulated that - but should we accept "addition" being literally defined as just coming up with the answers? How about defining "doing addition" as truly calculating the answer, by computationally summing the inputs? The system does not know how to add. It provides the answer without "doing addition." It comes down to this: if either process A or process B can produce output C, I get to pick which of the two I mean by "doing X." (For the overall meaning, priority wins.) Now the irony: the AR isn't even a true computational intelligence. How can you agree that "the Addition Room doesn't really do addition" yet say "the Chinese Room really does understand Chinese"? Delving deeper: what if we imagined that real computation produced some kind of "experience" that mere lookup did not?
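For readers who like the contrast made concrete, here is a minimal sketch in Python (the class names are my own illustration, not anyone's canonical formulation). It contrasts an Addition Room that only looks up stored answers with an adder that actually computes them:

```python
# A minimal sketch of the Addition Room versus a genuine adder.
# Both objects return correct sums within range; only one computes them.

class AdditionRoom:
    """Answers by pure table lookup, like the operator with his stored notes."""
    def __init__(self, limit=100):
        # The designer did all the adding in advance; here that precomputation
        # happens once, at table-construction time.
        self.table = {(a, b): a + b for a in range(limit) for b in range(limit)}

    def answer(self, a, b):
        # No arithmetic occurs here: the operator just finds the stored entry.
        return self.table[(a, b)]

class Adder:
    """Actually sums the inputs each time it is asked."""
    def answer(self, a, b):
        return a + b

room, adder = AdditionRoom(), Adder()
print(room.answer(7, 5))   # 12 -- looked up, not calculated
print(adder.answer(7, 5))  # 12 -- calculated
```

From the outside the two are indistinguishable within the stored range; the difference lies entirely in the mechanism - which is precisely the distinction the performance definition of "understanding" erases.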