The Chinese virtual room
So I've been asked to give my account of the mind-body problem. Kind of a huge question (quick answer: dualism is wrong, the brain creates the mind, and artificial intelligence is at least theoretically possible), but I started writing something a while back about Searle's "Chinese room" argument, and this seems like a good opportunity to finish it.
Many years ago, the philosopher John Searle made a much-discussed argument about what a mind really is, based on the idea of the "Chinese room":
Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

The argument was intended to refute computer pioneer Alan Turing's conception of understanding, which was that if a computer could carry on a conversation with a human being such that no one could tell it was a computer, the computer really would understand language and should be considered intelligent. Searle replied that the person in the Chinese room does not understand Chinese, so neither would a computer following a similar set of instructions (i.e., a computer program).
I like the Virtual Mind reply, which is related to the Systems Reply. The Systems Reply is that even though the person in the Chinese room doesn't understand Chinese, the system made up of the room + the person + the rulebook does understand. This is vulnerable to Searle's counterreply, that the person could just memorize the rulebook, yet still wouldn't understand Chinese.
The key to understanding why Searle is wrong there lies in the idea of virtual machines. This is an idea familiar to most computer users: if you have a Mac but want to run Windows-only programs, you run a Windows "emulator" that creates a virtual Windows machine on top of your Mac. Even though the virtual Windows machine is contained within the Mac, it still has states that the Mac does not have. For example, the virtual Windows machine could crash, even though the Mac software underneath is still running fine. Similarly, the "virtual Chinese-speaking mind" that the English speaker implements by memorizing the rulebook can understand Chinese even though the English speaker underneath cannot understand Chinese. (Similarly, the virtual Chinese mind probably doesn't understand English, even though the English mind underneath does.)
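The point about virtual machines having states their hosts lack can be made concrete with a toy example. Here's a minimal sketch in Python (the `TinyVM` class and its instruction set are my own invention, purely for illustration): a host program interprets a tiny stack machine, and the *virtual* machine crashes while the host keeps running happily.

```python
# A toy "virtual machine": the host Python process interprets a tiny
# stack machine. The VM has its own state (including "crashed") that
# is distinct from the state of the host running it.

class TinyVM:
    def __init__(self, program):
        self.program = list(program)  # list of (opcode, args...) tuples
        self.stack = []
        self.crashed = False

    def step(self):
        """Execute one instruction of the virtual machine."""
        if self.crashed or not self.program:
            return
        op, *args = self.program.pop(0)
        if op == "push":
            self.stack.append(args[0])
        elif op == "div":
            b, a = self.stack.pop(), self.stack.pop()
            if b == 0:
                self.crashed = True  # the VIRTUAL machine crashes here...
            else:
                self.stack.append(a // b)

vm = TinyVM([("push", 6), ("push", 0), ("div",)])
while vm.program and not vm.crashed:
    vm.step()

# ...yet the host process is still running fine, and can even report
# on the crash from the outside:
print("virtual machine crashed:", vm.crashed)
```

The analogy to the memorized rulebook is direct: the English speaker plays the role of the host interpreter, and "understands Chinese" is a state of the virtual machine, not of the host.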
An afterthought: a friend of mine recently pointed out that the "Chinese room" is not only wrong, it's also kind of racist and offensive: namely, the selection of Chinese as a language so alien and bizarre that it's the perfect intuition pump for seeing a lack of "understanding." Especially the characterization of Chinese characters as "squiggle squiggle" and "squoggle squoggle." It's part of the whole Orientalist thing, the Asian as the permanent, unassimilably exotic Other. (Related to the all-too-common exchange: "Where are you from?" "New York." "No, where are you really from?" as if a Chinese-American could not be a true New Yorker.) It doesn't help that the "Chinese room" argument is descended from the "Chinese nation" or "Chinese gym" argument (i.e., if everyone in China made phone calls in a pattern that mimicked the neural state of a human being in pain, would there be a conscious mind created that was in pain?), which summons up images of robotic, overcrowded hordes of "Chinamen." (Yellow peril, anyone?)
(Or maybe I'm just being paranoid...)