Being all deep and stuff: Thoughts on John Searle’s Chinese Room Experiment

posted by Dr. James G. Hood
Friday, May 28, 2010

Minds, Brains, Programs and Reflections

            John R. Searle focuses his discussion on the claims of strong artificial intelligence: that a computer with sufficiently sophisticated programming can have cognition, and that such computer cognition can explain human cognition. To demonstrate his point, Searle constructs a thought experiment in which an English-speaking person is placed alone in a room with a set of Chinese symbols. Using English instructions that provide rules for responding to the Chinese symbols, the person can produce answers identical to those a native Chinese speaker would give. The person, however, would have no actual understanding of the Chinese language. In the same way, a computer program has no understanding of its data; the data is given meaning only by the people who input it or interpret its outputs. Searle then examines the common objections to his thought experiment and refutes them. A key characteristic that sets human brains apart from computers is intentionality: brains can have beliefs, desires, anxieties, and fears that a computer cannot. Searle also points out, and refutes, common misconceptions about computers that lead people to believe they have cognition.

            One point in Searle’s argument that needs clarification is his discussion of the Robot Reply, in which he imagines someone serving as the robot’s homunculus without knowing what is going on. The very conception of a homunculus seems inappropriate and counterproductive. In the brain there is no homunculus; the brain and its thoughts are simply the sum of neuronal processes. Positing a homunculus creates a dualism, when in fact the brain and its functions constitute a single entity.

            The reflections paper critiques many of Searle’s points and clarifies others. First, it points out unrealistic assumptions in Searle’s thought experiment that Searle himself pays little notice to. Here the author’s comments seem edgy, bordering on ad hominem attack, especially in describing the humans in Searle’s argument as “demons”. As the argument progresses, however, the author puts the “demon” in context and shows that Searle’s response to the Systems Reply is insufficient, because the people in his thought experiment do constitute a system. The author then discusses the particular difficulties that understanding human thinking, especially the acquisition of language, poses. In this vein, the author describes virtual computers and their “levels,” and how they are analogous to human consciousness. The illustration is somewhat confusing and vague, but so is our understanding of human consciousness. The author also presents a particularly strong argument against one of Searle’s claims: Searle asserted that artificial intelligence proponents held an unavoidably dualist perspective, but the author counters that any such claims would be a posteriori, which the mind decidedly is not.
