Phi 272
Fall 2013
Reading guide for Fri. 12/13: Daniel Dennett, “In Darwin's Wake, Where Am I?” (13-30), on JSTOR at 3218710

Dennett may be best known for his work on philosophical issues related to the idea of artificial intelligence, but he eventually also came to write about evolutionary theory; in this lecture, he combines these two interests. He is very clear about what he is up to, so I won’t say much about the content except to note that most of the lecture can be seen as illustration, explanation, and elaboration of the final paragraph of p. 16 (where he introduces the ideas of “lifting,” “cranes,” and “skyhooks”). My comments below are a few miscellaneous items of background information.

The quotation from Valéry might be translated: “Sometimes I think; and sometimes, I am.” It’s an allusion to Descartes’ “I think; therefore I am” (which is closely related to his idea of himself as a res cogitans, or “thinking thing,” which Dennett mentions later on the first page). The quotation from Picasso on p. 17 might be translated: “I don’t search; I find.”

Dennett employs a number of metaphors and other ideas from work on artificial intelligence (“AI” is the usual abbreviation—“strong AI” is the idea that intelligence in the ordinary sense can really be produced in machines).

One fundamental approach in AI is to solve problems by generating and testing possible solutions. The range of possible solutions can be thought of as a space through which the search is conducted (with the value of each possibility sometimes thought of as altitude in a terrain).
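The generate-and-test idea can be sketched in a few lines of code. Everything here is invented for illustration (the “terrain,” its peak, and the function names are my assumptions, not Dennett’s): candidates are points in a one-dimensional space, and the “altitude” of each is given by a scoring function.

```python
import random

def altitude(x):
    """Toy terrain: higher is better; the single peak is at x = 3."""
    return -(x - 3) ** 2

def generate_and_test(trials=1000, seed=0):
    rng = random.Random(seed)
    best_x, best_value = None, float("-inf")
    for _ in range(trials):
        x = rng.uniform(-10, 10)   # generate a possible solution
        value = altitude(x)        # test it
        if value > best_value:     # keep the best one found so far
            best_x, best_value = x, value
    return best_x, best_value

best_x, best_value = generate_and_test()
print(best_x)  # a point close to the peak at x = 3
```

With enough trials, the best candidate found lands near the top of the terrain; the point of the metaphor is that the search never needs to understand the terrain, only to generate possibilities and test them.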

Another way of thinking of these possibilities is as a tree. A move in chess, for example, opens various possibilities for the next move, which can be thought of as branching off from it, with possibilities for the move after that branching off of each of these branches.

Ways of dropping certain moves from consideration can then be thought of as ways of pruning the tree, something that is crucial given that the number of possibilities can be vast (or even “Vast” in Dennett’s use of the term).
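To see how quickly such a tree grows, and how much pruning helps, here is a toy count of “lines of play.” The numbers are invented for illustration (moves are just integers, and the pruning rule is arbitrary); the point is only that dropping a few branches at each level shrinks the tree enormously.

```python
from itertools import product

MOVES = [-2, -1, 0, 1, 2]  # five possible moves at every position
DEPTH = 6                  # look six moves ahead

def count_lines(moves, depth):
    """Number of distinct lines of play to the given depth."""
    return len(list(product(moves, repeat=depth)))

full = count_lines(MOVES, DEPTH)
# Prune: drop the moves that look bad on their face (here, the negative ones).
pruned = count_lines([m for m in MOVES if m >= 0], DEPTH)

print(full, pruned)  # 15625 vs. 729
```

Cutting the branching factor from five to three reduces the six-move tree from 15,625 lines to 729; with realistic branching factors and depths, the unpruned numbers become astronomical, which is why pruning is essential and why Dennett can speak of “Vast” spaces of possibilities.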

The “Chinese Room” argument compares a computer program for understanding and answering questions in Chinese to the following situation and claims that there is no real understanding of Chinese involved:

Imagine that you are locked in a room, and in this room are several baskets full of Chinese symbols. Imagine that you (like me) do not understand a word of Chinese, but that you are given a rule book in English for manipulating these Chinese symbols. The rules specify the manipulations of the symbols purely formally, in terms of their syntax, not their semantics. So the rule might say: “Take a squiggle-squiggle sign out of basket number one and put it next to a squoggle-squoggle sign from basket number two.” Now suppose that some other Chinese symbols are passed into the room, and that you are given further rules for passing back Chinese symbols out of the room. Suppose that unknown to you the symbols passed into the room are called “questions” by the people outside the room, and the symbols you pass back out of the room are called “answers to the questions.” Suppose, furthermore, that the programmers are so good at designing the programs [i.e., the rules in the rulebook] and that you are so good at manipulating the symbols, that very soon your answers are indistinguishable from those of a native Chinese speaker.

From John Searle, Minds, Brains, and Science (Cambridge: Harvard University Press, 1984).

He goes on to note that he has described the situation in such a way that there is “no way you could learn any Chinese simply by manipulating these formal symbols,” so you will never be able to respond to questions in Chinese except by use of the rule book.