Brooks is very good about making the structure of his paper clear and telling you what he is up to, so I will simply direct you to some things that might make good points to discuss.
• As §1 makes clear, Brooks’ aim is quite different from that of the work described by Churchland and Sejnowski, and different in a way that might make Brooks’ work seem less relevant to understanding human intelligence. But each of the aspects of his work outlined in this section could have implications for issues we have considered. In particular, his doubts about representation might suggest that Fodor and Pylyshyn’s concern about the systematicity of connectionist representation is a red herring.
• The criticisms of classical AI and the emphasis on non-verbal intelligence in §§2-3 might suggest a similarity to the connectionists, but note that they, too, are held up for ridicule in his first story (p. 299). On the other hand, note the support eliminativists could find in his suggestion that our common-sense views about even our own perceptual world might be quite wrong (see p. 301, just before §3.1).
• Section 4 is the place to think about whether you take his incremental development of “Creatures” to be something that could shed light on human intelligence. Notice also the comments about representation he makes along the way.
• Connectionists often describe their systems as having a sort of representation, albeit one that differs from that of classical systems. Brooks says that his systems are so different that he hesitates to describe them as having representation at all. There are two key ideas to think about when reading his argument for this in §5.
• First, he argues that the decomposition of his robots means that there is no central representation. Try to decide whether you think he is right about this. Also think about his suggestion that central representations might be imputed by observers (see, for example, the beginning of §5.1, p. 304).
• Second, think about Brooks’s suggestion that the world be used as its own model (see, for example, the next-to-last bullet point before §5.1 on p. 304). This can seem to fly in the face of the traditional suggestion that an important part of intelligence is the ability to think of objects in their absence, and it might seem to make learning and planning difficult.
• Think about these issues as you consider Brooks’s descriptions of his robots in §6. Work through the diagram in Figure 20.1 in as much detail as possible using his description on pp. 306-307.
• In a later version of this paper (published in Haugeland’s Mind Design II), Brooks added a section in which he summarized the key ideas of his work in a series of labels and slogans:
• Situatedness: “The world is its own best model.”
• Embodiment: “The world grounds the regress of meaning-giving.”
• Intelligence: “Intelligence is determined by the dynamics of interaction with the world.”
• Emergence: “Intelligence is in the eye of the beholder.”
The second of these may have fewer clear ties to what Brooks has said than the others do. To see how it might apply, look back at Haugeland’s discussion of “original intentionality” (pp. 210f).