Phi 109
Spring 2016
Reading guide for Tues. 4/19: Gibbard and Varian, “Economic Models”—on JSTOR at 2025484

Allan Gibbard (1942–) is a philosopher and Hal Varian (1947–) is an economist. The nature of economic models is a standard topic in the philosophy of economics, and the views of the economist Milton Friedman (1912–2006) that they mention on p. 671 represent a frequently discussed position concerning it.

In the introductory section (pp. 664–6), notice the distinctions between econometric and theoretical models, between theoretical models that are ideal and descriptive, and between descriptive models that are approximations and caricatures. The latter distinction represents one of the main topics of the article.

§I (pp. 666–8) introduces quite a number of terms that will be used in the rest of the article. Most of these are defined there, but, when defining ‘model’, Gibbard and Varian assume familiarity with two terms from logic, ‘predicate’ and ‘quantifier’. All you really need to know about these ideas is that they are used to specify the logical form of statements in a way that determines which statements follow from others; the actual content of the statements may be settled by adding further information about these elements of form, information provided by the “story” Gibbard and Varian speak of. (If you are curious, you can find a brief account of predicates, quantifiers, and some of the ideas related to them below.)

§II–III (pp. 668–73). These sections concern the idea of models as approximations. Notice (i) the contrast with what Gibbard and Varian call the “naive view” (pp. 668–9), (ii) their description of models as approximations and evidence for them (pp. 669–70), (iii) the contrast with Friedman’s view (p. 671), and (iv) their distinction between econometric and “casual” application (pp. 672–3). You’ve seen the general idea of approximation show up in discussions of other social sciences from Mill on; think how much of that discussion concerns something analogous to what Gibbard and Varian have in mind (even if there are no mathematically stated models in question).

§IV–VI (pp. 673–7). These two sections (there is no §V) focus on caricatures and their difference from approximations. The idea of caricatures is one of the more original features of this article and the chief reason I assigned it. How is the understanding derived from distortion related to that associated with at least approximate accuracy? Would something like the use of caricatures make sense in less mathematical social sciences?

Predicates and quantifiers

You can think of a predicate as a place-holder for a sentence with a certain number of blanks, so a predicate with a given number of blanks stands for a way of saying something about that many things. For example, a predicate with one blank can express a property a single thing might have or not have (and is thus similar to a grammatical predicate) while a predicate with two blanks can express a relation between two things. A predicate with one blank is interpreted by specifying the collection of things it is true of, its extension. The extension used to interpret a predicate with two blanks will be a collection of pairs of things (and so on for predicates with even more blanks).

Quantifiers are expressions, corresponding to ‘everything’ and ‘something’ in English, that are used to state generalizations or claims of existence. A quantifier is interpreted by specifying the domain of the quantifier—i.e., by saying what are to count as “things.” The domains of quantifiers will be broad collections of things of interest in a theory (often the same one for all quantifiers), with more specific generalizations and claims of existence stated using both quantifiers and predicates.
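If a concrete illustration helps, the two paragraphs above can be mirrored directly in a small Python sketch (my own illustration, not from the article): a one-place predicate is interpreted by a set of things (its extension), a two-place predicate by a set of pairs, and the quantifiers ‘everything’ and ‘something’ range over a stipulated domain.

```python
# Illustration only: interpreting predicates by extensions and
# quantifiers by a domain, using a toy domain of four numbers.

domain = {1, 2, 3, 4}  # the quantifiers' domain: what counts as a "thing"

# A one-place predicate ("is even"), interpreted by its extension:
is_even = {2, 4}

# A two-place predicate ("is less than"), interpreted by a set of pairs:
less_than = {(x, y) for x in domain for y in domain if x < y}

# "Everything is even" -- a universal quantifier over the domain:
all_even = all(x in is_even for x in domain)   # False

# "Something is even" -- an existential quantifier:
some_even = any(x in is_even for x in domain)  # True

# A more specific claim combining quantifiers with both predicates:
# "Every even thing is less than something."
claim = all(any((x, y) in less_than for y in domain)
            for x in domain if x in is_even)   # False: nothing exceeds 4
```

Changing the domain or the extensions changes which statements come out true, while the logical form of each statement stays fixed; that separation is the point of the apparatus.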

The point of these ideas is that deductive relations among statements can be understood by analyzing those statements in terms of predicates, quantifiers, and a small number of connectives (corresponding to words like ‘not’, ‘and’, and ‘or’). The structure exposed by this sort of analysis determines what theorems follow from given postulates in a purely formal way—i.e., without reference to particular interpretations of the predicates and quantifiers.