Although Bas van Fraassen (1941-) is of a different generation from Wesley Salmon (1925-2001), their ideas about explanation developed around the same time, and they reacted to each other’s views.
• The first part of the article (§§I-II.3, pp. 264-268), like the last part of Salmon’s, gives a survey of views of explanation. The most important things for our purposes are the examples that begin to accumulate and the “false ideals” he mentions in §I.
• The latter part of Wednesday’s assignment, §§II.4-6 (pp. 268-271), begins to direct attention to the issues that will figure in van Fraassen’s own account, the asymmetry of explanation and the fact that requests for explanation may be legitimately rejected.
• The second part of van Fraassen’s article presents his own view of explanation. In it, he treats issues raised in the first part of the article in reverse order: §III addresses the problems discussed in §II, and §IV returns to the issues mentioned in §I.
Some ideas and notation concerning probability. In a couple of places in the first part of the paper, van Fraassen uses a common notation for claims of “conditional probability.” In general, “P(A / B)” stands for “the probability of A given B.” This is the probability A has when we know that B occurs. For example, the unconditional probability that a fair six-sided die lands showing an odd number is 1/2—i.e., P(die shows odd no.) = 1/2—because half its sides show odd numbers. But the probability that it shows an odd number given that it shows a number less than 4 is different: P(die shows odd no. / die shows number less than 4) = 2/3, since 2 of the 3 numbers less than 4 (namely 1 and 3) are odd.
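The die example can be checked by brute-force enumeration. The following is a minimal sketch (the helper names `prob` and `cond_prob` are mine, not van Fraassen’s): conditioning on B just means restricting attention to the outcomes where B holds.

```python
from fractions import Fraction

# The six equally likely outcomes of a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]

def prob(event):
    """Unconditional probability of an event (given as a predicate)."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

def cond_prob(event_a, event_b):
    """P(A / B): probability of A among just those outcomes where B holds."""
    b_outcomes = [o for o in outcomes if event_b(o)]
    return Fraction(sum(1 for o in b_outcomes if event_a(o)), len(b_outcomes))

odd = lambda n: n % 2 == 1
less_than_4 = lambda n: n < 4

print(prob(odd))                     # 1/2
print(cond_prob(odd, less_than_4))   # 2/3
```

Conditioning shrinks the space of possibilities from six outcomes to three (1, 2, 3), which is why the probability of “odd” shifts from 1/2 to 2/3.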
This notation shows up in van Fraassen’s discussion of the “common cause principle,” an idea due to Hans Reichenbach (1891-1953), who was a teacher of both Hempel and Salmon. The idea concerns cases of “statistical relevance” between simultaneous events; and, when both A and B have probabilities greater than 0, that relevance can be expressed in any of the following ways:
P(A / B) ≠ P(A)
P(A & B) ≠ P(A) × P(B)
P(B / A) ≠ P(B)
The middle inequality says that A and B are not independent events since, if they were independent, the probability that both occur—i.e., P(A & B)—would be the product of the probabilities that each does. The inequalities above and below it convey the same idea by saying that knowledge that one of the events occurs affects the probability of the other. The principle of the common cause then says, roughly, that when simultaneous events A and B are statistically relevant, there must be an earlier event C such that knowledge that C occurs renders A and B independent—i.e., C is such that any of the following holds:
P(A / B & C) = P(A / C)
P(A & B / C) = P(A / C) × P(B / C)
P(B / A & C) = P(B / C)
Such a C is said to “screen off” B from A (or A from B) because the knowledge that C occurs keeps added knowledge that one of A and B occurs from affecting the probability that the other does. In a standard example, the correlation between a fall in a barometer and a storm has as its common cause the atmospheric conditions that produce both. Given knowledge of these conditions, adding knowledge that a barometer falls does not alter the probability of a storm.
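The barometer case can be made concrete with a small numerical model. The following is an illustrative sketch only—the particular probabilities are my assumptions, not from the text. C (the atmospheric conditions) is built so that A (barometer falls) and B (storm) are each independent given C and given not-C; the correlation between A and B then shows up unconditionally but vanishes once we condition on C.

```python
from fractions import Fraction as F
from itertools import product

# Toy joint distribution over (C, A, B): C = atmospheric conditions,
# A = barometer falls, B = storm. Numbers are illustrative assumptions.
p_c = F(1, 2)                                    # P(C)
p_a_given = {True: F(9, 10), False: F(1, 10)}    # P(A / C), P(A / not-C)
p_b_given = {True: F(9, 10), False: F(1, 10)}    # P(B / C), P(B / not-C)

# By construction, A and B are independent conditional on C (and on not-C).
joint = {}
for c, a, b in product([True, False], repeat=3):
    pc = p_c if c else 1 - p_c
    pa = p_a_given[c] if a else 1 - p_a_given[c]
    pb = p_b_given[c] if b else 1 - p_b_given[c]
    joint[(c, a, b)] = pc * pa * pb

def prob(pred):
    """Probability of the set of (C, A, B) worlds satisfying pred."""
    return sum(p for w, p in joint.items() if pred(*w))

# Unconditionally, A and B are statistically relevant:
p_a = prob(lambda c, a, b: a)
p_b = prob(lambda c, a, b: b)
p_ab = prob(lambda c, a, b: a and b)
print(p_ab, p_a * p_b)            # 41/100 vs 1/4 -- not equal

# But C screens off A from B: P(A & B / C) = P(A / C) × P(B / C).
p_c_total = prob(lambda c, a, b: c)
p_ab_c = prob(lambda c, a, b: c and a and b) / p_c_total
p_a_c = prob(lambda c, a, b: c and a) / p_c_total
p_b_c = prob(lambda c, a, b: c and b) / p_c_total
print(p_ab_c == p_a_c * p_b_c)    # True
```

Here P(A & B) = 41/100 while P(A) × P(B) = 1/4, so the barometer’s falling and the storm are correlated; yet given the atmospheric conditions C, the two come apart into independent events, exactly as the screening-off equations require.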