I’ve assigned this article for the sake of the last half of it. So, while the first half is worth reading (it will address criticisms made by people interested in the history of science, an issue mentioned in Giere’s first paragraph), it’s fine to read it quickly.
• Salmon's discussion of the hypothetico-deductive method beginning on p. 75 raises some of the same issues as Giere but from the point of view of someone far less sympathetic to that approach to confirmation. (Salmon's comment that the H-D account is "woefully inadequate," p. 76, nicely matches Giere's comment, regarding views like the ones Salmon advocates later, that "no [such] model of science … will prove adequate," p. 277.) We won't be able to spend much time on this discussion, but do note the diagram on p. 76; this picture of deduction from a hypothesis will be useful to have in mind at a number of points later in the course.
• The remainder of §2 (pp. 77-80) is where Salmon presents the Bayesian approach to confirmation. This is likely to occupy the bulk of our time on Salmon, and I’ve added a few explanatory notes on his discussion to the end of this guide.
• At the end of §2 and in §3, Salmon addresses the role of something like prior probabilities in a variety of views about science. He begins and ends with the idea of an ascription of plausibility to hypotheses that both helps to explain and is grounded in the historical development of science. In between, he considers the place of prior probabilities in a variety of positions on science. In the course of this he sketches interpretations of probability—the “logical probability” of people like Carnap, the “frequency theory” that Salmon supports, and the “subjective” or “personal” probability favored by many Bayesians. Along with the idea of “propensity” noted by Giere (on his p. 279), these form the key recent ways of understanding what probability is.
Some notes on Salmon’s discussion of Bayes’s theorem
The two equations below restate Salmon's equations on p. 78 with several changes. First, I've relabeled his letters A, B, and C as K, H, and O, respectively (for "background knowledge," "hypothesis," and "observation") to suggest the role of these terms in the case of theoretical confirmation. Second, I've moved K (i.e., his A) from the first argument to a subscript; the quantity PK obeys the same laws as P (though, of course, their values are likely to differ; e.g., PK(H,O) = P(K & H,O) ≠ P(H,O) unless K is statistically irrelevant to O given H). Finally, I've used the first equation to state the denominator of the second in a simpler way.
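In this relabeled notation, the two equations should read as follows:

PK(O) = PK(H) × PK(H,O) + PK(~H) × PK(~H,O)

PK(O,H) = PK(H) × PK(H,O) / PK(O)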
Salmon uses the more complicated denominator in his statement of Bayes's theorem (his 2nd equation) to show that the posterior probability PK(O,H) depends only on the prior probability PK(H) and the two likelihoods PK(H,O) and PK(~H,O). The dependence on these three factors can be seen even more clearly by considering Bayes's theorem for odds,* something that is possible whenever neither the prior probability of ~H nor the probability of O given ~H is 0. Under these conditions, the theorem for odds is this:

oddsK(O,H) = oddsK(H) × [PK(H,O) / PK(~H,O)]
(which results from dividing the probability theorem for H by the same theorem for ~H). This form of the theorem permits the simple statement that the posterior odds are the result of multiplying the prior odds by the ratio of the likelihoods.†
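None of the following is in Salmon, but if it helps to see the odds form at work, here is a small computational sketch in Python (the specific numbers are invented purely for illustration) checking that the odds form and the probability form of the theorem agree:

# A quick numerical check of the odds form of Bayes's theorem.
# The values of prior, lh_H, and lh_notH are made up for the example;
# they stand for PK(H), PK(H,O), and PK(~H,O) respectively.

def to_odds(p):
    """Convert a probability to odds in favor."""
    return p / (1 - p)

def to_prob(odds):
    """Convert odds in favor back to a probability."""
    return odds / (odds + 1)

prior = 0.25     # PK(H): prior probability of the hypothesis
lh_H = 0.8       # PK(H,O): probability of the observation given H
lh_notH = 0.1    # PK(~H,O): probability of the observation given ~H

# Probability form: PK(O,H) = PK(H)*PK(H,O) / [PK(H)*PK(H,O) + PK(~H)*PK(~H,O)]
posterior = (prior * lh_H) / (prior * lh_H + (1 - prior) * lh_notH)

# Odds form: posterior odds = prior odds * likelihood ratio
posterior_odds = to_odds(prior) * (lh_H / lh_notH)

print(posterior)                # 0.7272..., from the probability form
print(to_prob(posterior_odds))  # the same value, recovered from the odds form

Starting from a prior of 25% and a likelihood ratio of 8, both routes give a posterior of about 73%.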
*If you are not used to thinking in terms of odds, simply note that odds are related to probabilities by the following equations:
oddsK(H) = PK(H) / (1 − PK(H))          oddsK(O,H) = PK(O,H) / (1 − PK(O,H))

PK(H) = oddsK(H) / (oddsK(H) + 1)       PK(O,H) = oddsK(O,H) / (oddsK(O,H) + 1)
so a probability of 75% amounts to odds of 3 to 1, and odds of 2 to 3 amount to a probability of 40%.
†Of course, the ratio of the likelihoods PK(H,O) and PK(~H,O) also determines the change from the prior to the posterior probability, but probabilities are not simply multiplied by this ratio; how much they change depends on how large they already are. Small probabilities increase only a little less than the corresponding odds do, provided the increase in odds is not too great: a probability of 1% amounts to odds of 1 to 99, and doubling those odds to 2 to 99 yields a probability of 2/101 ≈ 1.98%. However, even large increases in odds have little effect on the corresponding probabilities when those are already large: a probability of 99% amounts to odds of 99 to 1, and increasing those odds more than 10-fold to 999 to 1 raises the probability only marginally, to 99.9%.
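If you would like to check figures like these yourself, the conversions in the first note are easy to compute; a short Python sketch:

# Quick checks of the numerical examples in these notes.
def to_odds(p):      # probability -> odds in favor
    return p / (1 - p)

def to_prob(odds):   # odds in favor -> probability
    return odds / (odds + 1)

print(to_odds(0.75))    # 3.0: a probability of 75% is odds of 3 to 1
print(to_prob(2 / 3))   # 0.4: odds of 2 to 3 are a probability of 40%
print(to_prob(2 / 99))  # 0.0198...: doubling odds of 1 to 99 gives about 1.98%
print(to_prob(999))     # 0.999: odds of 999 to 1 are a probability of 99.9%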