5.3.1. Conditionalization
The truth conditions of the conditional, which count φ → ψ as true except when φ is T and ψ is F, may have reminded you of the definition of implication, which says that φ implies ψ if and only if there is no possible world in which φ is T and ψ is F. Even though similar, the two ideas are not the same, and the distinction between material implication on the one hand and logical implication on the other points to the difference between them. Saying that a conditional φ → ψ is true rules out only the actual occurrence of the values T for φ and F for ψ while saying that φ logically implies, or entails, ψ rules out the occurrence of this pattern in any possible world. The forecast It will rain tomorrow if the front moves through does not commit a meteorologist to the view that It will rain tomorrow is logically implied by The front will move through tomorrow.
This difference can be brought out in another way. In cases where a relation of entailment holds, the corresponding conditional is not only true but tautologous. For example, because It was hot and humid ⊨ It was hot, the conditional If it was hot and humid, it was hot tells us nothing; it is a tautology. And we can state this as a general principle: φ entails ψ if and only if φ → ψ is a tautology—in notation, φ ⊨ ψ if and only if ⊨ φ → ψ. Either way we are saying that we fail to have φ true and ψ false not merely in the actual world but in all possible worlds.
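For truth-functional sentences, the principle φ ⊨ ψ if and only if ⊨ φ → ψ can be checked mechanically by surveying all assignments of truth values to the sentence letters. Here is a small Python sketch of that survey; the helper names are ours, not the text's:

```python
from itertools import product

def entails(premises, conclusion, letters):
    """phi |= psi: no valuation makes every premise T and the conclusion F."""
    for values in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def tautology(sentence, letters):
    """|= phi: phi is T under every valuation (a valid conclusion from no premises)."""
    return entails([], sentence, letters)

# "It was hot and humid" |= "It was hot" ...
hot_and_humid = lambda v: v["H"] and v["M"]
hot = lambda v: v["H"]
# ... and, correspondingly, the conditional (H ∧ M) → H is a tautology.
conditional = lambda v: not (v["H"] and v["M"]) or v["H"]

print(entails([hot_and_humid], hot, ["H", "M"]),
      tautology(conditional, ["H", "M"]))  # True True
```

The two calls always agree: ruling out a valuation with φ true and ψ false is the same as requiring φ → ψ to be true everywhere.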
Since to be a tautology is to be a valid conclusion from no premises at all, the principle just stated provides a partial account of when a conditional is a valid conclusion. To cover cases where there are premises we can use the idea of implication given, or relative to, a set of additional premises. For example, a weather forecaster might say that the passing of a front implies rain, intending to rest this relation between the passing of the front and rain on certain assumptions about the conditions of the atmosphere and laws of meteorology. And when a scientific hypothesis is said to imply a certain result for an experimental test, this implication is based on certain assumptions about the behavior of the experimental set up. In such cases we say that a sentence ψ cannot be false when a sentence φ is true, provided that certain further assumptions Γ are true as well. But this is just to say that ψ is entailed by φ taken together with Γ—i.e., that Γ, φ ⊨ ψ. So relative implication is really just entailment with one premise singled out for special attention, something that it is quite reasonable to do when, as in the examples above, the set Γ of further premises is large or lacks definite boundaries.
Another way of singling out one assumption from a group of others is to make the conclusion conditional upon it. For example, we might say that, based on certain assumptions about the weather, we can conclude that it will rain if the front passes or that, based on assumptions about the experimental set up, we can conclude that an experiment will yield a certain result if our hypothesis is true. But this way of giving special attention to one of a group of assumptions is equivalent to making a claim of relative implication—that is, a conditional is a valid conclusion from given premises if and only if its antecedent implies its consequent given those premises. And this gives us our account of conditional conclusions:
Law for the conditional as a conclusion. Γ ⊨ φ → ψ if and only if Γ, φ ⊨ ψ (for any set Γ and any sentences φ and ψ).
To see the truth of this law, note that an entailment Γ ⊨ φ → ψ will hold if and only if there is no possible world in which φ → ψ is false while all members of Γ are true. But the sort of possible world that this rules out is one in which ψ is false while φ and the members of Γ are all true—i.e., one which is a counterexample to the argument Γ, φ / ψ. And to rule out such a possibility is to say that Γ, φ ⊨ ψ.
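In the truth-functional case, the two sides of the law can be computed separately and compared by surveying valuations. A brute-force sketch on one sample instance (the names and the particular sentences are illustrative):

```python
from itertools import product

def entails(premises, conclusion, letters):
    """No valuation makes every premise T and the conclusion F."""
    for values in product([True, False], repeat=len(letters)):
        v = dict(zip(letters, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# Sample instance: Gamma = {P or Q}, phi = not-Q, psi = P.
gamma = [lambda v: v["P"] or v["Q"]]
phi = lambda v: not v["Q"]
psi = lambda v: v["P"]
cond = lambda v: not phi(v) or psi(v)  # phi -> psi

left  = entails(gamma, cond, ["P", "Q"])         # Gamma |= phi -> psi
right = entails(gamma + [phi], psi, ["P", "Q"])  # Gamma, phi |= psi
print(left, right)  # True True — and the law says the two must always agree
```

A counterexample to either side would be a single valuation making ψ false while φ and all of Γ are true, which is exactly what the argument in the text rules out.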
Reading the law above from right to left, we move a premise past the sign ⊨, making the conclusion conditional on it. We will use the term conditionalization for this operation. Any result of the process is a conditionalization of the argument, and we will sometimes say, more specifically, that it is a conditionalization on the premise that is moved.
The law for the conditional as a conclusion tells us that an argument Γ / φ → ψ is valid if and only if the argument Γ, φ / ψ is valid. Moving from the first argument to the second will lead us to consider the latter argument in cases where we do not know the premise φ to be true. In such cases, Γ, φ / ψ will be an argument concerning a hypothetical situation, a hypothetical argument in the sense introduced in 4.2.2. Modifying an example used there, we can see the validity of the first argument below by noting the validity of the second.
Ann and Bill were not both home without the car being in the driveway
The car was not in the driveway
─────────────────────────────────────────────────────────────────────
If Ann was at home, Bill wasn’t

Ann and Bill were not both home without the car being in the driveway
The car was not in the driveway
Ann was at home
─────────────────────────────────────────────────────────────────────
Bill wasn’t at home
The first argument is a conditionalization of the second, and the law for the conditional as a conclusion tells us that the first is valid if and only if the second is. Someone who offers the first argument is unlikely to know whether or not Ann was at home, for if that were known there would be no reason to assert a merely conditional conclusion. Consequently, Ann was at home describes a situation the arguer will regard as hypothetical, and the second argument can be described as a hypothetical argument. This means that we establish conditionals the way we established disjunctions in the last chapter, as compounds that serve to state categorically the upshot of a hypothetical argument.
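Both arguments in this example are truth-functionally valid, something that can be confirmed by surveying the eight valuations of A, B, and C (with A for Ann being home, B for Bill being home, and C for the car being in the driveway). A sketch with illustrative names:

```python
from itertools import product

def valid(premises, conclusion):
    """No assignment to A, B, C makes every premise T and the conclusion F."""
    for A, B, C in product([True, False], repeat=3):
        if all(p(A, B, C) for p in premises) and not conclusion(A, B, C):
            return False
    return True

not_both_home_without_car = lambda A, B, C: not ((A and B) and not C)
car_not_in_driveway       = lambda A, B, C: not C
ann_was_home              = lambda A, B, C: A

# First argument: conditional conclusion A -> not-B.
first = valid([not_both_home_without_car, car_not_in_driveway],
              lambda A, B, C: not A or not B)
# Second (hypothetical) argument: A moved in as a premise, categorical conclusion not-B.
second = valid([not_both_home_without_car, car_not_in_driveway, ann_was_home],
               lambda A, B, C: not B)
print(first, second)  # True True
```

As the law predicts, the conditionalization and the hypothetical argument stand or fall together.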
In derivations, we can plan for a goal that is a conditional by setting out to reach it by a hypothetical argument. The rule embodying this approach, Conditional Proof (CP), is shown in Figure 5.3.1-1.
[Figure: a gap whose goal is the conditional φ → ψ is developed, at stage n, by adding the supposition φ and making ψ the new goal; the rule CP then discharges the supposition to conclude φ → ψ.]
Fig. 5.3.1-1. Developing a derivation by planning for a conditional at stage n.
When we apply CP, we add the antecedent of the conditional goal as a supposition and set its consequent as a new goal. We thus plan to carry out, in a vertical direction, the transition indicated by the arrow in the conditional.
As an example, here is a derivation for the argument above.
        │¬ ((A ∧ B) ∧ ¬ C)    2
        │¬ C                  (2)
        ├─
        ││A                   (3)
        │├─
2 MPT   ││¬ (A ∧ B)           3
3 MPT   ││¬ B                 (4)
        ││●
        │├─
4 QED   ││¬ B                 1
        ├─
1 CP    │A → ¬ B
Notice that the proximate argument of the gap after CP is applied is ¬ ((A ∧ B) ∧ ¬ C), ¬ C, A / ¬ B. That is, the ultimate argument of the derivation is a conditionalization on A of the proximate argument that results from CP. In short, when we apply CP, we plan to put ourselves in a position to conditionalize.
Of course, whenever we have premises, we are in a position to conditionalize, and the validity of the argument we have just considered establishes the validity of the result of conditionalization on its second premise: ¬ ((A ∧ B) ∧ ¬ C) / ¬ C → (A → ¬ B). This argument might be put into English as follows:
Ann and Bill were not both home without the car being in the driveway
─────────────────────────────────────────────────────────────────────
Unless the car was in the driveway, Bill wasn’t home if Ann was
A derivation for it will incorporate the derivation above, preceded by an initial use of CP.
        │¬ ((A ∧ B) ∧ ¬ C)    3
        ├─
        ││¬ C                 (3)
        │├─
        │││A                  (4)
        ││├─
3 MPT   │││¬ (A ∧ B)          4
4 MPT   │││¬ B                (5)
        │││●
        ││├─
5 QED   │││¬ B                2
        │├─
2 CP    ││A → ¬ B             1
        ├─
1 CP    │¬ C → (A → ¬ B)
After stage 2, we are making two suppositions—that the car is not in the driveway and that Ann is home—and we are thus considering a situation that is doubly hypothetical. And, in general, the most natural way of establishing the validity of a doubly conditional conclusion is by way of such a doubly hypothetical argument.
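The doubly conditional conclusion, too, can be confirmed truth-functionally: no assignment to A, B, and C makes ¬ ((A ∧ B) ∧ ¬ C) true and ¬ C → (A → ¬ B) false. A sketch of that check (names illustrative):

```python
from itertools import product

def valid(premises, conclusion):
    """No assignment to A, B, C makes every premise T and the conclusion F."""
    for A, B, C in product([True, False], repeat=3):
        if all(p(A, B, C) for p in premises) and not conclusion(A, B, C):
            return False
    return True

premise = lambda A, B, C: not ((A and B) and not C)
# not-C -> (A -> not-B), i.e. C or not-A or not-B once the conditionals are unpacked.
doubly_conditional = lambda A, B, C: C or (not A or not B)

print(valid([premise], doubly_conditional))  # True
```

Note that the premise is doing real work here: dropped from the argument, the doubly conditional sentence on its own is not a tautology, since it is false when A and B are true and C is false.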