5.3.1. Conditionalization

The truth conditions of the conditional, which count φ → ψ as true except when φ is T and ψ is F, may have reminded you of the definition of implication, which says that φ implies ψ if and only if there is no possible world in which φ is T and ψ is F. Of course, there is a difference between the two, and we have used the distinction between material implication on the one hand and logical implication on the other to point to this difference. To say that a conditional φ → ψ is true is to rule out only the actual occurrence of the values T for φ and F for ψ, while to say that φ logically implies, or entails, ψ is to rule out the occurrence of this pattern in any possible world. Given current conditions, a weather forecaster might assert If the front moves through tomorrow, it will rain, but no one would claim that The front will move through entails It will rain, that is, that it is logically impossible for the front to move through without it raining.

This difference can be brought out in another way. In cases where a relation of entailment holds, the corresponding conditional is not only true but tautologous. For example, because It was hot and humid ⇒ It was hot, the conditional If it was hot and humid, it was hot tells us nothing; it is a tautology. And we can state this as a general principle: φ entails ψ if and only if φ → ψ is a tautology—in notation, φ ⇒ ψ if and only if ⇒ φ → ψ. Either way we are saying that we fail to have φ true and ψ false not merely in the actual world but in all possible worlds.
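
In the propositional case, the relevant possible worlds can be represented simply by assignments of truth values to sentence letters, so a claim like this can be checked by enumeration. The following sketch in Python is only an illustration; the letters H and M, standing in for It was hot and It was humid, are labels chosen here and are not part of the text's notation:

from itertools import product

def arrow(p, q):
    # Material conditional: false only when p is true and q is false
    return (not p) or q

# H: "It was hot", M: "It was humid" (illustrative labels)
assignments = product([True, False], repeat=2)

# "If it was hot and humid, it was hot" is true on every assignment,
# so "It was hot and humid" entails "It was hot".
print(all(arrow(h and m, h) for h, m in assignments))   # True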

Since to be a tautology is to be a valid conclusion from no premises at all, the principle just stated provides a partial account of when a conditional is a valid conclusion. To cover cases where there are premises, we use the idea of implication relative to a set of premises. For example, a weather forecaster might say that the passing of a front implies rain, intending to rest this relation between the passing of the front and rain on certain assumptions about the conditions of the atmosphere and laws of meteorology. And when a scientific hypothesis is said to imply a certain result for an experimental test, this implication is based on certain assumptions about the behavior of the experimental setup. In such cases we say that a sentence ψ cannot be false when a sentence φ is true, provided that certain further assumptions Γ are true as well. But this is just to say that ψ is entailed by φ taken together with Γ—i.e., that Γ, φ ⇒ ψ. So relative implication is really just entailment with one premise singled out for special attention, something it is quite reasonable to do when, as in the examples above, the set Γ of further premises is large or lacks definite boundaries.

Another way of separating one assumption from a group of others is to make the conclusion conditional upon it. For example, we might say that, based on certain assumptions about the weather, we can conclude that it will rain if the front passes or that, based on assumptions about the experimental setup, we can conclude that an experiment will yield a certain result if our hypothesis is true. But these two ways of giving special attention to one of a group of assumptions are equivalent—that is, a conditional is a valid conclusion from given premises if and only if its antecedent implies its consequent relative to those premises. And this gives us our law for the conditional as a conclusion:

Γ ⇒ φ → ψ if and only if Γ, φ ⇒ ψ.

To see the truth of this law more formally, note that an entailment Γ ⇒ φ → ψ will hold if and only if there is no possible world in which φ → ψ is false while all members of Γ are true. But the sort of possible world that this rules out is one in which ψ is false while φ and the members of Γ are all true; and to rule out such a possibility is to say that Γ, φ ⇒ ψ.
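
In the same spirit, both sides of the law can be computed and compared for particular propositional sentences by surveying truth-value assignments. The sketch below is again only an illustration in Python; the entails helper and the sample choice of Γ, φ, and ψ are assumptions made for this example, not part of the text:

from itertools import product

LETTERS = ("P", "Q")

def entails(premises, conclusion):
    # Γ ⇒ ψ: no assignment makes every member of Γ true and ψ false
    for values in product([True, False], repeat=len(LETTERS)):
        world = dict(zip(LETTERS, values))
        if all(p(world) for p in premises) and not conclusion(world):
            return False
    return True

def arrow(p, q):
    # The material conditional p → q, as a function of an assignment
    return lambda w: (not p(w)) or q(w)

# Sample sentences (illustrative only): Γ = {P → Q}, φ = P, ψ = Q
gamma = [arrow(lambda w: w["P"], lambda w: w["Q"])]
phi = lambda w: w["P"]
psi = lambda w: w["Q"]

left = entails(gamma, arrow(phi, psi))   # Γ ⇒ φ → ψ
right = entails(gamma + [phi], psi)      # Γ, φ ⇒ ψ
print(left, right)                       # True True: the two sides agree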

Reading the law from right to left, we move a premise past the sign ⇒, making the conclusion conditional on it. We will use the term conditionalization for this operation. Any result of the process is a conditionalization of the argument, and we will sometimes say, more specifically, that it is a conditionalization on the premise that is moved.

The law for the conditional as a conclusion tells us that an argument Γ / φ → ψ is valid if and only if the argument Γ, φ / ψ is valid. This will lead us to consider the latter argument in cases where we do not know the premise φ to be true. In such cases, Γ, φ / ψ will be an argument concerning a hypothetical situation, a hypothetical argument in the sense introduced in 4.2.2. To illustrate this, we will modify an example used there. We can see the validity of the argument

Ann and Bill were not both home without the car being in the driveway
The car was not in the driveway
If Ann was at home, Bill wasn’t

by noting the validity of

Ann and Bill were not both home without the car being in the driveway
The car was not in the driveway
Ann was at home
Bill wasn’t at home

The first argument is a conditionalization of the second, and the law for the conditional as a conclusion tells us that the first is valid if and only if the second is. Someone who offers the first argument is unlikely to know whether or not Ann was at home, for if that were known there would be no reason to assert a merely conditional conclusion. Consequently, Ann was at home describes a situation the arguer will regard as hypothetical, and the second argument can be described as a hypothetical argument. This means that we establish conditionals the way we established disjunctions in the last chapter, as compounds that serve to state categorically the upshot of a hypothetical argument.
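
Using the sentence letters that appear in the derivations below, A for Ann was at home, B for Bill was at home, and C for The car was in the driveway, both arguments can also be given a quick semantic check by surveying truth-value assignments. The following Python sketch is a spot check of validity, not a derivation, and the helper valid is introduced only for this illustration:

from itertools import product

def valid(premises, conclusion):
    # No assignment of truth values to A, B, C makes the premises
    # all true and the conclusion false
    for a, b, c in product([True, False], repeat=3):
        if all(p(a, b, c) for p in premises) and not conclusion(a, b, c):
            return False
    return True

# A: Ann was at home, B: Bill was at home, C: the car was in the driveway
prem1 = lambda a, b, c: not ((a and b) and not c)   # ¬((A ∧ B) ∧ ¬C)
prem2 = lambda a, b, c: not c                       # ¬C

# First argument: conclusion is the conditional A → ¬B
print(valid([prem1, prem2], lambda a, b, c: (not a) or (not b)))        # True

# Second, hypothetical argument: A added as a premise, conclusion ¬B
print(valid([prem1, prem2, lambda a, b, c: a], lambda a, b, c: not b))  # True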

We apply this idea in derivations when we plan for a goal that is a conditional by setting out to reach it by a hypothetical argument. Our rule embodying this approach is Conditional Proof (CP); it is shown in Figure 5.3.1-1.

│...
││...
││
││
││
││
│├─
││φ → ψ
│...

   →

│...
││...
│││φ
││├─
│││
│││
││├─
│││ψ n
│├─
n CP ││φ → ψ
│...

Fig. 5.3.1-1. Developing a derivation by planning for a conditional at stage n.

When we apply CP, we add the antecedent of the conditional goal as a supposition and set its consequent as a new goal. We thus plan to carry out, in a vertical direction, the transition indicated by the arrow in the conditional.

As an example, here is a derivation for the argument above.

│¬ ((A ∧ B) ∧ ¬ C) 2
│¬ C (2)
├─
││A (3)
│├─
2 MPT ││¬ (A ∧ B) 3
3 MPT ││¬ B (4)
││●
│├─
4 QED ││¬ B 1
├─
1 CP │A → ¬ B

Notice that the proximate argument of the gap after CP is applied is ¬ ((A ∧ B) ∧ ¬ C), ¬ C, A / ¬ B. That is, the ultimate argument of the derivation is a conditionalization on A of the proximate argument that results from CP. In short, when we apply CP, we plan to put ourselves in a position to conditionalize.

Of course, whenever we have premises, we are in a position to conditionalize, and the validity of the argument we have just considered establishes the validity of the result of conditionalization on its second premise: ¬ ((A ∧ B) ∧ ¬ C) / ¬ C → (A → ¬ B). This argument might be put into English as follows:

Ann and Bill were not both home without the car being in the driveway
Unless the car was in the driveway, Bill wasn’t home if Ann was home

A derivation for it will incorporate the derivation above, preceded by an initial use of CP.

│¬ ((A ∧ B) ∧ ¬ C) 3
├─
││¬ C (3)
│├─
│││A (4)
││├─
3 MPT │││¬ (A ∧ B) 4
4 MPT │││¬ B (5)
│││●
││├─
5 QED │││¬ B 2
│├─
2 CP ││A → ¬ B 1
├─
1 CP │¬ C → (A → ¬ B)

After stage 2, we are making two suppositions—that the car is not in the driveway and that Ann is home—and we are thus considering a situation that is doubly hypothetical. And, in general, the most natural way of establishing the validity of a doubly conditional conclusion is by way of such a doubly hypothetical argument.
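
The conditionalization on the second premise can be given the same kind of spot check. The sketch below, again an illustrative bit of Python using the symbolization of the derivation above, confirms that the doubly conditional conclusion is entailed by the single remaining premise:

from itertools import product

# A: Ann was at home, B: Bill was at home, C: the car was in the driveway
premise = lambda a, b, c: not ((a and b) and not c)      # ¬((A ∧ B) ∧ ¬C)
conclusion = lambda a, b, c: c or ((not a) or (not b))   # ¬C → (A → ¬B)

# Valid: no assignment makes the premise true and the conclusion false
print(all((not premise(a, b, c)) or conclusion(a, b, c)
          for a, b, c in product([True, False], repeat=3)))   # True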

Glen Helman 25 Aug 2005