3.2.2. Drawing negative conclusions

The law for negation as a conclusion

Γ ⊨ ¬ φ if and only if Γ, φ ⊨ ⊥

describes the conditions under which an entailment of the form Γ ⊨ ¬ φ holds. An example may help in thinking about this law. The argument

Ann and Bill were not both home without the car being in the driveway
Ann was home but the car was not in the driveway
──────────────────────────────────────────────────────────────────────
Bill was not home

is valid, and seeing that it is valid comes to the same thing as seeing that Bill could not have been home if the premises are true. But to see this is to see that the claim Bill was home is excluded by the premises of the argument. So this argument with its negative conclusion is valid because the conclusion denies something that is excluded by the premises.

This connection between validity and exclusion is just what the law for negation as a conclusion states. For a reductio entailment Γ, φ ⊨ ⊥ is the way we capture exclusion in terms of entailment: Γ excludes φ if adding φ to Γ would enable us to reach an absurd conclusion. And the law tells us that we can validly conclude a negation ¬ φ when we can reduce to absurdity the result of adding φ to the premises Γ. So we can say that the example above is valid due to the inconsistency of the three sentences

Ann and Bill were not both home without the car being in the driveway

Ann was home but the car was not in the driveway

Bill was home

We can reduce these claims to absurdity by noting that the second and third together imply Ann and Bill were both home without the car being in the driveway and that this is what the first denies.

Although this reduction to absurdity shows the inconsistency of the full set of sentences from which we draw the absurd conclusion, we focus attention on the last one of them to draw a negative conclusion from the first two. And in general, the entailment Γ, φ ⊨ ⊥ shows the inconsistency of the full set containing the members of Γ together with φ, but we focus attention on φ when we say it is excluded by, or is inconsistent with, Γ. We can focus attention on a single sentence when speaking of reduction to absurdity itself by saying that the argument Γ, φ / ⊥ reduces φ to absurdity given Γ. And this allows us to restate the law for negation as a conclusion in another way: we can validly conclude a negation ¬ φ from premises Γ when we can reduce φ to absurdity given the premises Γ. In the example above, we reduced Bill was home to absurdity given the two premises of the original argument.
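
This reasoning can also be written out and checked mechanically. The following sketch renders the example in the Lean proof assistant; it is only an illustration in notation of our own choosing (Lean writes ⊥ as False and defines ¬ φ as φ → False), with A, B, and C standing for Ann was home, Bill was home, and the car was in the driveway, and with hypothesis names we have invented.

-- Illustrative Lean sketch (not part of this text's official notation).
-- A: Ann was home, B: Bill was home, C: the car was in the driveway.
example (A B C : Prop)
    (p1 : ¬ (A ∧ B ∧ ¬ C))    -- Ann and Bill were not both home without the car in the driveway
    (p2 : A ∧ ¬ C)            -- Ann was home but the car was not in the driveway
    : ¬ B := by
  intro hB                    -- suppose Bill was home (the supposition of the reductio)
  exact p1 ⟨p2.1, hB, p2.2⟩   -- the three claims together are absurd, so the supposition is refuted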

As a rule for sequent proofs, the principle for negation as a conclusion would take the following form:

Γ, φ ⊨ ⊥
───────────── neg. as concl.
Γ ⊨ ¬ φ

And this shows the effect we want a corresponding derivation rule to have on a gap in a derivation. There is no branching, but a premise is added in moving up from the bottom and the required conclusion is strengthened to ⊥.
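
As an aside, the content of this rule can be checked at the level of single sentences in a Lean sketch of our own (with a single sentence γ standing in for the premises Γ): since ¬ φ is φ → False in Lean, the move from γ, φ ⊨ ⊥ to γ ⊨ ¬ φ amounts to repackaging the same information.

-- Sketch of the rule's shape with one background premise γ standing in for Γ.
-- From γ, φ ⊨ ⊥ (here: γ → φ → False) we obtain γ ⊨ ¬ φ (here: γ → ¬ φ).
example (γ φ : Prop) (h : γ → φ → False) : γ → ¬ φ :=
  fun hγ hφ => h hγ hφ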

To implement this idea in derivations, we must add φ as a further resource in the child gap. Unlike resources added through Ext, this added resource will generally go beyond information contained in the premises. It is a genuine addition to the claims made by the premises, amounting to a further assumption for the purposes of the argument. Such further assumptions are often called suppositions and the verb suppose is used to introduce them when putting this sort of deductive reasoning into words. Suppositions can have a variety of roles in deductive reasoning. In the rules Lemma and LFR of 2.4, a lemma is introduced as a supposition in one gap of a derivation. In those rules this supposition represents a resource that we have on loan, a loan that is paid if we are able to prove the lemma in another gap. When we suppose φ in order to prove ¬ φ, we make the supposition in order to refute it by reducing it to absurdity. That is, we make the supposition in order to consider a possibility, and we go on to rule out the possibility on the basis of other assumptions. We will encounter still other uses of suppositions in later chapters.

The rule that implements this idea in derivations will be called Reductio Ad Absurdum or RAA. It is shown in Figure 3.2.2-1.

│⋯
││⋯
││
││
││
││
││
││
│├─
││¬ φ
│⋯

   →

│⋯
││⋯
││
│││φ
││├─
│││
││├─
│││⊥ n
│├─
n RAA ││¬ φ
│⋯

Fig. 3.2.2-1. Developing a derivation by planning for a negation at stage n.

This rule leads us to develop a gap by adding a supposition and, at the same time, changing our goal to ⊥. The part of the derivation these changes affect is marked by a scope line, and the added resource is marked off at the top by a horizontal line.

If we state this rule for tree-form proofs, it takes the following form (which you should compare to the analogous diagram for the rule Lem of 2.4.1):

 φ
 ⋮
 ⊥
──── RAA
¬ φ

This shows a pattern of argument in which we conclude ¬ φ from the premise ⊥. But that description would apply also to the rule EFQ, so it does not capture all that is going on here. The conclusion ¬ φ is, in general, weaker than ⊥. And the rule for negation as a conclusion tells us that the particular way it is weaker licenses us to drop φ from our assumptions. Since the conclusion rules out no case where φ is false, it need no longer depend on an assumption φ that rules out such cases.
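
The discharge of the supposition can be seen in another Lean sketch of ours: because ¬ φ is defined there as φ → False, wrapping a derivation of ⊥ from a hypothesis hφ : φ into a function yields a proof of ¬ φ that no longer counts φ among its assumptions.

-- Sketch of discharge (our own example): the hypothesis hφ is used to reach False,
-- but the finished proof of ¬ φ depends only on h1 and h2, not on any assumption φ.
example (φ ψ : Prop) (h1 : ¬ (φ ∧ ψ)) (h2 : ψ) : ¬ φ :=
  fun hφ => h1 ⟨hφ, h2⟩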

As with other rules, the form of RAA in tree-form proofs explains the numerical annotations for it in derivations. A stage number is placed to the right of ⊥, since it is the genuine premise from which ¬ φ is concluded. The supposition φ is not a premise but plays a different role, so no stage number is added to its right (though one might appear later if it is exploited inside the gap as it develops further). The fact that the supposition is discharged when we draw a conclusion from ⊥ is shown in the derivation simply by the fact that its scope line ends with ⊥.

Once we have begun a reductio argument, we have ⊥ as our goal and we must look for ways of reaching it. The only way we have in our rules so far is QED, but that requires that we have ⊥ among our resources. While it is, of course, possible that our new supposition is ⊥ or that ⊥ was already among our resources, we would not expect this to happen in general. Usually, we will need to make use of both the supposition and the pre-existing resources, and in particular of some negative claims among them. Our full discussion of the use of negative resources will come only in 3.3, but the core principle for using such resources is one we can consider now.

One of the traditional laws of logic is the law of non-contradiction. It is sometimes also referred to as the law of contradiction when the focus is simply on the fact that it is a law for concluding something from a contradictory pair rather than on the fact that what we conclude is that the two cannot both be true. We know it as the principle that ¬ φ and φ are mutually exclusive—or, in the form most relevant at the moment, that ¬ φ, φ ⊨ ⊥.

This idea lies behind a pattern of argument that we will call Non-contradiction or Nc:

¬ φ   φ
─────── Nc
⊥
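
In the Lean sketches we have been using for illustration, Nc is simply the application of a negative resource to the sentence it negates:

-- Non-contradiction: a negation and the sentence it negates yield ⊥ (False in Lean).
example (φ : Prop) (h1 : ¬ φ) (h2 : φ) : False := h1 h2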

This pattern of argument will appear in derivations as a way of completing a reductio argument:

│⋯
│¬ φ [available]
│⋯
│φ [available]
│⋯
││⋯
││
│├─
││⊥
│⋯

   →

│⋯
│¬ φ (n)
│⋯
│φ (n)
│⋯
││⋯
││
│├─
n Nc ││⊥
│⋯

Fig. 3.2.2-2. Closing the gap of a reductio argument one of whose resources negates another.

Notice that, as with other rules that close gaps, the resources required to apply this rule need only be available, and they are marked with parenthesized stage numbers. The latter point is moot, as it was with QED and EFQ, since the gap closes. And, in a way, the possibility of using available but inactive resources is moot also. Once we have the further rules of 3.3, we will need this rule only when φ is an unanalyzed component (though it will be usable and useful in other cases, too). And we will never have rules for exploiting unanalyzed components or their negations, so such resources will be active whenever they are available.
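
To see RAA and Nc working together in the simplest sort of case, consider the derivation of a double negation ¬ ¬ φ from φ. A Lean sketch of it (again our own rendering, with the rules of this section noted in comments) runs as follows:

-- Suppose ¬ φ in order to refute it; the available resource φ then closes the gap.
example (φ : Prop) (h : φ) : ¬ ¬ φ := by
  intro hn      -- RAA: suppose ¬ φ and aim at ⊥ (False)
  exact hn h    -- Nc: the supposition ¬ φ and the resource φ together yield ⊥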

Glen Helman 01 Aug 2011