1.4.2. Laws for entailment

Most of our concern with entailment will be not with particular examples but with general laws. Most of these laws will be generalizations about specific logical forms, but some very general ones can be stated now (and a few of them appeared already in exercise 1.1.x.1).

We will begin with single-premised entailment—i.e., with implication. Implication is reflexive in the sense that any sentence φ implies itself, and it is transitive in the sense that, if a sentence χ is implied by a sentence ψ that is in turn implied by a sentence φ, then χ is also implied directly by φ. That is,

φ ⇒ φ; and
if φ ⇒ ψ and ψ ⇒ χ, then φ ⇒ χ

for any sentences φ, ψ, and χ. Notice that the second of these can equally well be described as saying that a sentence χ may be validly concluded from anything φ that implies a premise ψ from which χ may be validly concluded. In short, it tells us that validity is not destroyed if we replace the conclusion of a single-premised valid argument by anything it implies or replace the premise by anything that implies it. More graphically,

if φ / ψ is valid and ψ ⇒ χ, then φ / χ is valid; and
if ψ / χ is valid and φ ⇒ ψ, then φ / χ is valid.
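
We can illustrate these principles with a rough brute-force check in a small propositional model. The sketch below is only a toy (names such as implies and VALUATIONS are chosen here merely for illustration), but it displays reflexivity and transitivity in action.

```python
# A rough sketch of implication for a toy propositional language: sentences
# are represented as functions from truth valuations to truth values, and
# phi => psi is checked by surveying every valuation of the atoms A, B, C.
from itertools import product

ATOMS = ["A", "B", "C"]
VALUATIONS = [dict(zip(ATOMS, values))
              for values in product([True, False], repeat=len(ATOMS))]

def implies(phi, psi):
    """phi => psi: every valuation that makes phi true also makes psi true."""
    return all(psi(v) for v in VALUATIONS if phi(v))

phi = lambda v: v["A"] and v["B"]   # "A and B"
psi = lambda v: v["A"]              # "A"
chi = lambda v: v["A"] or v["C"]    # "A or C"

print(implies(phi, phi))                     # reflexivity: True
print(implies(phi, psi), implies(psi, chi))  # phi => psi and psi => chi: True True
print(implies(phi, chi))                     # so, by transitivity, phi => chi: True
```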

Laws somewhat analogous to reflexivity and transitivity apply to arguments with any sets of premises. What we will call the law for premises says that a sentence is entailed by any set of premises containing it. That is,

Γ, φ ⇒ φ

for any set Γ of sentences and any sentence φ. The analogue of the second law for single-premised arguments says that a set of premises that entails every premise of a valid argument also entails its conclusion: for any sets Γ and Δ and any sentence ψ,

if Γ ⇒ φ for each premise φ in Δ and Δ ⇒ ψ, then Γ ⇒ ψ

We will refer to this as the chain law since it enables us to link valid arguments together to get new valid arguments. These are not directly principles of reflexivity and transitivity, since those ideas make sense only for relations between things of the same sort, and entailment relates a set of sentences to a single sentence; but the relation between sets of sentences that holds when Γ entails every member of Δ is reflexive and transitive.
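
In the same toy propositional setting, both laws can be checked by extending the brute-force test to sets of premises. The sketch below is again only an illustration (entails and the sample sentences are names chosen for the example), but it verifies the law for premises and one small instance of the chain law.

```python
# A rough propositional sketch of entailment for sets of premises:
# Gamma => phi holds when every valuation making all members of Gamma
# true also makes phi true.
from itertools import product

ATOMS = ["A", "B", "C"]
VALUATIONS = [dict(zip(ATOMS, values))
              for values in product([True, False], repeat=len(ATOMS))]

def entails(premises, conclusion):
    return all(conclusion(v) for v in VALUATIONS
               if all(p(v) for p in premises))

a       = lambda v: v["A"]              # "A"
b       = lambda v: v["B"]              # "B"
a_and_b = lambda v: v["A"] and v["B"]   # "A and B"
b_or_c  = lambda v: v["B"] or v["C"]    # "B or C"

gamma = [a, b]        # premises: A, B
delta = [a_and_b]     # premises: A and B

# Law for premises: gamma entails each of its own members.
print(all(entails(gamma, p) for p in gamma))   # True

# Chain law: gamma entails every member of delta, and delta entails
# "B or C", so gamma entails "B or C" as well.
print(all(entails(gamma, d) for d in delta))   # True
print(entails(delta, b_or_c))                  # True
print(entails(gamma, b_or_c))                  # True
```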

We will consider two further general laws of entailment that follow from the law for premises and the chain law but are each valuable for special purposes. The first tells us that we can add premises without destroying the validity of an argument: for any sets Γ and Δ and any sentence φ,

if Γ ⇒ φ, then Γ, Δ ⇒ φ

This law should not be surprising because, in general, the more premises we have, the easier it is to validly conclude a given sentence. If we think of entailment as associating a collection of valid conclusions with any set of sentences, this law tells us that, as the set of premises increases, the set of valid conclusions will never decrease. Mathematicians apply the term monotonic to situations like this, so we will speak of this law as the principle of monotonicity for entailment.
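
In the toy propositional setting used above, adding a premise never spoils a check of this sort. The sketch below (again with names chosen merely for illustration) shows one such case.

```python
# A rough sketch of monotonicity: if Gamma => phi, then Gamma together with
# any further premises still entails phi.
from itertools import product

ATOMS = ["A", "B", "C"]
VALUATIONS = [dict(zip(ATOMS, values))
              for values in product([True, False], repeat=len(ATOMS))]

def entails(premises, conclusion):
    return all(conclusion(v) for v in VALUATIONS
               if all(p(v) for p in premises))

a      = lambda v: v["A"]              # "A"
c      = lambda v: v["C"]              # "C"
a_or_b = lambda v: v["A"] or v["B"]    # "A or B"

print(entails([a], a_or_b))      # A => A or B: True
print(entails([a, c], a_or_b))   # adding the premise C preserves validity: True
```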

Although monotonicity will play only an auxiliary role in our discussion of deductive reasoning, it is a distinguishing characteristic of deductive reasoning that such a principle holds. For, when reasoning is not risk free, additional data can show that an initially well-supported conclusion is false—and it can do this without undermining the original data on which we based our conclusion. If such further information were added to our premises, we would not expect the conclusion to still be well supported. Indeed, the risk in good but risky inference can be thought of as a risk that further information will undermine the quality of the inference, so risky inference (or, more precisely, the way the quality of such inference is assessed) is, in general, non-monotonic. This is true of inductive generalization and of inference to the best explanation of available data, but the term non-monotonic is most often applied to inferences that are based on features of typical or normal cases. One standard example is the argument from the premise Tweety is a bird to the conclusion Tweety flies. This conclusion is reasonable when the premise exhausts our knowledge of Tweety; but the inference is not free of risk, and the conclusion would no longer be reasonable if we were to add the premise that Tweety is a penguin.

The other side of the coin is that dropping premises can never help in deductive reasoning and may well destroy validity. But, while we cannot in general safely drop premises, we can drop a premise when it is entailed by others that we retain:

if Γ, φ ⇒ ψ and Γ ⇒ φ, then Γ ⇒ ψ

for any set Γ and any sentences φ and ψ. The term lemma can be used for a conclusion that is drawn not because it is of interest in its own right but because it helps us to draw further conclusions. This law tells us that anything we can conclude using an intermediate conclusion φ that is itself entailed by Γ is already a valid conclusion from the original premises Γ alone; it thus justifies the use of lemmas, and we will refer to it as the law for lemmas.
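
In the same toy setting, the law for lemmas can be checked directly: once the lemma is shown to follow from the remaining premises, it can be dropped without loss. The sketch below (with names chosen only for the example) runs one such check.

```python
# A rough sketch of the law for lemmas: if Gamma, phi => psi and Gamma => phi,
# then Gamma => psi, so the lemma phi may be dropped.
from itertools import product

ATOMS = ["A", "B"]
VALUATIONS = [dict(zip(ATOMS, values))
              for values in product([True, False], repeat=len(ATOMS))]

def entails(premises, conclusion):
    return all(conclusion(v) for v in VALUATIONS
               if all(p(v) for p in premises))

a       = lambda v: v["A"]               # "A"
b       = lambda v: v["B"]               # "B"
a_and_b = lambda v: v["A"] and v["B"]    # the lemma phi: "A and B"
b_and_a = lambda v: v["B"] and v["A"]    # the conclusion psi: "B and A"

gamma = [a, b]

print(entails(gamma + [a_and_b], b_and_a))   # Gamma, phi => psi: True
print(entails(gamma, a_and_b))               # Gamma => phi: True
print(entails(gamma, b_and_a))               # so Gamma => psi: True
```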

In summary, what these laws tell us about entailment is that (i) we can validly conclude any premise (law for premises), (ii) we can validly conclude anything entailed by valid conclusions from our premises (chain law), (iii) we can add premises without destroying validity (monotonicity), and (iv) we may safely drop from our premises lemmas that are entailed by the remaining premises (law for lemmas).

Glen Helman 25 Aug 2005