1.4.7. Laws for entailment

Entailment holds in those cases of relative exhaustiveness where there is a single alternative, so a natural place to look for its laws is in the instances of the laws of repetition, cut, and monotonicity for single alternatives. With a minor exception in the case of cut, that is the source of the following laws:

Law for premises. Γ, φ ⇒ φ (for any sentence φ and any set Γ);

Law for lemmas. If Γ, φ ⇒ ψ and Γ ⇒ φ, then Γ ⇒ ψ (for any sentences φ and ψ and any set Γ);

Monotonicity. If Γ ⇒ φ, then Γ, Δ ⇒ φ (for any sentence φ and any sets Γ and Δ).
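The three laws above can be spot-checked in a toy propositional semantics. The sketch below (all names and example sentences are illustrative, not from the text) represents a sentence as a function from a valuation to a truth value and takes Γ ⇒ φ to hold when every valuation satisfying all of Γ satisfies φ:

```python
from itertools import product

# Toy semantics: a sentence is a function from a valuation (a dict of
# atom truth values) to a bool; Γ ⇒ φ holds when every valuation
# satisfying all premises in Γ also satisfies φ.
ATOMS = ["A", "B", "C"]

def entails(premises, conclusion):
    """Return True iff premises ⇒ conclusion under all valuations of ATOMS."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

A = lambda v: v["A"]
B = lambda v: v["B"]
C = lambda v: v["C"]
A_implies_B = lambda v: (not v["A"]) or v["B"]
A_and_B = lambda v: v["A"] and v["B"]

gamma = [A, A_implies_B]

# Law for premises: Γ, φ ⇒ φ.
assert entails(gamma + [C], C)

# Law for lemmas: Γ, B ⇒ A∧B and Γ ⇒ B, hence Γ ⇒ A∧B.
assert entails(gamma + [B], A_and_B) and entails(gamma, B)
assert entails(gamma, A_and_B)

# Monotonicity: Γ ⇒ B still holds after adding the premise C.
assert entails(gamma, B)
assert entails(gamma + [C], B)
```

Such a brute-force check only confirms instances of the laws for particular sentences, of course; the laws themselves hold for arbitrary Γ, Δ, φ, and ψ.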

The law for premises and the law of monotonicity for entailment are simply instances of the laws of repetition and monotonicity for relative exhaustiveness where certain sets have been chosen to be empty or have a single member. No such simple restrictions will convert the cut law into a law for entailment; but, if Γ, φ ⇒ ψ and Γ ⇒ φ then we have Γ, φ ⇒ ψ and Γ ⇒ φ, ψ by applying monotonicity to the second, and an instance of cut will give us Γ ⇒ ψ.

The first law is renamed to reflect the role it will usually play: to justify concluding a premise. The name of the second also refers to its function. The term lemma can be used for a conclusion that is drawn not because it is of interest in its own right but because it helps us to draw further conclusions. This law tells us that if we add to our premises Γ a lemma φ that we can conclude from them, anything ψ we can conclude using the enlarged set of premises can be concluded from the original set Γ. Or, to put it in a way that suggests its relation to monotonicity, we can drop from a set of premises any sentence that is entailed by the rest.

A more direct inverse to monotonicity would be a principle that allowed us to drop any set of premises provided each of its members was entailed by the premises that remain. This is a legitimate principle, and it follows from the law for lemmas in cases where a finite set of premises is dropped. However, rather than stating this generalized form of the law for lemmas, we will consider a related principle that has a slightly different function.

Chain law. If Γ ⇒ ψ for each member ψ of Δ and Δ ⇒ φ, then Γ ⇒ φ (for any sentence φ and any sets Γ and Δ).

This follows from the generalized form of the law for lemmas together with monotonicity and, in combination with the law for premises, it implies both of those laws. We refer to this principle as the chain law since it enables us to link valid arguments together to get new valid arguments: if premises Γ enable us to conclude each of the premises Δ of a second argument, then the conclusion of that second argument follows from Γ directly.
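An instance of the chain law can be checked in the same toy truth-table semantics used for the earlier laws (again, the particular sentences are illustrative choices, not from the text):

```python
from itertools import product

# Toy semantics: Γ ⇒ φ holds when every valuation satisfying all of Γ
# satisfies φ. Here we verify one instance of the chain law:
# if Γ ⇒ ψ for each ψ in Δ and Δ ⇒ φ, then Γ ⇒ φ.
ATOMS = ["A", "B"]

def entails(premises, conclusion):
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

A = lambda v: v["A"]
B = lambda v: v["B"]
A_implies_B = lambda v: (not v["A"]) or v["B"]
A_and_B = lambda v: v["A"] and v["B"]

gamma = [A, A_implies_B]   # premises of the first argument
delta = [A, B]             # premises of the second argument
phi = A_and_B              # conclusion of the second argument

assert all(entails(gamma, psi) for psi in delta)  # Γ ⇒ each member of Δ
assert entails(delta, phi)                        # Δ ⇒ φ
assert entails(gamma, phi)                        # hence Γ ⇒ φ
```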

This idea is similar to the idea behind the transitivity of implication (and has that as a special case), and the law for premises is similarly related to the reflexivity of implication. However, these laws for entailment are not directly principles of reflexivity and transitivity since those ideas only make sense for relations between the same sorts of things. Let us define a relation of set entailment by saying that a set Γ entails a set Δ if Γ entails every member of Δ. Set entailment comes to the same thing as relative exhaustiveness when Δ has only one member, but otherwise the two are different. The law for premises tells us that set entailment is reflexive, and the chain law tells us that it is transitive. And the reflexivity and transitivity of set entailment can be shown to give a complete account of the general laws of entailment.
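The definition of set entailment, and the reflexivity and transitivity claimed for it, can likewise be spot-checked in the toy semantics (the helper `set_entails` and the particular sets are illustrative):

```python
from itertools import product

# Toy semantics as before. Set entailment: Γ set-entails Δ iff Γ entails
# every member of Δ. We spot-check reflexivity and transitivity.
ATOMS = ["A", "B"]

def entails(premises, conclusion):
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

def set_entails(gamma, delta):
    return all(entails(gamma, psi) for psi in delta)

A = lambda v: v["A"]
B = lambda v: v["B"]
A_implies_B = lambda v: (not v["A"]) or v["B"]
A_and_B = lambda v: v["A"] and v["B"]

G = [A, A_implies_B]
D = [A, B]
E = [A_and_B]

assert set_entails(G, G)                        # reflexivity
assert set_entails(G, D) and set_entails(D, E)  # two links of a chain
assert set_entails(G, E)                        # transitivity
```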

Finally, let us look briefly at the law of monotonicity for entailment. Although it will play only an auxiliary role in our discussion of deductive reasoning, it is a distinguishing characteristic of deductive reasoning that such a principle holds. For, when reasoning is not risk free, additional data can show that an initially well-supported conclusion is false without undermining the original premises on which the conclusion was based. If such further data were added to the original premises, the result would no longer support the conclusion.

Indeed, the risk in good but risky inference can be thought of as a risk that further information will undermine the quality of the inference, so risky inference (or, more precisely, the way the quality of such inference is assessed) is, in general, non-monotonic in the sense that additions to the premises can reduce the set of conclusions that are justified. This is true of inductive generalization and of inference to the best explanation of available data, but the term non-monotonic is most often applied to inferences that are based on features of typical or normal cases. One standard example is the argument from the premise Tweety is a bird to the conclusion Tweety flies. This conclusion is reasonable when the premise exhausts our knowledge of Tweety; but the inference is not free of risk, and the conclusion would no longer be reasonable if we were to add the premise that Tweety is a penguin.
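The Tweety example can be sketched as a tiny default rule (the function and fact names below are hypothetical, chosen only to illustrate the contrast with monotonicity):

```python
# Toy default reasoning (all names hypothetical): the default "birds fly"
# is withdrawn when the set of premises grows to include "penguin".
def conclusions(facts):
    drawn = set(facts)
    if "bird" in facts and "penguin" not in facts:
        drawn.add("flies")   # default: birds fly, absent contrary information
    return drawn

assert "flies" in conclusions({"bird"})
# Adding a premise removes a conclusion; by monotonicity,
# entailment never behaves this way.
assert "flies" not in conclusions({"bird", "penguin"})
```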

Glen Helman 28 Aug 2008