Phi 270
Fall 2013

1.4.7. Laws for entailment

Most of the laws of deductive reasoning we will study will be generalizations about specific logical forms that will be introduced chapter by chapter, but some very general laws can be stated at this point. We have already seen some of these. We have just seen the laws tying inconsistency to absurdity and alternatives to assumptions. And the principles of reflexivity and transitivity for implication discussed in 1.2.3 can be generalized to provide basic laws for entailment and relative exhaustiveness. However, we will look only at the case of entailment.

Two basic laws suffice to capture the properties of entailment considered in its own right:

Law for premises. Any set of assumptions entails each of its members. That is, Γ, φ ⊨ φ (for any sentence φ and any set Γ).

Chain law. A set of assumptions entails anything that is entailed by things the set entails. That is, if Γ ⊨ φ for each assumption φ in Δ and Δ ⊨ ψ, then Γ ⊨ ψ (for any sentence ψ and any sets Γ and Δ).
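Both laws can be checked directly against the definition of entailment as truth preservation across possible worlds. The following sketch is not part of the text's apparatus; the sentence letters, truth values, and function name are made up purely for illustration. It models possible worlds as truth-value assignments and verifies one instance of each law:

def entails(worlds, gamma, phi):
    # Gamma entails phi: every world making all of gamma true makes phi true.
    return all(w[phi] for w in worlds if all(w[p] for p in gamma))

worlds = [
    {"A": True,  "B": True,  "C": True},
    {"A": False, "B": True,  "C": True},
    {"A": False, "B": False, "C": True},
    {"A": False, "B": False, "C": False},
]

# Law for premises: a set of assumptions entails each of its members.
assert entails(worlds, ["A", "B"], "A") and entails(worlds, ["A", "B"], "B")

# Chain law: {"A"} entails every member of {"B"}, and {"B"} entails "C",
# so {"A"} entails "C".
assert entails(worlds, ["A"], "B")
assert entails(worlds, ["B"], "C")
assert entails(worlds, ["A"], "C")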

Think about the relation which holds between sets Γ and Δ when Γ entails all members of Δ. Although a relation between sets, this is different from relative exhaustiveness because it says that the cumulative content of Δ (and not merely its shared content) is included in the cumulative content of Γ. That is, we are looking at both Γ and Δ as sets of assumptions. The laws above tell us that this relation is both reflexive and transitive. For the law for premises tells us that any set entails every member of itself. And, if Γ entails every member of Δ and Δ entails every member of Ξ, then Γ also entails every member of Ξ by the chain law. (On the other hand, relative exhaustiveness is neither reflexive nor transitive.)

The two principles above have as a consequence two further principles that concern the addition and subtraction of assumptions and will play an important role in our study of entailment:

Monotonicity. Adding assumptions never undermines entailment. That is, if Γ ⊨ φ, then Γ, Δ ⊨ φ (for any sets Γ and Δ and any sentence φ).

Law for lemmas. Any assumption that is entailed by other assumptions may be dropped without undermining entailment. That is, if Γ, φ ⊨ ψ and Γ ⊨ φ, then Γ ⊨ ψ (for any sentences φ and ψ and any set Γ).

Each of these principles is based on both the law for premises and the chain law. In the case of the first, the law for premises tells us that Γ together with Δ entails every member of Γ alone, so if we also know that Γ ⊨ φ, the chain law tells us that Γ, Δ ⊨ φ. The assumption of the second principle that Γ ⊨ φ combines with the law for premises to tell us that Γ entails every member of the set of assumptions consisting of Γ together with φ, and the chain law then tells us that Γ alone entails anything ψ that is entailed by this enlarged set of assumptions.
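Spelled out step by step, the two derivations run as follows (this is only a restatement of the reasoning just given). For monotonicity: the set Γ, Δ entails each member of Γ by the law for premises; Γ ⊨ φ by supposition; so Γ, Δ ⊨ φ by the chain law, with Γ playing the role of the set Δ in the statement of that law. For the law for lemmas: Γ entails each member of the set Γ, φ by the law for premises together with the supposition that Γ ⊨ φ; Γ, φ ⊨ ψ by supposition; so Γ ⊨ ψ by the chain law, with Γ, φ playing the role of the set Δ.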

The term lemma can be used for something that we conclude not because it is of interest in its own right but because it helps us to draw further conclusions. The second law tells us that if we add to our premises Γ a lemma φ that we can conclude from them, anything ψ we can conclude using the enlarged set of premises can be concluded from the original set Γ.

The idea behind the law of monotonicity is that adding assumptions can only make it harder to find a possible world that separates the assumptions from the conclusion, so, if no possible world will separate Γ from φ, we can be sure that no world will separate from φ the larger set of assumptions we get by adding some further assumptions Δ. The term monotonic is applied to trends that never change direction. More specifically, it is applied to a quantity that does not both increase and decrease in response to changes in another quantity. In this case, it reflects the fact that adding assumptions will never lead to a decrease in the range of conclusions that are valid.
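In a toy model of the sort sketched after the chain law above (again purely illustrative, with hypothetical names), the point can be checked directly: the worlds making all of Γ and Δ true are among the worlds making all of Γ true, so a conclusion true in all of the latter is true in all of the former.

def entails(worlds, gamma, phi):
    # Gamma entails phi: every world making all of gamma true makes phi true.
    return all(w[phi] for w in worlds if all(w[p] for p in gamma))

worlds = [
    {"A": True,  "B": True,  "C": True},
    {"A": False, "B": True,  "C": True},
    {"A": False, "B": False, "C": False},
]

# "A" alone entails "C", and adding the further assumption "B" cannot
# undermine this: the worlds where both "A" and "B" are true are among
# the worlds where "A" is true.
assert entails(worlds, ["A"], "C")
assert entails(worlds, ["A", "B"], "C")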

It is a distinguishing characteristic of deductive reasoning that a principle of monotonicity holds. For, when reasoning is not risk free, additional data can show that an initially well-supported conclusion is false and do so without undermining the original premises on which the conclusion was based. But then, if such further data were added to the original premises, the resulting enlarged set of assumptions would no longer support the conclusion. This means that risky inference is, in general, non-monotonic in the sense that additions to the premises can reduce the set of conclusions that are justified.

This is true of inductive generalization and of inference to the best explanation of available data, but the term non-monotonic is most often applied to another sort of non-deductive inference, an inference in which features of typical or normal cases are applied when there is no evidence to the contrary. One standard example is the argument from the premise Tweety is a bird to the conclusion Tweety flies. This conclusion is reasonable when the premise exhausts our knowledge of Tweety; but the inference is not free of risk, and the conclusion would no longer be reasonable if we were to add the premise Tweety is a penguin.
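By way of contrast, the following sketch (again purely illustrative; the rule and the sentence names are made up and not part of the text) mimics this sort of default inference. The conclusion drawn from the single premise is withdrawn when a further premise is added, something the law of monotonicity rules out for entailment.

def flies_by_default(premises):
    # Defeasible rule: birds normally fly, but penguins are an exception.
    return "Tweety is a bird" in premises and "Tweety is a penguin" not in premises

assert flies_by_default({"Tweety is a bird"})
assert not flies_by_default({"Tweety is a bird", "Tweety is a penguin"})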

The law for premises and the chain law can be shown to give a complete account of the general laws of entailment in the sense that any relation between sets of sentences and sentences that obeys them is an entailment relation for some set of possible worlds and assignment of truth values to sentences in each world. But this is not to say that they provide a complete general account of deductive properties and relations, because our definitions of many of these in terms of entailment also used the ideas of contradiction and absurdity. The laws providing for inconsistency via absurdity and for alternatives via assumptions govern these ideas, but they were stated for relative exhaustiveness rather than entailment. In the next subsection, we will look at laws for ⊥. Laws for contradiction will be considered by way of the account of negation in 3.2.1. The basic idea is that the members of a pair of contradictory sentences each exclude the other and do so in the weakest way possible, in the sense that each is entailed by any set of assumptions that excludes the other.

Glen Helman 01 Aug 2013