1.4.6. Laws for entailment
Most of the laws of deductive reasoning we will study will be generalizations about specific logical forms that will be introduced chapter by chapter, but some very general laws can be stated at this point. We have already seen some of these. We have just seen the laws tying inconsistency to absurdity and alternatives to assumptions. And the principles of reflexivity and transitivity for implication discussed in 1.2.3 can be generalized to provide basic laws for entailment and conditional exhaustiveness. We will look first at the case of entailment.
Two basic laws suffice to capture the basic properties of entailment considered in its own right:
Law for premises. Any set of assumptions entails each of its members. That is, Γ, φ ⊨ φ (for any sentence φ and any set Γ).
Chain law. A set of assumptions entails anything entailed by things it entails. That is, if Γ ⊨ φ for each assumption φ in Δ and Δ ⊨ ψ, then Γ ⊨ ψ (for any sentence ψ and any sets Γ and Δ).
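These two laws can be checked concretely in a miniature model. The sketch below is only an illustration, not the machinery of the text: it treats a possible world as an assignment of truth values to a few atomic sentences, with the sentence names 'A', 'B', 'C' invented for the example.

```python
from itertools import product

# Toy model (illustrative): three atomic sentences, and a possible world
# is an assignment of truth values to them.
SENTENCES = ['A', 'B', 'C']
WORLDS = [dict(zip(SENTENCES, vals))
          for vals in product([True, False], repeat=len(SENTENCES))]

def entails(gamma, phi):
    """Gamma |= phi: no possible world makes every member of gamma true
    while making phi false."""
    return all(w[phi] for w in WORLDS if all(w[p] for p in gamma))

# Law for premises: a set entails each of its members.
assert all(entails({'A', 'B'}, member) for member in {'A', 'B'})

# Chain law (one instance): {'A', 'B'} entails each member of {'A'},
# and {'A'} |= 'A', so {'A', 'B'} |= 'A'.
delta = {'A'}
assert all(entails({'A', 'B'}, phi) for phi in delta)
assert entails(delta, 'A')
assert entails({'A', 'B'}, 'A')
```

Because the sentences here are independent atoms, the examples are simple, but the same definition of `entails` applies to any finite stock of worlds.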
Taken together, these laws tell us that the relation which holds between sets Γ and Δ when Γ entails all members of Δ is both reflexive and transitive. For the law for premises tells us that any set entails every member of itself. And, if Γ entails every member of Δ and Δ entails every member of Σ, then Γ also entails every member of Σ by the chain law. Although this reflexive and transitive relation is, like conditional exhaustiveness, a relation between sets of sentences, they are different relations, and we will see later that conditional exhaustiveness is neither reflexive nor transitive.
These two principles have as a consequence two further principles concerning the addition and subtraction of assumptions that will play an important role in our study of entailment:
Monotonicity. Adding assumptions never undermines entailment. That is, if Γ ⊨ φ, then Γ, Δ ⊨ φ (for any sets Γ and Δ and any sentence φ).
Law for lemmas. Any assumption that is entailed by other assumptions may be dropped without undermining entailment. That is, if Γ, φ ⊨ ψ and Γ ⊨ φ, then Γ ⊨ ψ (for any sentences φ and ψ and any set Γ).
Each of these principles is based on both the law for premises and the chain law. In the case of the first, the law for premises tells us that Γ together with Δ entails every member of Γ alone, so Γ, Δ ⊨ φ if Γ ⊨ φ by the chain law. The assumption of the second that Γ ⊨ φ combines with the law for premises to tell us that Γ entails every member of the result of adding the further assumption φ, and the chain law then tells us that Γ entails anything ψ entailed by this enlarged set of assumptions.
The term lemma can be used for a conclusion that is drawn not because it is of interest in its own right but because it helps us to draw further conclusions. The second law tells us that if we add to our premises Γ a lemma φ that we can conclude from them, anything ψ we can conclude using the enlarged set of premises can be concluded from the original set Γ.
The idea behind the law of monotonicity is that adding assumptions can only make it harder to find a possible world that divides the assumptions from the conclusion, so, if no possible world will divide Γ from φ, we can be sure that no world will divide from φ the larger set of assumptions we get by adding some further assumptions Δ. The term monotonic is applied to trends that never change direction. More specifically, it is applied to a quantity that does not both increase and decrease in response to changes in another quantity. In this case, it reflects the fact that adding assumptions will never lead to a decrease in the sets of alternatives rendered exhaustive by them and adding alternatives will never lead to a decrease in the sets of assumptions rendering them exhaustive.
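In a small enough model, monotonicity and the law for lemmas can be verified by brute force over every choice of Γ, Δ, φ, and ψ. The sketch below (a hedged illustration with invented atomic sentences, not a general proof) does exactly that:

```python
from itertools import product, chain, combinations

# Toy model: two atomic sentences; a world assigns each a truth value.
SENTENCES = ['A', 'B']
WORLDS = [dict(zip(SENTENCES, vals))
          for vals in product([True, False], repeat=len(SENTENCES))]

def entails(gamma, phi):
    """Gamma |= phi: every world making all of gamma true makes phi true."""
    return all(w[phi] for w in WORLDS if all(w[p] for p in gamma))

def subsets(xs):
    """All subsets of xs, as tuples."""
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Monotonicity: whenever gamma |= phi, also gamma U delta |= phi.
for gamma in subsets(SENTENCES):
    for delta in subsets(SENTENCES):
        for phi in SENTENCES:
            if entails(set(gamma), phi):
                assert entails(set(gamma) | set(delta), phi)

# Law for lemmas: if gamma, phi |= psi and gamma |= phi, then gamma |= psi.
for gamma in subsets(SENTENCES):
    for phi in SENTENCES:
        for psi in SENTENCES:
            if entails(set(gamma) | {phi}, psi) and entails(set(gamma), phi):
                assert entails(set(gamma), psi)
```

The exhaustive loops merely confirm the laws in this one model; the laws themselves hold for any set of possible worlds.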
It is a distinguishing characteristic of deductive reasoning that a principle of monotonicity holds. For, when reasoning is not risk free, additional data can show that an initially well-supported conclusion is false and do so without undermining the original premises on which the conclusion was based. If such further data were added to the original premises, the result would no longer support the conclusion. This means that risky inference is, in general, non-monotonic in the sense that additions to the premises can reduce the set of conclusions that are justified. This is true of inductive generalization and of inference to the best explanation of available data, but the term non-monotonic is most often applied to another sort of non-deductive inference, an inference in which features of typical or normal cases are applied when there is no evidence to the contrary. One standard example is the argument from the premise Tweety is a bird to the conclusion Tweety flies. This conclusion is reasonable when the premise exhausts our knowledge of Tweety; but the inference is not free of risk, and the conclusion would no longer be reasonable if we were to add the premise that Tweety is a penguin.
The law for premises and the chain law can be shown to give a complete account of the general laws of entailment in the sense that any relation between sets of sentences and sentences that obeys them is an entailment relation for some set of possible worlds and assignment of truth values to sentences in each world. But this is not to say that they provide a complete general account of deductive properties and relations, because our definitions of many of these in terms of entailment also used the ideas of contradiction and the absurdity ⊥. The laws providing for inconsistency via absurdity and alternatives via assumptions govern these ideas, but they were stated for conditional exhaustiveness rather than entailment. Although laws for inconsistency and contradictoriness might be stated in terms of entailment, doing so now would pointlessly anticipate later topics, so we will let the two laws we began with suffice.
Let us look briefly at conditional exhaustiveness. As noted earlier, it is neither reflexive nor transitive. Although Γ ⊨ Γ whenever Γ has at least one member, we have already seen that ∅ ⊭ ∅. And if conditional exhaustiveness were transitive, every sentence φ would imply every other sentence ψ since φ ⊨ φ, ψ and φ, ψ ⊨ ψ. In spite of this, we can state laws for conditional exhaustiveness that are somewhat analogous to the basic laws for entailment. First, two basic laws:
Repetition. A set of assumptions renders exhaustive any set of alternatives that it overlaps. That is, Γ, φ ⊨ φ, Δ (for any sentence φ and any sets Γ and Δ).
Chain law. If a set of sentences each of which is a sufficient exception to a claim of exhaustiveness itself renders exhaustive a set of sentences each of which is a sufficient additional assumption for the claim, the claim holds without exceptions or additional assumptions. Suppose (i) Γ ⊨ φ, Δ for each φ in Σ, (ii) Γ, ψ ⊨ Δ for each ψ in Θ, and (iii) Σ ⊨ Θ. Then Γ ⊨ Δ (for any sentences φ and ψ and any sets Γ, Δ, Σ, and Θ).
Although the first of these is similar to the law for premises, it is given a different name because this law is as much about alternatives as about assumptions. The metaphor of a chain does not apply very directly to the second law, but this law does play a role for conditional exhaustiveness that is analogous to the chain law for entailment. Its verbal statement is more complex than those of the other laws, and it may not be clear how it fits the symbolic statement. The idea is that condition (i) tells us that the claim of conditional exhaustiveness of Δ given Γ holds when we add to Δ any member φ of Σ as a further alternative (i.e., as an exception to the claim). Condition (iii) guarantees the exhaustiveness of Θ given Σ, and condition (ii) tells us that the exhaustiveness of Δ holds given Γ together with any member ψ of Θ as a further assumption. The law then holds because, if each member of Γ is true, then by (i) we must have at least one member of Δ true unless each member of Σ is true; and, if the latter is the case, by (iii) we must have at least one member of Θ true and, by (ii), this is enough to ensure that at least one member of Δ is true as we wished.
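The definition behind these laws, and the failures of reflexivity and transitivity noted above, can be made concrete in a toy model. In the sketch below (an illustration with invented atomic sentences, not the text's own apparatus), Γ ⊨ Δ is read as: every world making all of Γ true makes at least one member of Δ true.

```python
from itertools import product

# Toy model: two atomic sentences; a world assigns each a truth value.
SENTENCES = ['A', 'B']
WORLDS = [dict(zip(SENTENCES, vals))
          for vals in product([True, False], repeat=len(SENTENCES))]

def exhausts(gamma, delta):
    """Gamma |= Delta: every world making all of gamma true makes at
    least one member of delta true."""
    return all(any(w[q] for q in delta)
               for w in WORLDS if all(w[p] for p in gamma))

# Repetition: gamma, phi |= phi, delta -- the two sides overlap in 'A'.
assert exhausts({'A'}, {'A', 'B'})

# Failure of reflexivity for the empty set: every world makes all of no
# assumptions true, but none makes a member of no alternatives true.
assert not exhausts(set(), set())

# Failure of transitivity: A |= A, B and A, B |= B, yet A does not
# render {B} exhaustive (take the world where A is true and B false).
assert exhausts({'A'}, {'A', 'B'})
assert exhausts({'A', 'B'}, {'B'})
assert not exhausts({'A'}, {'B'})
```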
As with entailment, we will consider two laws that follow from this basic pair.
Monotonicity. Adding assumptions or alternatives never undermines conditional exhaustiveness. That is, if Γ ⊨ Δ, then Γ, Σ ⊨ Δ, Θ (for any sets Γ, Δ, Σ, and Θ).
Cut. An alternative may be dropped if adding it as an assumption is enough to render the remaining alternatives exhaustive. That is, if Γ, φ ⊨ Δ and Γ ⊨ φ, Δ, then Γ ⊨ Δ (for any sentence φ and any sets Γ and Δ).
The second is relatively close in form to the law for lemmas, but it is given a different name, as was the repetition law, because assumptions and alternatives play parallel roles in it. The significance of the term cut lies simply in its effect of dropping the sentence φ. The idea behind the law is that, given the truth of all members of Γ, at least one of the alternatives Δ must be true: in a case where φ is true, because Γ, φ ⊨ Δ; and in a case where φ is false, because Γ ⊨ φ, Δ and φ cannot be the alternative that is true.
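The cut law, like the entailment laws earlier, can be confirmed by brute force in a small model. The sketch below (a hedged illustration with invented atomic sentences) checks every choice of Γ, Δ, and φ:

```python
from itertools import product, chain, combinations

# Toy model: two atomic sentences; a world assigns each a truth value.
SENTENCES = ['A', 'B']
WORLDS = [dict(zip(SENTENCES, vals))
          for vals in product([True, False], repeat=len(SENTENCES))]

def exhausts(gamma, delta):
    """Gamma |= Delta: every world making all of gamma true makes at
    least one member of delta true."""
    return all(any(w[q] for q in delta)
               for w in WORLDS if all(w[p] for p in gamma))

def subsets(xs):
    """All subsets of xs, as tuples."""
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# Cut: if gamma, phi |= delta and gamma |= phi, delta, then gamma |= delta,
# checked for every gamma, delta, and phi in the toy model.
for gamma in subsets(SENTENCES):
    for delta in subsets(SENTENCES):
        for phi in SENTENCES:
            if (exhausts(set(gamma) | {phi}, set(delta))
                    and exhausts(set(gamma), set(delta) | {phi})):
                assert exhausts(set(gamma), set(delta))
```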
One of the reasons for considering conditional exhaustiveness is that a law providing alternatives via assumptions follows from the basic laws. This law takes the following form:
Alternatives via assumptions. If both φ, ψ ⊨ and ⊨ φ, ψ (i.e., φ and ψ are contradictory), then Γ ⊨ φ, Δ if and only if Γ, ψ ⊨ Δ.
To see why this follows, suppose that φ and ψ are contradictory and Γ ⊨ φ, Δ. We can apply the chain law with Γ, ψ ⊨ Δ as the claim we wish to establish and φ, ψ ⊨ (i.e., φ, ψ ⊨ ∅) as the claim cited in condition (iii). Because the Θ mentioned in the law is the empty set ∅ in this case, there is nothing to show for (ii) since there is no member of Θ for which it might fail; and (i) says merely that Γ, ψ ⊨ φ, Δ, which holds by monotonicity (since we have assumed that Γ ⊨ φ, Δ), and Γ, ψ ⊨ ψ, Δ, which holds by repetition. We can use ⊨ φ, ψ in a similar way to show Γ ⊨ φ, Δ when we suppose Γ, ψ ⊨ Δ.
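The law just derived can also be confirmed in a toy model that contains a contradictory pair. In the sketch below (an invented illustration), the sentence named 'NA' is stipulated to be true in exactly the worlds where 'A' is false, so the pair is jointly inconsistent and jointly exhaustive:

```python
from itertools import product, chain, combinations

# Toy model with a built-in contradictory pair: 'NA' is true exactly
# when 'A' is false; 'B' is an independent atomic sentence.
WORLDS = [{'A': a, 'NA': not a, 'B': b}
          for a, b in product([True, False], repeat=2)]
SENTENCES = ['A', 'NA', 'B']

def exhausts(gamma, delta):
    """Gamma |= Delta: every world making all of gamma true makes at
    least one member of delta true."""
    return all(any(w[q] for q in delta)
               for w in WORLDS if all(w[p] for p in gamma))

def subsets(xs):
    """All subsets of xs, as tuples."""
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# 'A' and 'NA' are contradictory:
assert exhausts({'A', 'NA'}, set())   # jointly inconsistent: A, NA |=
assert exhausts(set(), {'A', 'NA'})   # jointly exhaustive:   |= A, NA

# Alternatives via assumptions: gamma |= A, delta iff gamma, NA |= delta,
# checked for every gamma and delta in the model.
for gamma in subsets(SENTENCES):
    for delta in subsets(SENTENCES):
        assert (exhausts(set(gamma), {'A'} | set(delta))
                == exhausts(set(gamma) | {'NA'}, set(delta)))
```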
We cannot expect to get the law providing for inconsistency via absurdity without some principle stating the logical properties of ⊥ (something we will consider in the next subsection), but we can say that Γ ⊨ (i.e., Γ is inconsistent) if Γ ⊨ φ and φ ⊨ (i.e., φ is absurd). (The argument applies the chain law in a way similar to, but simpler than, the one we just saw.) In the other direction, knowing that Γ is inconsistent does not enable us to conclude that it entails some inconsistent sentence because we don’t yet have a law telling us that there are any inconsistent sentences. But we can say that if Γ is inconsistent, it entails any inconsistent sentence there is because an inconsistent set entails any sentence whatsoever: we know that if Γ ⊨ (i.e., Γ ⊨ ∅) then Γ ⊨ φ, for any sentence φ, by monotonicity. This gives us the following law pointing the way to, if not providing, inconsistency via absurdity:
Inconsistency via absurdity. If φ ⊨, then Γ ⊨ if and only if Γ ⊨ φ.