2.2.1. Proofs as trees
Our study of entailments involving conjunction will rest on the principles discussed in 2.1.1. These are shown below, in symbolic form on the left and in English on the right:
φ ∧ ψ ⊨ φ          both φ and ψ ⊨ φ
φ ∧ ψ ⊨ ψ          both φ and ψ ⊨ ψ
φ, ψ ⊨ φ ∧ ψ       φ, ψ ⊨ both φ and ψ
We will refer to the first two of these patterns as extraction (left and right extraction to distinguish them) and to the third simply as conjunction. To establish particular cases of entailment, we will want to put together instances of these general patterns and, eventually, instances of other patterns, too. We will look at three different sorts of notation for doing this. It is the third, to be introduced in the next subsection, that we will employ in the end; but two others, which we will consider in this subsection, are useful for understanding the one we will actually use.
What may be the most direct notation for combining principles of entailment employs something like the two-dimensional form we have used for arguments, with the conclusion below the premises and marked off from them by a horizontal line. In order to make the premises of a multi-premised argument available to serve as conclusions of further argument, we will spread them out horizontally. In this style of notation, the basic patterns for conjunction take the following forms (where abbreviations of their names are used as labels):
φ ∧ ψ
────── Ext
  φ

φ ∧ ψ
────── Ext
  ψ

φ    ψ
────── Cnj
φ ∧ ψ
Arguments exhibiting these patterns can be linked by treating the premises of one argument as conclusions of other arguments. For example, the following shows that (A ∧ B) ∧ C is a valid conclusion from the two premises A and B ∧ C:
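The tree-form proof referred to here does not appear in the text. Reconstructed from the description in the following paragraph (the layout is a guess, though the steps are the ones described), it can be rendered in LaTeX inference-style notation as:

```latex
\dfrac{\dfrac{A \qquad \dfrac{B \wedge C}{B}\;\text{Ext}}
             {A \wedge B}\;\text{Cnj}
       \qquad
       \dfrac{B \wedge C}{C}\;\text{Ext}}
      {(A \wedge B) \wedge C}\;\text{Cnj}
```

The tips of the branches are the premises A and B ∧ C (the latter used twice), and the final step combines A ∧ B and C by Cnj, matching the account of the proof given in the next paragraph.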
The ability to put entailments together in this way rests on the general laws of entailment discussed in 1.4.6. The law for premises enables us to begin; it shows that the premises A and B ∧ C entail the tips of the branches of this tree-form proof. Repeated uses of the chain law then enable us to add conclusions drawn using the principles for conjunction, and we work our way down the tree showing that the original set of premises entails each intermediate conclusion and, eventually, (A ∧ B) ∧ C. For example, just before the end, we know that our original premises entail each of the premises from which the final conclusion is drawn—i.e., that A, B ∧ C ⊨ A ∧ B and A, B ∧ C ⊨ C. The chain law then enables us to combine these entailments with the fact that A ∧ B, C ⊨ (A ∧ B) ∧ C (a case of Conjunction) to show that A, B ∧ C ⊨ (A ∧ B) ∧ C.
The simplicity of these tree-form proofs makes them useful for studying the general properties of proofs, but actually writing them out can become awkward. In the next section, we will look at a different sort of notation that makes it easier to write out proofs. It is most closely tied to a different way of stating the basic principles for conjunction that builds in the use of the chain law that was described above. Rather than pointing to particular valid arguments involving conjunction, these principles describe general conditions under which any arguments involving conjunction are valid.
Law for conjunction as a premise. Γ, φ ∧ ψ ⊨ χ if and only if Γ, φ, ψ ⊨ χ
Law for conjunction as a conclusion. Γ ⊨ φ ∧ ψ if and only if both Γ ⊨ φ and Γ ⊨ ψ
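Since the next paragraph justifies these laws by comparing the possible worlds each side rules out, it may help to see that comparison carried out mechanically. The sketch below is not from the text: the encoding of formulas as string atoms and ('and', …, …) tuples, and the function names, are invented for illustration. It treats truth assignments as worlds and checks an instance of each law by brute force.

```python
# A sketch, not from the text: formulas are strings (atoms) or tuples
# ('and', left, right); worlds are dicts assigning True/False to atoms.
from itertools import product

def atoms(f):
    """The atomic sentences occurring in a formula."""
    if isinstance(f, str):
        return {f}
    _, left, right = f
    return atoms(left) | atoms(right)

def holds(f, world):
    """Truth value of f in a world."""
    if isinstance(f, str):
        return world[f]
    _, left, right = f
    return holds(left, world) and holds(right, world)

def entails(premises, conclusion):
    """Gamma |= phi: no world makes every premise true and the conclusion false."""
    names = sorted(set().union(atoms(conclusion), *(atoms(p) for p in premises)))
    return not any(
        all(holds(p, dict(zip(names, vals))) for p in premises)
        and not holds(conclusion, dict(zip(names, vals)))
        for vals in product([True, False], repeat=len(names))
    )

# Law for conjunction as a premise, instantiated with Gamma = [A]:
law1 = entails(['A', ('and', 'B', 'C')], ('and', 'A', 'B')) == \
       entails(['A', 'B', 'C'], ('and', 'A', 'B'))

# Law for conjunction as a conclusion, with the same Gamma on the left:
law2 = entails(['A', ('and', 'B', 'C')], ('and', ('and', 'A', 'B'), 'C')) == (
       entails(['A', ('and', 'B', 'C')], ('and', 'A', 'B'))
       and entails(['A', ('and', 'B', 'C')], 'C'))

print(law1, law2)  # → True True: both sides of each law agree on these instances
```

Checking single instances this way does not, of course, prove the laws; it only illustrates the comparison of worlds that the general argument rests on.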
These principles can be seen to hold by comparing the sorts of possible worlds each side of the if and only if rules out.
The if parts of these principles reflect the validity of arguments of the forms Ext and Cnj, respectively, together with the chain law. For example, since Ext tells us that each of φ and ψ is a valid conclusion from φ ∧ ψ, any valid conclusion from φ and ψ, perhaps along with other premises Γ, will follow from φ ∧ ψ along with those other premises, and this is what the if part of the first principle says. The only if parts tell us that the validity of the arguments on their left sides can always be established in this way. For example, the only if part of the second tells us that, if a conjunction is a valid conclusion, then the premises needed to reach it by Cnj are bound to be valid conclusions also; so it should be possible to establish what we need in order to apply Cnj. So these two principles can be understood to tell us what can be shown by tree-form proofs that employ the patterns Ext and Cnj.
When conjunction is the only connective employed in our analysis of sentences, applying these two laws for conjunction repeatedly will eventually bring us back to arguments whose premises and conclusions are all unanalyzed components. A simple argument like that will be valid if its conclusion is among its premises; and, if the unanalyzed components making it up are logically independent, that is the only way it can be valid. This means that the two principles for conjunction combine with the law for premises to provide a complete account of validity for arguments involving only conjunction.
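The complete account just described amounts to a short decision procedure. The sketch below is not from the text (the encoding of formulas as string atoms and ('and', …, …) tuples, and the function names, are invented for illustration): the law for conjunction as a premise licenses flattening each premise into its atomic conjuncts, the law for conjunction as a conclusion reduces the conclusion to its atomic conjuncts, and the law for premises decides each resulting atomic case.

```python
# A sketch, not from the text, of the decision procedure the two laws yield
# for conjunction-only arguments whose atoms are logically independent.

def conjuncts(f):
    """Flatten a formula into its atomic conjuncts, as repeated use of the
    law for conjunction as a premise (or as a conclusion) would."""
    if isinstance(f, str):
        return [f]
    _, left, right = f
    return conjuncts(left) + conjuncts(right)

def valid(premises, conclusion):
    """Decide Gamma |= phi syntactically: after both laws have been applied
    exhaustively, the law for premises settles each atomic case, so the
    argument is valid exactly when every atomic conjunct of the conclusion
    occurs among the atomic conjuncts of the premises."""
    available = set()
    for p in premises:
        available.update(conjuncts(p))
    return all(atom in available for atom in conjuncts(conclusion))

# The running example: A, B ∧ C |= (A ∧ B) ∧ C
print(valid(['A', ('and', 'B', 'C')], ('and', ('and', 'A', 'B'), 'C')))  # → True
```

Note that the final check is exactly the "conclusion among the premises" condition stated above, applied once per atomic conjunct of the conclusion.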
These two principles could be used to show that A, B ∧ C ⊨ (A ∧ B) ∧ C—that is, to show what we showed in the earlier tree-form proof—in the following way:
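The sequent proof itself is missing at this point in the text. One reconstruction consistent with the two laws (the particular order in which they are applied is a guess) can be rendered in LaTeX as:

```latex
\dfrac{\dfrac{\dfrac{\dfrac{\strut}{A, B, C \vDash A}
                     \qquad
                     \dfrac{\strut}{A, B, C \vDash B}}
                    {A, B, C \vDash A \wedge B}
              \qquad
              \dfrac{\strut}{A, B, C \vDash C}}
             {A, B, C \vDash (A \wedge B) \wedge C}}
      {A,\ B \wedge C \vDash (A \wedge B) \wedge C}
```

The empty spaces above the topmost lines record uses of the law for premises; the two-premise steps are uses of the law for conjunction as a conclusion; and the final step uses the law for conjunction as a premise, read from right to left, to trade the separate premises B and C for B ∧ C.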
Like the tree-form proofs, this second way of writing proofs is being used only temporarily, but it is useful to have a name for it. It is close in form to a standard notation for proofs in which the separate claims of entailment are called sequents, so we will refer to proofs of this sort as sequent proofs. Sequent proofs will have the form of trees, but they will be trees of sequents rather than the trees of sentences we see in tree-form proofs.
A sequent proof can be read top to bottom as a tree-form proof like the earlier one except for two differences: (i) we are now reasoning about claims of entailment rather than unspecified sentences, and (ii) we are using principles of entailment rather than valid patterns of argument that apply to sentences of any sort. The examples we have looked at have the further difference that the sequent proof begins with no assumptions—so the horizontal lines at the top have nothing above them—since the premises for principles at the next level down are provided by the law for premises, and that principle states categorically that certain arguments are valid (rather than making a conditional if and only if claim).
A sequent proof can also be read from the bottom up—that is, it can be understood to grow like a tree with a claim of entailment at its root. Looking at it in this way, a sequent proof serves to investigate the conditions under which the entailment at its root holds. Or, more pointedly, it serves to search for ways in which that entailment might fail, ways of dividing its premises from its conclusion. In the example above, that search ends at the tips of branches when we run into arguments whose conclusions are among their premises.
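Read this way, the laws suggest a search procedure. The sketch below is not from the text (formulas are encoded, purely for illustration, as string atoms and ('and', …, …) tuples): it searches for a way of dividing the premises from the conclusion, and returns one whenever the search does not end at arguments whose conclusions are among their premises.

```python
# A sketch, not from the text: bottom-up search for a dividing world,
# assuming the atoms involved are logically independent.

def conjuncts(f):
    """Flatten a conjunction-only formula into its atomic conjuncts."""
    if isinstance(f, str):
        return [f]
    _, left, right = f
    return conjuncts(left) + conjuncts(right)

def dividing_world(premises, conclusion):
    """Search for a world dividing the premises from the conclusion.
    Returns the set of atoms to make true in such a world, or None when
    every branch of the search ends with the conclusion among the premises."""
    true_atoms = set()
    for p in premises:                     # law for conjunction as a premise
        true_atoms.update(conjuncts(p))
    for atom in conjuncts(conclusion):     # law for conjunction as a conclusion
        if atom not in true_atoms:
            # Making exactly the premise atoms true keeps every premise true
            # while falsifying this conjunct, and with it the conclusion.
            return true_atoms
    return None

print(dividing_world(['A', ('and', 'B', 'C')],
                     ('and', ('and', 'A', 'B'), 'C')))          # → None
print(sorted(dividing_world([('and', 'A', 'B')],
                            ('and', 'A', 'C'))))                # → ['A', 'B']
```

In the first call the search fails, confirming the entailment of the running example; in the second it succeeds, exhibiting a world (A and B true, C false) that divides the premise from the conclusion.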
Any notation for writing out proofs is more than we need to settle questions of entailment involving only conjunction. But the complications introduced by the logical forms we will consider in later chapters make it useful to have some system of notation, and, because we have simpler ways of seeing that entailments hold in the case of conjunction, it will be easier to see how this notation works if we develop it now. Neither the tree-form proofs nor the sequent proofs are the sort of notation we will actually adopt; but that more compact notation will have ties to both of them, and it will be useful to look at them from time to time since they do exhibit quite clearly some features of the compact notation that are disguised by its compactness.