THE LOGIC NOTES

Deduction Introduction


We shall now start to construct a formal calculus for the rigorous proof of logically valid sequents. Before coming to details, however, we must say what in general is to count as a proof in the system, and to explain this we begin with the notion of a derivation.

An argument consists simply of premises and conclusion, and it may be quite unclear how, if at all, the conclusion is supposed to follow from the premises. For example, the axioms of a theory, such as Euclid's geometry, can serve as premises and any theorem of that theory can be the conclusion. The argument so formed is valid, but in the case, say, of Pythagoras's theorem it may take real ingenuity to discover reasons for thinking this to be so. The axioms, after all, just say things like 'Any two points lie on exactly one straight line' and 'Any straight line can be produced indefinitely', while the conclusion is that the square of the hypotenuse of a right-angled triangle is the sum of the squares of the other two sides. In this case, the proof is easy enough once someone has shown you how to do it, but it is not trivial and was at one time a real achievement. What we do to make an argument like this convincing is to fill in the gap between premises and conclusion with intermediate steps, perhaps with comments saying how each of the successive statements follows from earlier ones. By this means the conclusion is derived from the premises. Arguments in a system, however difficult and unobvious they may be, can thus be reduced to small, simple, obviously valid steps which can be classified and codified. The proof of sequents in formal logic is reducible to about a dozen small-scale rules whose operation can be iterated to produce intricate derivations.

There is a worthwhile parallel to this reduction in the theory and practice of computer programming. A programming language consists of a fairly small number of special commands and simple constructions—assigning a value to a variable, making control jump to a given place in the code, evaluating an arithmetical expression and the like—which are strung together in their thousands to make sophisticated programs. The complexity arises from the iteration, not the basic parts. Another analogy, slightly less close but still good, is the construction of the English language in all its intricacy and richness out of a finite stock of words using only a few basic grammatical devices. Again, think of all the elaborate and varied buildings that can be made from the same few simple ingredients such as bricks, beams, tiles etc. Or think of the infinite variety of music stemming from a few dozen notes variously arranged, or of the amazing range of pictures—photographs, say—which can be built up from regularly placed dots of three primary colours. In all such cases the source of diversity and pattern is the endless possibility of combination. Most of these notes are designed to introduce you to the base level of proof construction, to the simple ingredients and their small-scale application. Thus, to pursue the analogies, it is like teaching you to program in assembly language rather than at the high level; it is more bricklaying than architecture.

At its most basic, a proof of a sequent X ⊢ A is a derivation of A from X. In the simple case, a derivation is nothing but a finite sequence of formulae. In order to be a derivation of A, the last formula in it must, of course, be A; in order for it to be a (simple) derivation from X, every formula in it which is not an immediate consequence of formulae earlier in the sequence must be a member of X. The notion of "immediate consequence" can be defined differently for different logical calculi. It is whatever is stipulated to be a one-step inference, not requiring reduction to anything more primitive. For the calculus we are going to use, we need the slightly more complicated notion of a compound derivation. This is like a simple derivation except that the items in the finite sequence may be formulae or may be derivations in their own right. Within a derivation D, an item S is a subderivation of B from Y if it is itself a finite sequence of formulae and possibly further subderivations, ending with B, in which every formula which is not an immediate consequence of earlier items and does not already occur earlier in D is in Y. The number of formulae in the whole structure, including those in its substructures, must be finite. The purpose of this more complicated definition will become clear later, when we see how it provides an intuitive account of the logic of certain connectives.
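The definition of a simple derivation can be sketched as a small checking procedure. The sketch below is illustrative only: it assumes formulae are represented as plain strings and that the immediate-consequence relation is supplied as a predicate, neither of which is part of the calculus itself.

```python
# Checking a *simple* derivation: a finite sequence of formulae ending in
# the conclusion A, in which every formula is either a member of the
# premise set X or an immediate consequence of formulae earlier in the
# sequence. Formulae as strings and the predicate `immediate` are
# illustrative assumptions, not the notation of any particular calculus.

def is_simple_derivation(seq, premises, conclusion, immediate):
    """seq: list of formulae; premises: set of formulae;
    immediate(earlier, f) -> True if f follows in one step from
    some formulae in the list `earlier`."""
    if not seq or seq[-1] != conclusion:
        return False
    for i, f in enumerate(seq):
        if f in premises:
            continue
        if immediate(seq[:i], f):
            continue
        return False          # f is neither a premise nor a one-step inference
    return True

# Toy immediate-consequence relation: from P and 'P -> Q' infer Q
# (modus ponens on string-formatted conditionals).
def modus_ponens(earlier, f):
    return any(c == p + ' -> ' + f for c in earlier for p in earlier)

print(is_simple_derivation(['P', 'P -> Q', 'Q'],
                           {'P', 'P -> Q'}, 'Q', modus_ponens))  # True
```

Note that the check is purely local: each formula is justified by looking only at the premises and the formulae before it, which is exactly what reduces a whole argument to small, independently obvious steps.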

The logical calculus we shall develop here is called "natural deduction", because each of the primitive rules of inference directly reflects the meaning of a connective which it serves to introduce or to eliminate. This is the easiest form of logical calculus for people to understand and manipulate, though machines tend to work better with formulations that sacrifice intuitive meaning for simplicity.
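The pairing of introduction and elimination rules can be illustrated for a single connective, conjunction. The representation of formulae as nested tuples below is an assumption made for the sketch, not the official notation of the calculus.

```python
# A sketch of the introduction/elimination pairing for one connective,
# conjunction. A formula A & B is represented as the tuple ('&', A, B);
# this encoding is an illustrative assumption.

def and_intro(a, b):
    """&-introduction: from A and B, infer A & B."""
    return ('&', a, b)

def and_elim_left(f):
    """&-elimination (left): from A & B, infer A."""
    assert f[0] == '&'
    return f[1]

def and_elim_right(f):
    """&-elimination (right): from A & B, infer B."""
    assert f[0] == '&'
    return f[2]

conj = and_intro('P', 'Q')
print(and_elim_left(conj), and_elim_right(conj))  # P Q
```

Each rule here does exactly one thing, and the elimination rules undo what the introduction rule builds, which is the sense in which the rules "directly reflect the meaning" of the connective.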

There are many ways to write out proofs in a readable form. In order to avoid getting too entangled in the details of any one of them, and more importantly to avoid confusing what is conceptually important with what is superficial, we shall look at more than one representation. Sometimes we shall write proofs out sequentially as a series of "lines of proof", thus representing the derivation sequences literally. Sometimes we shall draw proofs in two dimensions, making it clear how the relation of immediate consequence structures them into trees. The nature of these different ways of presenting one and the same proof will also become clear when we see examples.
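The two styles of presentation can be suggested with a toy proof of P, P → Q ⊢ Q. The data layout below is purely illustrative; the point is only that one and the same derivation can be recorded as a numbered sequence of lines or as a tree rooted in the conclusion.

```python
# One toy proof of  P, P -> Q |- Q  presented two ways.
# Both representations are illustrative assumptions for this sketch.

# (1) As a sequence of "lines of proof": (line number, formula, justification).
lines = [
    (1, 'P',      'premise'),
    (2, 'P -> Q', 'premise'),
    (3, 'Q',      'from 1, 2'),
]

# (2) As a tree: each node is (formula, list of subproofs); the root is
# the conclusion and the leaves are the premises.
tree = ('Q', [('P', []), ('P -> Q', [])])

print(lines[-1][1], tree[0])  # Q Q
```

The linear form records the order in which steps are written down; the tree form records which formulae each step is an immediate consequence of.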