THE LOGIC NOTES

Inference in the abstract Introduction

Validity of inferences or potential inferences is a relation between premises and conclusions. In formal logic we take it to be a binary relation, always relating exactly two things. Since an argument may have any number of premises, zero or more, this means we have to collect the premises it does have into a set, and then construe valid argument as a relation between this set (one object) and the conclusion formula (another object).

The splendid phrase 'zero or more' is used only by logicians. Having taken this course, you can use it too.

For example, the argument

All footballers are bipeds;
Socrates is a footballer;
Therefore Socrates is a biped.

has two premises and a conclusion, making three sentences altogether, so in order to reason about its validity we seem to need a ternary relation (connecting three things). On the other hand, the argument

Socrates is both a footballer and a philosopher;
Therefore some philosopher is a footballer.

has only one premise, so it calls for a binary relation (connecting two things), while

All footballers are bipeds;
Socrates is a footballer;
Socrates is a philosopher;
Therefore some philosopher is a biped.

contains four sentences… and so forth. It would be very inconvenient to formulate our theory of validity using a different relation for each of these cases; we want to say that validity is the same concept irrespective of the number of premises. The solution is to note that although there may be any number of premises, there is always exactly one set of premises, so by taking validity to relate that set to the conclusion, we simplify the theory in just the right way.

Recall that we are concerned with argument forms, which we represent by using the formulae of an invented formal language in place of actual sentences. Where X is a set of formulae and A is a single formula, we call the pair X : A a sequent, and write

X   ⊢   A
to mean that A follows logically from [the formulae in] X, or in other words that the argument from X to A is valid.

In order to reduce clutter, we use a comma rather than '∪' to symbolise set union, and omit the curly braces from set notation wherever possible. Thus we write

X, Y, A   ⊢   B
for instance, to abbreviate the more cumbersome
X ∪ Y ∪ {A}   ⊢   B
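The comma convention is easy to mirror in code. The sketch below (my own illustration, with formulae represented simply as strings) treats a sequent as a pair of a premise set and a conclusion formula, so that the comma on the left of '⊢' really is set union.

```python
# A sketch of sequents as data, assuming formulae are plain strings.
# The premise side is a set, so "X, Y, A |- B" is union on the left.

def sequent(premises, conclusion):
    """Build a sequent as (frozenset of premises, conclusion formula)."""
    return (frozenset(premises), conclusion)

X = {"p", "q"}
Y = {"r"}

# X, Y, A |- B  abbreviates  X ∪ Y ∪ {A} |- B
s = sequent(X | Y | {"s"}, "t")
print(sorted(s[0]))  # ['p', 'q', 'r', 's']
print(s[1])          # t
```

Using a frozenset for the premise side makes the point that the premises form one object: the same set however the commas happen to group them.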

Before going into any detail of the formal language or its logic, we may note some important features of any relation of logical consequence defined over any language whatsoever. The "reflexivity" and "transitivity" conditions are not strictly speaking the same as the properties of reflexivity and transitivity of binary relations (see below) but are slight generalisations suitable to the present case. No confusion should result from this.

Reflexivity:
      X   ⊢   A   if A is a member of X. In particular,   A   ⊢   A.

This is simple enough: if A is in the database X then the query A succeeds without requiring any inference steps. There is no way for everything in X to be true without A being true if A happens to be one of the things in X.
    Example:
        Socrates is a footballer;
        Therefore Socrates is a footballer.
Reflexivity on its own gives arguments which are extremely boring, but obviously valid.
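The database intuition can be put in half a dozen lines of code. This is only an illustrative sketch (the function name and the sample data are mine): a query A against a database X succeeds by reflexivity exactly when A is already one of the stored facts.

```python
# Reflexivity as database lookup: if A is already in X, the query
# succeeds with no inference steps at all.  There is no way for
# everything in X to be true without A being true, since A is in X.

def follows_by_reflexivity(X, A):
    return A in X

X = {"Socrates is a footballer", "All footballers are bipeds"}

assert follows_by_reflexivity(X, "Socrates is a footballer")
assert not follows_by_reflexivity(X, "Socrates is a biped")
```

Note that the second query fails here not because the conclusion is invalid but because reflexivity alone performs no inference: deriving "Socrates is a biped" needs more machinery than membership.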
Monotonicity:
      If   X   ⊢   A   then for any bigger set Y of which X is a subset,   Y   ⊢   A.

This means that whatever follows from just some of the assumptions or data follows from the set, or in other words that adding more information cannot destroy any inferences.
    Example:
        Socrates is a footballer;
        Aristotle is a postman;
        All footballers are bipeds;
        Canberra is bigger than Goulburn;
        Therefore Socrates is a biped.
The irrelevant premises can be ignored, as they don't affect the valid inference from the useful ones.
Transitivity:
      If   X   ⊢   A   and   Y, A   ⊢   B   then   X, Y   ⊢   B.

This is a familiar idea: if you can derive some lemmas from the axioms of a theory, and then derive a theorem from the lemmas, you can chain the arguments together to obtain a derivation of the theorem from the axioms. This important principle is often called "cut" in the literature of proof theory, because it allows the formula A to be snipped out of the two sequents when they are combined.
    Example:
        Socrates is a footballer, and all footballers are bipeds; so Socrates is a biped.
        Socrates is a biped, but no goats are bipeds; so Socrates is not a goat.

Putting these two arguments together:
        Socrates is a footballer;
        All footballers are bipeds;
        No goats are bipeds;
        Therefore Socrates is not a goat.

In the example, the instance of X is the set

{ "Socrates is a footballer", "All footballers are bipeds" }

while Y stands for the one-element set

{ "No goats are bipeds" }.

The conclusion B is that Socrates is not a goat. The intermediate proposition that he is a biped, represented by the formula A, is used along the way but does not occur in the final argument.
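The cut example can be checked mechanically for the truth-table consequence relation of propositional logic. The sketch below is my own toy encoding, not part of the notes: it flattens the quantified sentences into propositional atoms (f for "Socrates is a footballer", b for "biped", g for "goat") with implications standing in for the "All..." and "No..." premises, and defines X ⊢ A as: every valuation making everything in X true also makes A true.

```python
from itertools import product

# A toy propositional language: a formula is an atom (a string),
# ("not", f), or ("imp", f, g).  These names are illustrative inventions.
ATOMS = ["f", "b", "g"]   # footballer, biped, goat

def value(formula, v):
    """Truth value of a formula under valuation v (a dict on ATOMS)."""
    if isinstance(formula, tuple):
        if formula[0] == "not":
            return not value(formula[1], v)
        if formula[0] == "imp":
            return (not value(formula[1], v)) or value(formula[2], v)
    return v[formula]

def entails(X, A):
    """X |- A: every valuation making all of X true also makes A true."""
    for vals in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(value(p, v) for p in X) and not value(A, v):
            return False
    return True

X = {"f", ("imp", "f", "b")}        # footballer; footballers are bipeds
Y = {("imp", "g", ("not", "b"))}    # goats are not bipeds
A = "b"                             # Socrates is a biped
B = ("not", "g")                    # Socrates is not a goat

assert entails(X, A)          # X |- A
assert entails(Y | {A}, B)    # Y, A |- B
assert entails(X | Y, B)      # cut: X, Y |- B, with A snipped out
assert entails(X | Y, A)      # monotonicity: extra premises do no harm
```

The last assertion illustrates monotonicity along the way: enlarging X with the goat premise cannot destroy the inference to A.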

Any relation satisfying the above three principles (reflexivity, monotonicity and transitivity) is called a consequence relation. Of course, some relations may just happen to be consequence relations in this technical sense without having much to do with reasoning - simply because they satisfy the three conditions. For example, suppose X is a set of people and A is a person. We might say that X is "ancestrally relevant" to A (I just made this up) to mean that either A or some descendant of A is in X. Then the relation of being ancestrally relevant is a consequence relation. Check the three conditions if you don't believe it.
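For the doubters, the three conditions can be checked by brute force on a small example. The family tree and the function names below are illustrative inventions matching the definition in the text: X is ancestrally relevant to a person a when a itself, or some descendant of a, is in X.

```python
from itertools import combinations

# A small made-up family, recorded as parent -> immediate children.
CHILDREN = {"alice": {"bob", "carol"}, "bob": {"dave"},
            "carol": set(), "dave": set()}
PEOPLE = set(CHILDREN)

def descendants(a):
    """All descendants of a: children, grandchildren, and so on."""
    out, frontier = set(), set(CHILDREN[a])
    while frontier:
        x = frontier.pop()
        if x not in out:
            out.add(x)
            frontier |= CHILDREN[x]
    return out

def rel(X, a):
    """X is 'ancestrally relevant' to a: a or a descendant of a is in X."""
    return a in X or bool(descendants(a) & X)

subsets = [frozenset(s) for n in range(len(PEOPLE) + 1)
           for s in combinations(sorted(PEOPLE), n)]

# Reflexivity: if a is in X then rel(X, a).
assert all(rel(X, a) for X in subsets for a in X)
# Monotonicity: rel(X, a) and X a subset of Y imply rel(Y, a).
assert all(rel(Y, a) for X in subsets for Y in subsets if X <= Y
           for a in PEOPLE if rel(X, a))
# Transitivity (cut): rel(X, a) and rel(Y | {a}, b) imply rel(X | Y, b).
assert all(rel(X | Y, b)
           for X in subsets for Y in subsets
           for a in PEOPLE if rel(X, a)
           for b in PEOPLE if rel(Y | {a}, b))
```

Exhausting all subsets of a four-person family is of course no proof for the general case, but it makes a convincing start; the general argument for cut turns on the fact that a descendant of a descendant of b is itself a descendant of b.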

Tarski originally included an extra condition of compactness: that if A is a consequence of X then A is a consequence of some finite subset of X, but this condition excludes some systems that seem quite interesting as logics, so more modern accounts do not insist on compactness. Other authors sometimes insist that a consequence relation should be structural in the sense that it should be closed under some notion of substitution (see below) but here we prefer to see structural consequence relations as an important special case of the more general concept. See Citkin for an excellent detailed account.

The abstract definition of consequence, then, is only a small step towards an adequate theory of logic. It is important, though, as it sets out some minimal conditions that such a theory should meet. With that as a basis, we may now proceed to flesh out the account in stages.