Inference in the abstract

Introduction
Our goal in logic is to study the relationship between the premises of argument forms and their conclusions, with a view to devising a theory as to what makes some forms valid and others invalid. Having done this, we can then seek significance for our theory by mapping the argument forms back onto actual arguments in natural languages (or in artificial languages if those work in a similar way).
Note that although an argument can only have one conclusion, it may have any number of premises—zero or more. For example, the argument
All footballers are bipeds;
Socrates is a footballer;
Therefore Socrates is a biped.
has two premises and a conclusion, making three sentences altogether, so in order to reason about its validity we seem to need a ternary relation (connecting three things). On the other hand, the argument
Socrates is both a footballer and a philosopher;
Therefore some philosopher is a footballer.
has only one premise and a conclusion, so it calls for a binary relation (connecting just two sentences), while
All footballers are bipeds;
Socrates is a footballer;
Socrates is a philosopher;
Therefore some philosopher is a biped.
contains four sentences… and so forth. It would be inconvenient to formulate our theory of validity using a different relation for each of these cases; we want to say that validity is the same concept irrespective of the number of premises.
The solution is to note that although there may be any number of premises, there is always exactly one set of premises, so by taking validity to relate that set to the conclusion, we simplify the theory in just the right way. Validity is a binary relation: the set of premises is one object, and the conclusion is another, so there are always just two sides to the relation. (The splendid phrase 'zero or more' is used only by logicians. Having taken this course, you can use it too.)
Recall that we are concerned with argument forms, which we represent by using the formulae of an invented formal language in place of actual sentences. Where X is a set of formulae and A is a single formula, we call the pair X : A a sequent, and write

X ⊢ A

In order to reduce clutter, we use a comma rather than '∪' to symbolise set union, and omit the curly braces from set notation wherever possible. Thus we write

X, Y, A ⊢ B

rather than

X ∪ Y ∪ {A} ⊢ B
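To see how this works in concrete terms, here is a small sketch in Python. The class name `Sequent`, its field names, and the sample formulae are my own inventions, purely illustrative: the point is just that a sequent pairs one set with one formula, however many premises the set contains, and that the comma convention corresponds to set union.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sequent:
    """A sequent X : A — a set of premise formulae and one conclusion."""
    premises: frozenset   # the set X (formulae represented as strings here)
    conclusion: str       # the single formula A

# The comma convention: "X, Y, A |- B" abbreviates "X ∪ Y ∪ {A} |- B".
X = frozenset({"p", "q"})
Y = frozenset({"r"})
s = Sequent(X | Y | {"p -> s"}, "s")

# However many premises there are, a sequent has exactly two sides:
# the premise *set* is one object, the conclusion is another.
print(len(s.premises), s.conclusion)   # prints: 4 s
```

Whether the argument has zero premises or a thousand, the representation is the same pair of two objects, which is exactly what makes validity a binary relation.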
Before going into any detail of the formal language or its logic, we may note some important features of any relation of logical consequence defined over any language whatsoever. The "reflexivity" and "transitivity" conditions are not, strictly speaking, the same as the properties of reflexivity and transitivity of binary relations (see below) but are slight generalisations suitable to the present case. No confusion should result from this.
- Reflexivity:

  A ⊢ A.

  This is simple enough: if {A} literally is the database, then of course the query A succeeds without requiring any inference steps: by the time you have assumed the premise, you are already at the goal. There is no way for the member of the set {A} to be true without A being true, because A just is the member of that set!

  Example:
  Socrates is a footballer;
  Therefore Socrates is a footballer.

- Monotonicity:
  If X ⊢ A then for any bigger set Y of which X is a subset, Y ⊢ A.
  This means that whatever follows from just some of the assumptions or data follows from the whole set; in other words, adding more information cannot destroy any inferences.

  Example:
  Socrates is a footballer;
  Aristotle is a postman;
  All footballers are bipeds;
  Canberra is bigger than Goulburn;
  Therefore Socrates is a biped.

- Transitivity:
  If X ⊢ A and Y, A ⊢ B then X, Y ⊢ B.
  This is a familiar idea: if you can derive some lemmas from the axioms of a theory, and then derive a theorem using the lemmas, you can chain the arguments together to obtain a derivation of the theorem from the axioms. In general, if you are carrying out some reasoning, it doesn't matter how many intermediate steps you go through; as long as each individual step is valid, the resulting argument from the initial premises to the final conclusion is valid. This important principle is often called "cut" in the literature of proof theory, because it allows the formula A to be snipped out of the two sequents when they are combined.

  Example:
  Socrates is a footballer, and all footballers are bipeds; so Socrates is a biped.
  Socrates is a biped, but no goats are bipeds; so Socrates is not a goat.

  Putting these two arguments together:

  Socrates is a footballer;
  All footballers are bipeds;
  No goats are bipeds;
  Therefore Socrates is not a goat.
In this example, the instance of X is the set
{ "Socrates is a footballer", "All footballers are bipeds" }
while Y stands for the one-element set
{ "No goats are bipeds" }.
The conclusion B is that Socrates is not a goat. The intermediate proposition that he is a biped, represented by the formula A, is used along the way but does not occur in the final argument.
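The cut step in this example can be sketched as an operation on sequents, modelled here (my own representation, for illustration only) as pairs of a frozen set of premises and a conclusion. Given X ⊢ A and Y, A ⊢ B, the operation produces X, Y ⊢ B with the intermediate formula snipped out.

```python
def cut(seq1, seq2):
    """Combine X |- A with (Y ∪ {A}) |- B to obtain (X ∪ Y) |- B.

    Sequents are modelled as (frozenset_of_premises, conclusion) pairs.
    The cut formula A is seq1's conclusion; it is removed from seq2's
    premises when the two sequents are combined.
    """
    X, A = seq1
    Y_with_A, B = seq2
    assert A in Y_with_A, "cut formula must occur among the second premises"
    return (X | (Y_with_A - {A}), B)

first = (frozenset({"Socrates is a footballer",
                    "All footballers are bipeds"}),
         "Socrates is a biped")
second = (frozenset({"Socrates is a biped",
                     "No goats are bipeds"}),
          "Socrates is not a goat")

combined = cut(first, second)
print(combined[1])                             # prints: Socrates is not a goat
print("Socrates is a biped" in combined[0])    # prints: False
```

The intermediate formula appears in neither the premises nor the conclusion of the result, mirroring the way "Socrates is a biped" drops out of the combined argument.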
Any relation satisfying the above three principles (reflexivity, monotonicity and transitivity) is called a consequence relation. Of course, some relations may just happen to be consequence relations in this technical sense without having much to do with reasoning, simply because they satisfy the three conditions. For example, suppose X is a set of people and A is a person. We might say that X is "ancestrally relevant" to A (I just made this up) to mean that either A or some descendant of A is in X. Then the relation of being ancestrally relevant is a consequence relation. Check the three conditions if you don't believe it.
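If you would rather let a machine do the checking, here is a brute-force verification over a tiny invented family tree (the names and the parent table are made up for the example, as are the function names). It confirms that ancestral relevance satisfies all three conditions for every choice of sets and people drawn from this small population.

```python
from itertools import combinations

# A tiny, made-up family: child -> parent pairs.
parent = {"Cleo": "Ben", "Ben": "Ada", "Dan": "Ada"}
people = {"Ada", "Ben", "Cleo", "Dan"}

def ancestors(c):
    """Everyone reachable from c by following parent links upward."""
    seen = set()
    while c in parent:
        c = parent[c]
        seen.add(c)
    return seen

def descendants(a):
    """Everyone who has a as a (proper) ancestor."""
    return {c for c in parent if a in ancestors(c)}

def rel(X, a):
    """X is 'ancestrally relevant' to a: a or some descendant of a is in X."""
    return a in X or bool(descendants(a) & X)

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

# Brute-force check of the three consequence-relation conditions.
assert all(rel({a}, a) for a in people)                       # reflexivity
assert all(rel(Y, a)                                          # monotonicity
           for X in subsets(people) for Y in subsets(people)
           if X <= Y for a in people if rel(X, a))
assert all(rel(X | Y, b)                                      # transitivity
           for X in subsets(people) for Y in subsets(people)
           for a in people for b in people
           if rel(X, a) and rel(Y | {a}, b))
print("all three conditions hold")
```

The transitivity check is the interesting one: if the witness for Y, A ⊢ B is A itself, then A is B or a descendant of B, and whatever makes X ancestrally relevant to A also makes X ∪ Y ancestrally relevant to B, because the descendant relation chains.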
Tarski originally included an extra condition of compactness: that if A is a consequence of X then A is a consequence of some finite subset of X. This condition excludes some systems that seem quite interesting as logics, however, so more modern accounts do not insist on compactness. Other authors sometimes insist that a consequence relation should be structural in the sense that it should be closed under some notion of substitution (see below), but here we prefer to see structural consequence relations as an important special case of the more general concept. See Citkin for an excellent detailed account.

The abstract definition of consequence, then, is only a small step towards an adequate theory of logic. It is important, though, as it sets out some minimal conditions that such a theory should meet. With that as a basis, we may now proceed to flesh out the account in stages.