Inference in the abstract

Introduction
Validity of inferences or potential inferences is a relation between premises and conclusions. In formal logic we take it to be a binary relation, always relating exactly two things. Since an argument may have any number of premises, zero or more,^{1} this means we have to collect the premises it does have into a set, and then construe valid argument as a relation between this set (one object) and the conclusion formula (another object).
Recall that we are concerned with argument forms, which we represent by using the formulae of an invented formal language in place of actual sentences. Where X is a set of formulae and A is a single formula, we call the pair X : A a sequent, and write
X ⊢ A 
In order to reduce clutter, we use a comma rather than '∪' to symbolise set union, and omit the curly braces from set notation wherever possible. Thus we write

X, Y, A ⊢ B

rather than

X ∪ Y ∪ {A} ⊢ B
Before going into any detail of the formal language or its logic, we may note some important features of any relation of logical consequence defined over any language whatsoever.
Reflexivity:

X ⊢ A if A is a member of X. In particular, A ⊢ A.

This is simple enough: if A is in the database X then the query A succeeds without requiring any inference steps. There is no way for everything in X to be true without A being true, if A happens to be one of the things in X.

Example: Socrates is a footballer;
Therefore Socrates is a footballer.

Monotonicity:

If X ⊢ A then for any bigger set Y of which X is a subset, Y ⊢ A.

This means that whatever follows from just some of the assumptions or data follows from the whole set, or in other words that adding more information cannot destroy any inferences.

Example: Socrates is a footballer;
Aristotle is a postman;
All footballers are bipeds;
Canberra is bigger than Goulburn;
Therefore Socrates is a biped.

Transitivity:

If X ⊢ A and Y, A ⊢ B then X, Y ⊢ B.

This is a familiar idea: if you can derive some lemmas from the axioms of a theory, and then derive a theorem from the lemmas, you can chain the arguments together to obtain a derivation of the theorem from the axioms. This important principle is often called "cut" in the literature of proof theory, because it allows the formula A to be snipped out of the two sequents when they are combined.

Example: The argument "Socrates is a footballer; all footballers are bipeds; so Socrates is a biped" is valid. The argument "Socrates is a biped; no goats are bipeds; so Socrates is not a goat" is also valid. Putting these two together, we arrive at the valid argument: "Socrates is a footballer; all footballers are bipeds; no goats are bipeds; therefore Socrates is not a goat."
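The cut principle can be seen at work in a small computational sketch. Here classical consequence over propositional formulae is checked by brute force over truth valuations; the formula names p, q, r and the helper `entails` are invented for illustration, not part of the course notation.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """X entails A classically: every valuation making all of X true
    makes A true. Formulae are functions from a valuation
    (a dict mapping atom names to booleans) to a boolean."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# Atomic formulae and conditionals represented as truth functions.
p = lambda v: v["p"]
q = lambda v: v["q"]
r = lambda v: v["r"]
p_implies_q = lambda v: (not v["p"]) or v["q"]
q_implies_r = lambda v: (not v["q"]) or v["r"]

atoms = ["p", "q", "r"]
# X ⊢ A:      p, p→q ⊢ q
print(entails([p, p_implies_q], q, atoms))               # True
# Y, A ⊢ B:   q→r, q ⊢ r
print(entails([q_implies_r, q], r, atoms))               # True
# Cut gives X, Y ⊢ B:  p, p→q, q→r ⊢ r
print(entails([p, p_implies_q, q_implies_r], r, atoms))  # True
```

The third check is exactly what transitivity promises: the intermediate formula q has been snipped out, and the combined premise set still yields the conclusion.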
Any relation satisfying the above three principles (reflexivity, monotonicity and transitivity) is called a consequence relation. Of course, some relations may just happen to be consequence relations in this technical sense without having much to do with reasoning, simply because they satisfy the three conditions. For example, suppose X is a set of people and A is a person. We might say that X is "ancestrally relevant" to A (I just made this up) to mean that either A or some descendant of A is in X. Then the relation of being ancestrally relevant is a consequence relation. Check the three conditions if you don't believe it.
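The checking can be done mechanically. The sketch below verifies all three conditions for "ancestrally relevant" by brute force over a tiny invented family tree; the names and the `parent_of` map are made up for illustration.

```python
from itertools import chain, combinations

# A toy family: parent_of maps each person to their children.
parent_of = {"alice": ["bob"], "bob": ["carol"], "carol": [], "dave": []}
people = list(parent_of)

def descendants(a):
    """All descendants of a: children, grandchildren, and so on."""
    out, stack = set(), list(parent_of[a])
    while stack:
        d = stack.pop()
        if d not in out:
            out.add(d)
            stack.extend(parent_of[d])
    return out

def rel(X, a):
    """X is 'ancestrally relevant' to a: a or some descendant of a is in X."""
    return a in X or bool(descendants(a) & set(X))

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, k)
                                         for k in range(len(s) + 1))]

# Reflexivity: a ∈ X implies rel(X, a).
assert all(rel(X, a) for X in subsets(people) for a in X)

# Monotonicity: rel(X, a) and X ⊆ Y implies rel(Y, a).
assert all(rel(Y, a)
           for X in subsets(people) for Y in subsets(people) if X <= Y
           for a in people if rel(X, a))

# Transitivity (cut): rel(X, a) and rel(Y ∪ {a}, b) implies rel(X ∪ Y, b).
assert all(rel(X | Y, b)
           for X in subsets(people) for Y in subsets(people)
           for a in people for b in people
           if rel(X, a) and rel(Y | {a}, b))

print("all three conditions hold")
```

Exhaustive checking over a four-person universe is of course not a proof for arbitrary sets of people, but it makes it easy to convince yourself that no counterexample lurks in small cases.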
The abstract definition of consequence, then, is only a small step towards an adequate theory of logic. It is important, though, as it sets out some minimal conditions that such a theory should meet. With that as a basis, we may now proceed to flesh out the account in stages.


^{1} The splendid phrase 'zero or more' is used only by logicians. Having taken this course, you can use it too. 