\documentclass{entcs} % entcs ignores point size
%\usepackage{makeidx}
%\usepackage{acmnew-xref}
\usepackage{latexsym}
\usepackage{amssymb}
%\usepackage{exptrees}
\renewcommand{\today}{\number\day\
\ifcase\month\or
January\or February\or March\or April\or May\or June\or
July\or August\or September\or October\or November\or December\fi
\ \number\year}
%\setlength \parskip {1.0ex}
\setcounter{secnumdepth}{2}
\setcounter{tocdepth}{2}
% \pagestyle{myheadings}
\newcommand\comment[1]{}
\date{\today}
\newcommand\cfile{}
%\renewcommand{\thepage}{\roman{page}}
\newcommand\shrinklist[1]{
\setlength\parskip{#1\parskip}
\setlength\itemsep{#1\itemsep}
\setlength\itemindent{#1\itemindent}
\setlength\topsep{#1\topsep}
}
\newcommand\invrule[2]{
% for use in non-math mode, args interpreted in math mode
\begin{tabular}{@{}c@{}}
$ #1 $ \\ \hline \hline $ #2 $
\end{tabular}}
\newcommand\slrule[2]{ % similar to \frac, except
% for use in non-math mode, args interpreted in math mode
\begin{tabular}{@{}c@{}}
$ #1 $ \\ \hline $ #2 $
\end{tabular}}
%\newcommand\proof{\noindent\textbf{Proof.}}
\newcommand\src[1]{source file \texttt{#1}}
\newcommand\apsrc[1]{Appendix \ref{app:code}}
%\newcommand\seesrc[1]{(see source file \texttt{#1}, \apsrc{#1})}
\newcommand\seesrc[1]{(see source file \texttt{#1})}
%\newcommand\insrc[1]{in source file \texttt{#1} (see \apsrc{#1})}
\newcommand\insrc[1]{in source file \texttt{#1}}
\newcommand\page[1]{page~\pageref{#1}}
\newcommand\fig[1]{Fig.~\ref{fig:#1}}
\newcommand\ch[1]{Chapter~\ref{ch:#1}}
\newcommand\tbl[1]{Table~\ref{tbl:#1}}
\newcommand\thrm[1]{Theorem~\ref{thrm:#1}}
%\newcommand\lem[1]{Lemma~\ref{lem:#1}}
\newcommand\ttl[1]{\tti{#1}\label{#1}}
\newcommand\ttdl[1]{\texttt{#1}\tti{#1}\label{#1}}
\newcommand\ttdef[1]{\texttt{#1}\tti{#1}}
\newcommand\tti[1]{\index{#1@\texttt{#1}}}
\newcommand\bfi[1]{\textbf{#1}\index{#1|textbf}}
\newcommand\dRA{{\boldmath $\delta$}\textbf{RA}}
\newcommand\dKt{{\boldmath $\delta$}\textbf{Kt}}
\newcommand\NRA{\textbf{NRA}}
\newcommand\RA{\textbf{RA}}
%\newcommand\M{$\mathcal M$}
\newcommand\M{\ensuremath{\mathcal M}}
\newcommand\Mp{$\mathcal M'$}
\newcommand\up[1]{\vspace{-#1ex}}
\newcommand\vpf{\vspace{-3ex}} % to reduce space before proof tree
\newcommand\1{\textbf 1}
\newcommand\0{\textbf 0}
% automatically defined in LLNCS style
%\newtheorem{thm}{Theorem}%[chapter]
%\newtheorem{lemma}[thm]{Lemma}
%\newtheorem{defn}[thm]{Definition}
%\newtheorem{cor}{Corollary}[thm]
%\renewcommand\thecor{\thethm(\arabic{cor})}
\pagestyle{plain}
\raggedbottom
\begin{document}
\begin{frontmatter}
\title{Embedding Display Calculi into Logical Frameworks:
Comparing Twelf and Isabelle}
\author{Jeremy E. Dawson\thanksref{smallgrant}},
\address{
Department of Computer Science,
Faculty of Engineering and Information Technology,
Australian National University}
\thanks[smallgrant]{Supported by an Australian Research Council Small Research Grant.}
\author{Rajeev Gor\'e\thanksref{qeII}},
\address{
Automated Reasoning Group,
Computer Sciences Laboratory,
Research School of Information Science and Engineering,
Australian National University \\
and \\
Department of Computer Science,
Faculty of Engineering and Information Technology,
Australian National University}
\thanks[qeII]{Supported by an Australian Research Council Queen Elizabeth II
Fellowship.}
\begin{abstract}
Logical frameworks are computer systems which allow a user to formalise
mathematics using specially designed languages based upon mathematical logic
and Church's theory of types.
They can be used to derive programs from logical specifications, thereby
guaranteeing the correctness of the resulting programs.
They can also be used to formalise rigorous proofs about logical systems.
We compare several methods of implementing the display (sequent) calculus
\dRA\ for relation algebra in the logical frameworks Isabelle and Twelf.
We aim for an implementation enabling us to formalise,
within the logical framework,
proof-theoretic results such as the cut-elimination theorem for \dRA,
and any associated increase in proof length.
We discuss issues arising from this requirement.
%
%We then show how the implementation can be used to prove results comparing
%alternative formalisations of relation algebra
%from a proof-theoretic perspective.
\end{abstract}
\begin{keyword}
logical frameworks, higher-order logics,
proof systems for relation algebra,
non-classical logics, automated deduction,
display logic.
\end{keyword}
\end{frontmatter}
\sloppy
%\newpage
%% Here begins the main text
\section{Introduction and Motivation} \label{intro}
Logical frameworks are computer systems which allow a user to formalise
mathematics using specially designed languages based upon mathematical logic
and Church's theory of types.
They can be used to derive programs from logical specifications, thereby
guaranteeing the correctness of the resulting programs.
They can also be used to formalise rigorous proofs about logical systems.
Relation algebras \cite{maddux-introductory}
are extensions of Boolean algebras;
whereas Boolean algebras model subsets of a given set,
relation algebras model binary relations on a given set.
Relation algebras have operations such as
relational composition and relational converse,
and Boolean operations such as intersection
(conjunction) and complement (negation).
Relation algebras form the basis of relational databases
\cite{elmasri-navathe-database}
and of the specification and proof of correctness of programs,
particularly in the style of Mili \cite{mili-relational}.
Display Logic \cite{belnap-display} is a generalised sequent framework
for non-classical logics,
based on the Gentzen sequent calculus \cite{gallier-logic}.
%It can be applied to give syntactic proof systems for various logics.
Its advantages include a generic cut-elimination theorem, which
applies whenever the rules for the display calculus satisfy certain,
easily checked, conditions.
It is an extremely general logical formalism, applicable to many
(classical and non-classical) logics
in a uniform way \cite{gore-sld-igpl}, \cite{wansing-displaying-modal-logic}.
The generality of the display framework means that essentially the same
meta-level proofs work for many different logics.
A rigorous mechanical
formalisation of such proofs is then widely applicable and worth
pursuing.
In this paper we discuss the implementation of \dRA, a display calculus
for relation algebras, as a case study for exploring various methods for such a
mechanical formalisation of display calculi in general.
\comment{
Display calculi resemble the Gentzen sequent calculus \textbf{LK},
but with significant differences.
For example, the rules for introducing the connective `$\lor$'
(on the right) in \textbf{LK} and some display calculi are
\label{|-v}
$$\frac {\Gamma \vdash \Delta, P, Q}
{\Gamma \vdash \Delta, P \lor Q}(\mbox{\textbf{LK}-}\vdash \lor )
\qquad \textrm{ and } \qquad
\frac {X \vdash P , Q}
{ X \vdash P \lor Q}(\mbox{DC-}\vdash \lor )
$$
Whereas, in \textbf{LK},
$\Gamma$ and $\Delta$ denote comma-separated lists of formulae,
in a display calculus, $X$ denotes a \emph{structure},
which can involve several \emph{structural} connectives (one of which is `,').
A structure can be either a \emph{formula} (such as $P$ or $P \lor Q$)
or the result of a
structure operator acting on smaller structure(s) (such as $P , Q$).
Display calculi extend the Gentzen calculus with extra structural connectives,
such as `*' (meaning negation) and, for \dRA, operators with relational
meanings.
}
Display calculi extend Gentzen's language of sequents with extra, complex,
$n$-ary structural connectives, in addition to Gentzen's sole
structural connective, the ``comma''.
Whereas Gentzen assumed the comma to be associative, commutative
and inherently poly-valent,
display calculi make no such implicit assumptions.
Properties such as these are explicitly stated as structural rules.
For example, \dRA-sequents are built using a binary comma, a binary semicolon,
and unary $*$ and $\bullet$ structural connectives.
Thus, whereas Gentzen's sequents $\Gamma \vdash \Delta$ assume that
$\Gamma$ and $\Delta$ are comma-separated lists of formulae,
\dRA-sequents $X \vdash Y$ assume that
$X$ and $Y$ are complex tree-like structures built from formulae
together with comma, semicolon, $*$ and $\bullet$.
The defining feature of display calculi is that in all logical introduction
rules, the principal formula is always ``displayed'' as the whole of the
right-hand or left-hand side.
For example, the rule $(\mbox{\textbf{LK}-}\vdash \lor )$, shown below left,
is typical of Gentzen's sequent calculi like \textbf{LK}, while the rule
$(\mbox{\dRA-}\vdash \lor )$ shown on the right is typical of display calculi:
$$\frac {\Gamma \vdash \Delta, P, Q}
{\Gamma \vdash \Delta, P \lor Q}(\mbox{\textbf{LK}-}\vdash \lor )
\qquad \qquad
\frac {X \vdash P , Q}
{ X \vdash P \lor Q}(\mbox{\dRA-}\vdash \lor )
$$
Intuitively, to use this display calculus rule
downwards on a sequent $X' \vdash Y'$,
everything other than $(P,Q)$
%that ``belongs on the right of $\vdash$''
must be moved into the complex structure $X$ on the left of $\vdash$,
thereby displaying the structure $(P,Q)$ as the whole of the right-hand side.
There are rules which enable any given structure to be displayed.
After the rule application we can ``undisplay'' the moved material back to its
original position (reversing the display steps used),
so that the sole purpose of this rule is to ``rewrite'' some
$(P,Q)$ to $P \lor Q$ somewhere inside $Y'$.
See \cite{gore-relalg} for a full account.
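The ``rewriting'' reading of the rule can be made concrete with a toy model (this is purely illustrative and is not part of any implementation discussed in this paper): structures are nested tuples, with \texttt{("comma", X, Y)} standing for $X , Y$ and \texttt{("or", P, Q)} for $P \lor Q$; the rule fires only when the whole right-hand side is displayed as $(P , Q)$.

```python
# Toy model (not the dRA implementation): structures as nested tuples.
# ("comma", X, Y) stands for X , Y and ("or", P, Q) for P v Q.

def apply_or_right(sequent):
    """Apply the display-calculus (|- v) rule downwards:
    X |- (P , Q)  becomes  X |- P v Q, but only when the whole
    right-hand side is displayed as the structure (P , Q)."""
    left, right = sequent
    if isinstance(right, tuple) and right[0] == "comma":
        _, p, q = right
        return (left, ("or", p, q))
    raise ValueError("right-hand side is not displayed as (P , Q)")

# X |- P , Q  becomes  X |- P v Q
print(apply_or_right(("X", ("comma", "P", "Q"))))
```

The guard in the function mirrors the display property: before the rule can be used, display rules must first have moved all other material to the left-hand side.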
Isabelle is an automated proof assistant \cite{isabelle-ref}.
Its meta-logic is an intuitionistic typed higher-order logic,
% and typed $\lambda$-calculus,
sufficient to support the built-in inference steps of
higher-order unification and term rewriting.
Isabelle accepts inference rules of the form
``from $\alpha_1, \alpha_2, \ldots, \alpha_n, \mbox{ infer } \beta$''
where the $\alpha_i$ and $\beta$
are expressions of the Isabelle meta-logic,
or are expressions using a new syntax, defined by the user, for some
``object logic''. % $\mathcal F$.
An Isabelle user can encode a particular calculus
$\mathcal C _L$ for some logic $L$ as an ``object logic'' by using
these rule templates to encode the set of inference rules of
$\mathcal C _L$.
%This is supplemented with the user's choice of
%object logic in which to perform proofs.
For example, if $\mathcal C _L$
is a natural deduction calculus, then
the $\alpha_i$ and $\beta$ will be formulae of $L$,
whereas if $\mathcal C _L$ is a sequent calculus, then
the $\alpha_i$ and $\beta$ will be sequents of $\mathcal C _L$.
Such an encoding is called an ``object logic'', even though it is
a (typically natural deduction or sequent) \emph{calculus} for some
particular logic $L$.
In practice most users would build on one of the
comprehensive ``object logics'' already supplied \cite{isabelle-object}.
Twelf \cite{pfenning-schurmann-twelf} is an implementation of the
Edinburgh Logical Framework (LF) \cite{harper-honsell-plotkin-framework},
which is based on a typed $\lambda$-calculus with dependent types.
Logics are represented using a \emph{judgements as types} principle,
where each judgement
of the form $\Gamma, x : Q \vdash y : P$
is identified with the type of its proofs.
Twelf also accepts inference rules of the form
``from $\alpha_1, \alpha_2, \ldots, \alpha_n, \mbox{ infer } \beta$''.
But now, the rule is expressed as the declaration of a
$\lambda$-calculus term of type
$\alpha_1 \to \alpha_2 \to \ldots \to \alpha_n \to \beta$.
Once again, if the calculus we are trying to capture is a natural deduction
calculus, then the $\alpha_i$ and $\beta$ will be formulae (types),
but if the calculus is a sequent
calculus, then the $\alpha_i$ and $\beta$ will be sequents (types).
In an earlier paper \cite{dawson-gore-dra-jelia98},
we described the Isabelle implementation of \dRA,
a display calculus for relation algebra \cite{gore-relalg},
as an object logic of Isabelle.
In that paper we described how we had
used the implementation to prove results comparing
alternative formalisations of relation algebra
from a proof-theoretic perspective.
However we had not proved those results themselves \emph{in} Isabelle,
in the sense we now explain.
Suppose logic $L$ is
a set of formulae which we regard as valid with respect to some semantics,
and $\mathcal P$ and $\mathcal Q$ are two calculi for $L$
(each consisting of axioms and inference rules).
Suppose also that we have mechanical implementations of
$\mathcal P$ and $\mathcal Q$.
Then one can use the implementation of
$\mathcal P$ to derive in $\mathcal P$ every axiom and rule of $\mathcal Q$.
One could then go outside the mechanical system and argue
(by induction on the size, or structure, of a $\mathcal Q$-derivation)
that therefore every $\mathcal Q$-derivable object
(typically a formula or sequent) is also
$\mathcal P$-derivable.
In \cite{dawson-gore-dra-jelia98} we used the Isabelle
\dRA\ implementation
to show the soundness of certain other calculi for relation algebra.
These results were proved in the manner described in the previous paragraph.
An alternative and, when one's aim is the mechanical proof of
proof-theoretic results, better approach would enable
the inductive argument above to be carried out in the mechanical
theorem-prover.
This would require a different style of implementation of the
calculi $\mathcal P$ and $\mathcal Q$
in the prover, so as to enable reasoning about
the shape and form of
$\mathcal P$-derivations and $\mathcal Q$-derivations.
Typically this would require modelling, in the theorem prover,
the ``tree'' of steps
used in a derivation and the fact that each step is an instance of a rule
of the calculus
$\mathcal P$ or $\mathcal Q$;
this in turn requires modelling the form of
the statement of an inference rule of $\mathcal P$ or $\mathcal Q$,
and a method of obtaining particular substitutional instances of such
rules of inference.
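The kind of object-level machinery a deep embedding requires can be sketched as follows. This is a hypothetical toy model in Python (the rule names, patterns and representation are invented for illustration, and do not come from our implementation): a derivation tree records, at each node, which named rule was used and under which substitution, and a checker verifies that every node is a genuine instance of its rule.

```python
# Hypothetical sketch of deep-embedding machinery: derivations as explicit
# trees whose every node must be an instance of a named rule.

RULES = {
    # rule name -> (premise patterns, conclusion pattern);
    # metavariables are plain strings, sequents are pairs (lhs, rhs)
    "id":  ([], ("P", "P")),
    "orR": ([("X", ("comma", "P", "Q"))], ("X", ("or", "P", "Q"))),
}

def instantiate(pattern, subst):
    """Substitute metavariables (strings) in a pattern."""
    if isinstance(pattern, str):
        return subst.get(pattern, pattern)
    return tuple(instantiate(p, subst) for p in pattern)

def check(tree):
    """A derivation tree is (rule_name, subst, conclusion, subtrees).
    Check that every node is a substitutional instance of its rule and
    that each subtree derives the corresponding instantiated premise."""
    name, subst, concl, subs = tree
    prems, c = RULES[name]
    if instantiate(c, subst) != concl or len(prems) != len(subs):
        return False
    return all(instantiate(p, subst) == s[2] and check(s)
               for p, s in zip(prems, subs))

leaf = ("id", {"P": "a"}, ("a", "a"), [])   # one-step derivation of a |- a
print(check(leaf))
```

The point of the sketch is that rule statements, substitution and the tree of steps are all explicit data, so an inductive argument over derivations can itself be formalised.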
The terms ``shallow embedding'' and ``deep embedding''
are often used to distinguish these styles of implementation
\cite{boulton-et-al-hdl}.
Thus our work in \cite{dawson-gore-dra-jelia98} (see Section \ref{pure})
is an example of a shallow embedding, while the work reported
in Section \ref{hol} is a deep embedding.
Mikhajlova and von Wright \cite{mikhajlova-vonwright-isomorphism}
used a deep embedding
in their comparison of classical first-order logic proof systems.
%and substitution of a term for a variable in it.
Theirs is the only other example of a deep embedding of a logical calculus
known to us.
Other metamathematical results have also been mechanised
-- see, e.g., \cite{shankar}.
% Providing both shallow and deep embeddings leads to the possibility of
% linking the two implementations to give an efficient theorem prover
% for \dRA, in the shallow embedding, but making use of proof-theoretic
% results proved in the deep embedding.
% We would then have a theorem prover based directly on machine-checked proof
% theory.
% The only other example
% of a deep embedding of a logical calculus of which we know is found in
% Mikhajlova and von Wright \cite{mikhajlova-vonwright-isomorphism},
% in which they compare classical first-order logic calculi.
% % We contrast our approach with theirs in Section \ref{fwrw}.
In this paper we compare methods of embedding \dRA\ in various
logical frameworks.
We explored the possibility of implementing a ``deep embedding''
in Twelf, Isabelle/CTT and Isabelle/HOL.
In the subsequent sections we describe the original Isabelle/Pure
implementation and this further work.
\section{The Isabelle/Pure implementation} \label{pure}
Isabelle is an interactive computer-based proof system,
described in \cite{isabelle-ref}.
Its capabilities include higher-order unification and
term rewriting.
It is written in Standard ML \cite{ML-definition};
the user can interact with it by
entering further ML commands, and can program complex proof sequences in ML.
As stated previously, the basic Isabelle constructs
available to a user include inference rule templates of the form
``from $\alpha_1, \alpha_2, \ldots, \alpha_n, \mbox{ infer } \beta$''.
These can be used ``forwards'', to obtain $\beta$ from the $\alpha_i$,
or ``backwards'', to reduce a given goal $\beta$ to subgoals
$\alpha_1, \alpha_2, \ldots, \alpha_n$.
Isabelle provides a number of basic operations for
backwards proof and proof-search
(\emph{tactics}\index{tactic|textbf}), as well as
\emph{tacticals}\index{tactical|textbf}
for combining these.
%Isabelle automatically keeps track of the subgoals generated by this backward
%search procedure and the user can choose to work on any subgoal.
%The user can also write complex proof tactics in ML;
%many complex tactics and tacticals are supplied with
%Isabelle or its standard object logics.
Isabelle also supports forward proof.
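The backwards use of a rule can be pictured with a small Python sketch (illustrative only; Isabelle's actual resolution uses higher-order unification, which is elided here): the prover keeps a list of open subgoals, and applying a rule ``from $\alpha_1, \ldots, \alpha_n$ infer $\beta$'' backwards to a goal matching $\beta$ replaces that goal by the subgoals $\alpha_1, \ldots, \alpha_n$.

```python
# Illustrative sketch (not Isabelle code): backward proof as subgoal
# replacement.  A rule is (premises, conclusion); unification is elided,
# so the chosen goal must equal the conclusion exactly.

def resolve_backwards(goals, i, rule):
    """Replace goal i by the rule's premises, keeping the other goals."""
    premises, conclusion = rule
    if goals[i] != conclusion:
        raise ValueError("rule conclusion does not match goal")
    return goals[:i] + list(premises) + goals[i + 1:]

# reduce the goal B using the rule "from A1, A2 infer B"
print(resolve_backwards(["B"], 0, (["A1", "A2"], "B")))
```

Tactics correspond to such goal transformations, and tacticals combine them (in sequence, by alternation, by repetition, and so on).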
The inference rules collectively form a simple meta-logic:
an intuitionistic typed higher-order logic.
%typed $\lambda$-calculus.
There are three logical operators:
\texttt{==>} (implication, or deducibility),
\texttt{==} (equality, or substitutability), and
\texttt{!!} (universal quantification).
For example,
\mbox{\texttt{[| A1; A2 |]~==>~B}},
% which is an abbreviation for
% \texttt{A1 ==> (A2 ==> B)},
is the Isabelle representation for
``from $\alpha_1, \alpha_2, \mbox{ infer } \beta$''.
These operators satisfy certain properties arising from their intended meanings.
For example, since %the Isabelle proposition
\mbox{\texttt{[| A1; A2 |]~==>~B}}
means that $\beta$ can be deduced from
$\alpha_1$ and $\alpha_2$,
% and can be proved by deducing $\beta$
% from $\alpha_1$ and $\alpha_2$,
% we have that
% \mbox{\texttt{[|~(A~==>~B);~A~|] ==> B},}
% and that
it is then possible to deduce
$\beta$ from $\alpha_2$ and $\alpha_1$,
i.e., \mbox{\texttt{[| A2; A1 |] ==> B}}.
% Indeed, using Isabelle, we can prove a new rule that states
% ``if (from $\alpha_1, \alpha_2, \mbox{ infer } \beta$) is a rule,
% then (from $\alpha_2, \alpha_1, \mbox{ infer } \beta$) is a rule'',
% as shown below:
% \begin{center}
% \texttt{[| [| A1; A2 |] ==> B |] ==> ([| A2; A1 |] ==> B)}.
% \end{center}
%
%That is, \texttt{==>} can be seen as the analogue of the horizontal bar
%used to state rules in a natural deduction system.
% in which the order of premises is not significant.
Likewise, since
\texttt{A == B}
means that \texttt{A} can be replaced by \texttt{B}, or vice versa,
\texttt{==} is reflexive, symmetric and transitive.
This meta-logic is known as Isabelle's Pure theory.
In most cases the user would augment this by defining
additional constants to capture the syntax of the object logic.
For example, to capture set theory, we would add a constant \texttt{mem}
(say) to stand for the $\in$ symbol, while sequents would require
constants `\texttt{|-}' and `\texttt{,}'.
% an object logic containing operators and inference rules;
% these would usually include all the usual logical operators
% and inference rules,
% and often much more, such as operators and rules for set theory.
Several such object logics are packaged with the Isabelle distribution
\cite{isabelle-object}.
Once these syntactic elements of the object logic are in place, we can build
object-logic expressions into the $\alpha_i$ and $\beta$.
For example, in \dRA, the
$(\mbox{\dRA-}\vdash \lor )$ rule from Section 1 uses the
meta-logical operator \texttt{==>} discussed above,
and could be entered as
\begin{center}
\texttt{"\$X |- P, Q ==> \$X |- P v Q"}
\end{center}
where the \texttt{\$} distinguishes \dRA-structures such as \texttt{X}
from \dRA-formulae such as \texttt{P}.
This rule is an instance of the form
``from $\alpha \mbox{ infer } \beta$''.
In \cite{dawson-gore-dra-jelia98}
\dRA\ was implemented in Isabelle directly on top of the Pure theory,
so that the only inference rules available are those of \dRA.
%(Thus you can't write $(P \vdash Q) \lor (R \vdash S)$, for example,
%as you could if \dRA\ were built on top of Isabelle/HOL).
As far as the Isabelle meta-logic is concerned, we can think of its atomic
formulae as the formulae (for \dRA, the sequents) of the object-logic.
In this case these are \dRA-sequents of the form $X \vdash Y$, built using
object-logic ``connectives''
`$\vdash$', `,', `;', `*', `$\bullet$', `$\land$', `$\lor$', etc.
These can be combined into more complex Isabelle
propositions using the Isabelle
meta-logic connectives \texttt{==>}, \texttt{==} and \texttt{!!}.
% Thus each Isabelle proposition
% is either a \dRA\ sequent $X \vdash Y$, or is
% made up of sequents combined into inference rules
% using the three Isabelle/Pure logical operators.
It follows that expressions such as
\texttt{\$X |- (P ==> Q)} or \texttt{A v (B ==> C)}
are not possible.
Note that the $\alpha_i$ and $\beta$ as discussed above would most commonly
simply be \dRA-sequents, but could be ``complex'' Isabelle propositions,
such as \texttt{\$X |- P ==> \$Y |- Q}.
We may declare axioms and inference rules of the \dRA\ calculus to Isabelle.
For example, the sequent $p \vdash p$ is an initial sequent (axiom) in \dRA;
thus it is declared in Isabelle as a rule, \texttt{P |- P},
with no premises.
% (For a rule with no premises, this notation is used rather than
% \texttt{[| |] ==> P |- P}.)
Similarly, each rule of \dRA\ is declared as an Isabelle rule, as in the
example of the $(\vdash \lor)$ rule above.
% This becomes an Isabelle theorem containing the metalogical
% implication operator \texttt{==>}.
%
% These, and further derived sequents and rules, are called Isabelle
% ``theorems''.
%Isabelle theorems may be object logic formulae or may
%involve Isabelle's metalogical operators.
% Thus, in the Isabelle implementation of \dRA, an Isabelle theorem
% may be either a simple sequent of \dRA,
% or a sequent \emph{rule}.
% Whereas a sequent derivable in
% \dRA\ becomes an Isabelle theorem of the form \texttt{X |- Y},
% a sequent rule becomes an Isabelle theorem containing also
% the operators \texttt{==>} (implication) or \texttt{==} (equality).
That this is a shallow embedding is reflected in the fact that the
horizontal bar of sequent rules becomes the Isabelle \texttt{==>} operator.
In a deep embedding, as in Section \ref{hol}, the
horizontal bar also becomes an object-level constant.
\comment{
% For example, the (cut) rule (shown on \page{id}) appears as
% {\par\noindent\centering
% \texttt{"[| \$?X |- ?A; ?A |- \$?Y |] ==> \$?X |- \$?Y"}
% \par\noindent}
For example, the ($\vdash \lor $) rule (shown in \S\ref{|-v}) appears as
$$\texttt{"\$?X |- ?A, ?B ==> \$?X |- ?A v ?B"}$$
in Isabelle.
(The `\verb:?:' denotes a variable which can be instantiated,
and the `\verb:$:' denotes a structural variable or constant.)
The variant $A \vdash A$ of the (id) rule (see \S\ref{otm})
appears as \texttt{"?A |- ?A"}.
The bi-directional \texttt{dcp} rule, shown in \fig{sdr},
appears as \texttt{"\$?X |- \$?Y == * \$?Y |- * \$?X"}.
}
In this shallow embedding, access to and manipulation of the shape
of \dRA\ constructs (i.e., sequents, structures, formulae)
and derivations was provided only at the ML level.
For example, in \cite{dawson-gore-dra-jelia98}
we described having programmed a procedure
to perform cut-elimination for \dRA.
The input to this was a \dRA\ derivation (represented
as a tree of sequent rule instances),
and the output was a derivation not containing
an instance of the cut rule.
This required examining the shape of a derivation (represented as a tree
of \dRA\ rule instances) and of the Isabelle terms representing sequents.
While the shape of an Isabelle term is easily accessible (by ML code),
% a derivation step is represented as a function.
% a complex derivation is the composition of functions representing the
% elementary derivation steps.
% Therefore, to look at the shape of a derivation (for example, to ask
% what rule was used in the final \dRA\ inference step) involved changing the
we had to change the Isabelle code somewhat,
so as to record the elementary \dRA\ rules used in a derivation,
and to construct a derivation-tree which could be manipulated.
% while a derivation was being performed.
The aim of the work described in this paper is to perform the cut-elimination
proof entirely within a theorem prover
(rather than writing a cut-elimination program in ML).
We therefore needed to model derivation trees,
inference rules, substitution in them, and so on,
% set of elementary inference rules of the calculus
within the language of the theorem prover,
as discussed in \S\ref{intro}.
% so as to be able to say (for example)
% that every rule in a given derivation tree
% is an instance of one of the inference rules in that calculus.
\comment{
In \cite{dawson-gore-dra-jelia98} we also described using the Isabelle
\dRA\ implementation
to show the soundness of certain other calculi for relation algebra.
In these cases, we in fact showed that all the axioms and
rules of the other calculi were derivable in \dRA.
The argument that followed, namely that
therefore an entire derivation in one of those other calculi could have
been performed in \dRA, was outside the scope of the mechanical
theorem prover.
Again, doing this within the theorem prover would
have required modelling the shape of
sequents or formulae of the calculus in question, of derivations, and of
the rules used in them.
This was done by
Mikhajlova and von Wright \cite{mikhajlova-vonwright-isomorphism}
in their comparison of classical first-order logic proof systems.
}
% But note that their work assumes that the second and third
% clauses of the definitions given at
% \cite[p.~302]{mikhajlova-vonwright-isomorphism} are sound,
% and we suspect one is not.
% We are currently trying to show whether their definitions
% are derivable using their previous definitions.
\section{Dependent Type Theory implementations}
\subsection{Introduction}
In a dependent type theory,
a type may be parametrised not only over a type variable,
but also over a term variable.
This, coupled with the \emph{judgements as types} principle
(where a judgement is identified with the type of its proofs),
enables us to express the derivation of a derived rule of a display calculus.
A simpler example of a Curry--Howard isomorphism is that
between the simply-typed $\lambda$-calculus and
a natural deduction calculus \textsf{NDInt}
for intuitionistic propositional logic.
Let $A$ and $B$ be formulae, and $\pi$ be a derivation of $A$
in \textsf{NDInt}.
Suppose also that $A \to B$ is derivable
in \textsf{NDInt},
and consider an \textsf{NDInt}-derivation of $A \to B$
%in the Natural Deduction style,
where one assumes $A$ and derives $B$.
That is, it is a derivation of $B$ in which $A$ is regarded as true;
at the points where $A$ is used, it would contain some notation $\xi$ meaning
``true by assumption''.
Let $\rho(\xi)$ be this derivation of $B$.
%where $x$ is a place-holder for some
%notation meaning ``true by assumption'',
%as will appear in the derivation of $B$ wherever $A$ is used.
So if we substitute $\pi$ (the derivation of $A$) for $\xi$ in $\rho(\xi)$,
we get a derivation $\rho(\pi)$
of $B$ which does not rely on $A$ as an assumption.
Then, using $\pi : A$ to mean ``$\pi$ is a derivation of $A$'',
we can write the rule
\begin{center}
\slrule { \pi : A \qquad \rho(\xi) : A \to B } { \rho(\pi) : B }
\end{center}
Erasing the annotations $\pi$, $\rho$ and `:'
gives the well-known $(\to E)$ natural deduction rule for intuitionistic logic;
the rule as it stands also shows how to derive $B$.
As indicated above, we can equally think of $\rho(\cdot)$
as a derivation of $A \to B$
or as a function which takes,
as argument, a derivation of $A$ such as $\pi$, and returns a derivation of $B$.
If we also think of $A$ as the type ``derivations of proposition $A$''
(and similarly for $B$)
we get the functional type-theory interpretation,
under which $A \to B$ means the type of functions which take an argument
of type $A$ and return a result of type $B$.
(Conveniently, the same symbol `$\to$'
is conventionally used for both purposes.)
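This reading, in which derivations are values and $(\to E)$ is function application, can be illustrated in miniature (a toy Python model with invented notation; dynamic tags stand in for the types of a typed $\lambda$-calculus):

```python
# Curry-Howard in miniature (illustrative): a derivation of A is a value
# tagged with the formula it proves; a derivation of A -> B is a function
# from derivations of A to derivations of B, so (-> E) is application.

def derivation(formula, evidence):
    return {"proves": formula, "evidence": evidence}

pi = derivation("A", "axiom")          # pi : A

def rho(d):                            # rho : A -> B
    assert d["proves"] == "A"          # rho consumes a derivation of A ...
    return derivation("B", ("from", d["evidence"]))  # ... and yields one of B

conclusion = rho(pi)                   # rho(pi) : B, i.e. (-> E)
print(conclusion["proves"])
```

Substituting $\pi$ for the assumption marker $\xi$ in $\rho(\xi)$ corresponds exactly to the function call \texttt{rho(pi)}.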
Dependent types extend what can be done in the simply-typed
$\lambda$-calculus in two distinct ways.
Firstly, consider a type $I$ of individuals, and a parametrised type
$A(i)$; that is, for each individual $i \in I$, there is a type $A(i)$.
Then the dependent type $A$ corresponds to a predicate $A$ on $I$,
and the type $A(i)$ to the proposition $A(i)$.
\comment{
Consider a function $\rho$ whose domain is $I$, such that for each $i \in I$,
$\rho(i) \in A(i)$.
The type of such a function is written $\prod i : I . A(i)$
(denoting how the \emph{type} of $\rho(i)$ \emph{depends}
on the \emph{value} of $i$).
Now let $A$ be a predicate, and therefore each $A(i)$ be a proposition
(taking the value true or false),
and let each $\rho(i)$ be a derivation of $A(i)$.
That is, $\rho$ is a function which, for each individual $i$,
gives a derivation of $A(i)$; we can think of such a $\rho$ as a derivation of
$\forall i A(i)$.
At the same time, if we think of each
type $A(i)$ as the type ``derivations of proposition $A(i)$'',
then the functions $\rho$ of type
$\prod i : I . A(i)$ are the derivations of $\forall i A(i)$.
}
Secondly, we install an object logic at the level of terms,
so that the expressions (formulae or sequents)
of the object logic become terms of the
dependently-typed $\lambda$-calculus.
That is, we can install a syntax for terms using the syntax
of the chosen object logic, and declare a type constructor $P(.)$,
so that for each term $t$, $P(t)$ will
mean ``derivation of expression $t$''.
This was our approach for modelling \dRA.
% (though instead of $P(t)$, we used $X \vdash Y$,
% the two arguments $X$ and $Y$ being
% the left and right-hand sides of a sequent).
Regarding an expression (formula or sequent) $t$
as derivable when we have a term of type $P(t)$
has certain consequences.
For example, since from functions of type
$X \to Z$, $U \to V \to W$ and $S \to S \to T$
we can construct functions of type
$X \to Y \to Z$, $V \to U \to W$ and $S \to T$,
we unavoidably have weakening, exchange and contraction.
Note, however, that in our formulation of \dRA, these hold at the
meta-logic level, not in the object logic.
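The three constructions above are just $\lambda$-term combinators; as a quick illustration (Python stands in for the typed $\lambda$-calculus, with currying written explicitly):

```python
# Why weakening, exchange and contraction come for free when derivability
# of t means "there is a term of type P(t)": each is a lambda combinator.

def weaken(f):      # from f : X -> Z       build  X -> Y -> Z
    return lambda x: lambda y: f(x)

def exchange(f):    # from f : U -> V -> W  build  V -> U -> W
    return lambda v: lambda u: f(u)(v)

def contract(f):    # from f : S -> S -> T  build  S -> T
    return lambda s: f(s)(s)

sub = lambda u: lambda v: u - v     # a sample two-argument function
print(exchange(sub)(1)(10))         # sub(10)(1) = 9
print(contract(sub)(5))             # sub(5)(5)  = 0
```

Since these combinators always exist, any judgement-as-type encoding of this kind admits the corresponding structural rules at the meta-level, whether or not the object calculus has them.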
Two theories based on dependent types are the
Edinburgh Logical Framework (LF) \cite{harper-honsell-plotkin-framework}
and Constructive Type Theory (CTT) \cite{martin-lof-CTT}.
\subsection{The Twelf implementation}
Twelf \cite{pfenning-schurmann-twelf} is an implementation of the
Edinburgh Logical Framework (LF) \cite{harper-honsell-plotkin-framework}.
This is based on a typed $\lambda$-calculus with dependent types.
Twelf is written in Standard ML, and the user can interact with the system
using its ``ML Interface'', but only a limited range
of functions is available.
Figure \ref{fig:twelf-source}
is an extract of our Twelf source file.
\begin{figure}[htbp]
\normalsize
\begin{center}
\begin{tabular}{c@{\quad}l}
1 & \texttt{struct : type. } \\[-1.5ex]
2 & \texttt{|- : struct -> struct -> type.} \\[-1.5ex]% \%infix none 10 |-.} \\
3 & \texttt{* : struct -> struct.} \\%\%prefix 200 *.} \\
4 & \texttt{\% Display postulates } \\[-1.5ex]
5 & \texttt{sA : (* X |- Y) -> (* Y |- X).} \\[-1.5ex]
6 & \texttt{sS : (X |- * Y) -> (Y |- * X).} \\
7 & \texttt{\% Derived structural rules} \\[-1.5ex]
8 & \texttt{ssASl = [D] sA (sS D).} % \% (X |- * * Y) -> (* * X |- Y)}
\end{tabular}
\end{center}
\caption{Sample Twelf source code} \label{fig:twelf-source}
\end{figure}
Line~1 declares
\texttt{struct} as a new type to represent the \dRA\ structures.
Line~2 declares
\texttt{|-} as a binary type constructor,
taking two \emph{term} arguments of type
\texttt{struct} and returning a \emph{type}.
Following the description above,
we should declare
\texttt{sequent} as a new type to represent \dRA\ sequents,
and \texttt{P} as a function
taking a \emph{term} argument of type
\texttt{sequent} and returning a \emph{type}, thus:
\begin{quote}
\texttt{sequent : type. } \\
\texttt{|- : struct -> struct -> sequent. } \\% \%infix none 10 |-.
\texttt{P : sequent -> type.}
\end{quote}
The actual code in Figure \ref{fig:twelf-source} abbreviates this
by removing the type constructor
\texttt{P} and (in effect) changing \texttt{|-}
from a term constructor to a type constructor.
Thus, for structures \texttt{X} and \texttt{Y},
the construct
\texttt{X |- Y} represents derivations of the sequent $X \vdash Y$.
(We did this only to simplify the printed output.)
Line~3 declares
\texttt{*} as a unary structure operator.
Lines 4 and 7 are comments.
In lines 5 and 6
the terms \texttt{sA} and \texttt{sS} are declared with the types
%showing that they denote
that represent derivations of the \dRA\ rules
(see \cite{gore-relalg}) shown below:
\begin{center}
\slrule { *X \vdash Y } { *Y \vdash X } (\ttdef{sA})
\hspace{20mm}
\slrule { X \vdash *Y } { Y \vdash *X } (\ttdef{sS})
\end{center}
We can think of these declarations as assertions that there are
derivations (which we name \texttt{sA} and \texttt{sS}) of these rules,
or as defining these as ``primitive'' rules (one-step derivations).
Now, treating \texttt{sA} and \texttt{sS} (which stand for
``star in the antecedent'' and ``star in the succedent'')
as functions each
taking a derivation of a sequent
to a derivation of another sequent, they can be composed
for some given sequent $U \vdash * * V$
as follows
$$
\stackrel {\mbox{derivation of}}{U \vdash * * V} \quad
\stackrel{\texttt{sS}} \longrightarrow \quad
\stackrel {\mbox{derivation of}}{* V \vdash * U} \quad
\stackrel{\texttt{sA}} \longrightarrow \quad
\stackrel {\mbox{derivation of}}{* * U \vdash V}
$$
The definition of \texttt{ssASl} on line 8 does this; the notation means
$\lambda D.\ \textit{sA} \ (\textit{sS} \ D)$.
Twelf computes the type of \texttt{ssASl},
giving $(S1 \vdash * * S2) \to (* * S1 \vdash S2)$.
Note that $X, Y, S1$ and $S2$ are variables ($S1$ and $S2$ being names chosen
by Twelf).
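The composition described above can be sketched outside Twelf; the
following minimal Python model (all names our own, with sequents as
ordinary pairs rather than dependently typed derivations) shows
\texttt{sA}, \texttt{sS} and their composite acting on sequents:

```python
# A hedged sketch, in Python rather than Twelf: sequents are pairs of
# structure expressions, and the display postulates sA and sS are
# modelled as functions from sequents to sequents.

def star(x):
    return ("*", x)

def sA(seq):
    # (* X |- Y)  yields  (* Y |- X)
    (op, x), y = seq
    assert op == "*"
    return (star(y), x)

def sS(seq):
    # (X |- * Y)  yields  (Y |- * X)
    x, (op, y) = seq
    assert op == "*"
    return (y, star(x))

def ssASl(seq):
    # the composite sA . sS : (X |- * * Y) -> (* * X |- Y)
    return sA(sS(seq))
```

For instance, applying \texttt{ssASl} to the pair modelling
$U \vdash **V$ yields the pair modelling $**U \vdash V$, mirroring the
two-step derivation displayed above.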
In a manner akin to Prolog \cite{prolog-standard},
Twelf allows one to make a query such as
\begin{center}
\texttt{
\_ssASl : (X1 |- * * Y1) -> (* * X1 |- Y1).}
\end{center}
\noindent
which searches for a term of the specified type (i.e., searches for
a derivation of the stated rule)
and instantiates \texttt{\_ssASl} with that term.
In this example, because Twelf allows substituting for
\texttt{X1} and \texttt{Y1} in the query as well as for the variables
\texttt{X} and \texttt{Y}
in the declarations of \texttt{sA} and \texttt{sS},
the search proceeds along an infinite branch in the wrong direction.
On the other hand, the code
\begin{verbatim}
%theorem
ssASl_th : exists {DerivedRule:{S1:struct} {S2:struct}
(S1 |- * * S2) -> (* * S1 |- S2)} true.
%prove 2 {} (ssASl_th DerivedRule).
\end{verbatim}
\noindent
successfully uses Twelf's theorem prover to find the derivation, returning
\begin{verbatim}
/ssASl_th/: ssASl_th ([S1:struct] [S2:struct]
[D:S1 |- * * S2] sA (sS D)).
\end{verbatim}
\noindent
(Here the variables \texttt{S1} and \texttt{S2} need to be explicitly
abstracted over -- in the earlier code they were free variables).
Twelf's theorem prover is not extensively documented,
and is stated to be under active development,
with the proof search component expected to undergo major changes.
The proof search strategy can be controlled by the user to only a limited
extent, whereas we have found that considerable user control
(using tactics differing from proof to proof) is generally
necessary in our work.
We found that certain aspects of Isabelle which we very much appreciated
are absent from Twelf.
Isabelle offers a substantial number of user ``commands'', in the form
of documented ML functions, which enable the user to programme
proof procedures, or to examine the shape of a term or type expression.
For example, one might write a tactic which explicitly examines the
current proof state and then decides which of several tactics to apply.
There are many expressive ``glue'' functions for combining tactics
in Isabelle.
Inadequacies in the documentation, or in the selection of functions
documented, can often be circumvented by looking at the source code,
which can also be a great help to users programming their own tactics.
At this stage,
Twelf does not appear to offer comparable capabilities to the user.
Thus, although the theorem prover did successfully find the derivation above,
we felt uncertain that we could use it to find
all required derivations, primarily because of the lack of user control over a
proof.
\subsection{The Cut-Elimination Theorem in Twelf}
At this point we should refer to the proof of a cut-elimination theorem,
using Twelf, described in \cite{pfenning-structural-cut-elimination}.
This uses a rather ingenious representation of sequents:
a proof of a traditional Gentzen
sequent $A \vdash B$ is represented as
a term of the type \texttt{neg A -> pos B -> @}, and
a cut-free proof of it as
one of the type \texttt{neg A -> pos B -> \#}, as explained in
\cite{pfenning-structural-cut-elimination}.
The two rules shown below left
are both represented as the type shown on the right:
\begin{center}
\slrule {\Gamma, A \vdash \Delta} {\Gamma, A \& B \vdash \Delta}
\quad
\slrule { A \vdash } { A \& B \vdash }
\quad
\texttt{(neg A -> \#) -> (neg (A and B) -> \#)}
\end{center}
In fact, the second rule shown is directly represented by
the type shown, whereas the first rule would be directly represented by
\begin{quote}
\texttt{(neg $\Gamma$ -> pos $\Delta$ -> neg A -> \#) -> \\
\mbox{} \quad (neg $\Gamma$ -> pos $\Delta$ -> neg (A and B) -> \#)}
\end{quote}
However, the existence of a term (function) of the simpler type
trivially implies the existence of a term (function) of this more
complex type.
Further constructed types represent stages of the transformation of
a proof into a cut-free proof, and a termination checker checks that the
function performing the entire transformation from a proof to a cut-free proof
terminates.
However, this termination checker only checks that the function
would not run forever.
It does not check that the function terminates with a cut-free derivation:
it does not complain if the code is run with some cases deleted.
This work on the cut-elimination theorem is
based on an ingenious representation of the problem that fits into
Twelf;
even so, getting a proof of the cut-elimination theorem from it relies on
the heuristically-based termination checker and upon a manual check that
all possible cases for the rules used just above the ``cut''
have been covered.
On the other hand, the Twelf type-checker proves that each individual step
is correct, in that the changed derivation in fact does derive
the sequent which it purports to derive.
Thus this proof of cut-elimination lies between our previous work
(where we just programmed a cut-elimination function in ML)
and a fully formal proof.
\subsection{The Isabelle/CTT implementation}
We then turned to the \texttt{CTT} theory of Isabelle, since
it is based on a logic which is very similar to
that of Twelf, in that it includes dependent types.
%
We implemented \dRA\ similarly to its implementation in Twelf,
so that we had, for example, (omitting premises of the form
\texttt{?X : str})
%, which were fairly ubiquitous)
\begin{quote}
\texttt{"sA : * ?X |- ?Y --> * ?Y |- ?X" : thm} \\
\texttt{"sA oo sS : ?X |- * * ?Y --> * * ?X |- ?Y" : thm}
\end{quote}
\comment{
\begin{center}
\begin{tabular}{l}
"sA : * ?X |- ?Y --> * ?Y |- ?X" : thm \\
"sA oo sS : ?X |- * * ?Y --> * * ?X |- ?Y" : thm
\end{tabular}
\end{center}
}
\noindent
where \texttt{oo} denotes function composition.
Tactics were written to determine the type of a term
(i.e., to determine the derived rule obtained by a given combination of rules),
by solving a goal such as \texttt{sA oo sS : ?t}
(which contains the type variable \texttt{?t}).
We explored tactics
to search for a term of a given type (i.e., a derivation of a given
sequent or rule),
by solving a goal such as the one below,
which contains the term variable \texttt{?P}.
\begin{center}
\texttt{"?P : ( X |- * * Y) --> ( * * X |- Y)"}.
\end{center}
The \texttt{CTT} theory of Isabelle is not as
extensively developed as some other
theories are; we found it necessary to prove some general but elementary
theorems.
%\subsection{Dependent type theory -- conclusions}
\subsection{Conclusions}
We experimented with Twelf because it had appeared that we would get
our hands on the proofs
``for free'', in that in the course of deriving a sequent or sequent rule,
we produce a term which embodies the derivation performed.
Given our intention to do a fully mechanised proof of the cut-elimination
theorem (in which we manipulate derivations extensively)
it had seemed that this
feature of Twelf would be useful.
Our work with Isabelle/CTT seemed to indicate that the same things could
be done in it as in Twelf (with some effort to produce appropriate derived
rules and tactics).
However we realised that although we could produce a term
which represented the
derivation and showed the elementary steps from which it is composed,
we could not analyse derivations, \emph{in the logic of the theorem prover},
in the required ways: for example, to ask which rule was used in the final
step of the derivation.
In the light of this, there seemed to be no benefit in pursuing an
implementation using dependent type theory.
We have referred to a proof in Twelf of the
cut-elimination theorem, noting that it was based on
an ingenious representation of the problem that fits into Twelf,
and that it was not a fully formal proof.
Since we are aiming for a formal proof, using techniques which
would apply equally for other proof-theoretic results,
we felt that this cut-elimination proof does not indicate that
dependent type theory would be powerful enough for our needs.
\section{The Isabelle/HOL implementation} \label{hol}
\texttt{HOL} is an Isabelle theory based on the higher-order logic
of Church \cite{church-types} and the HOL system of Gordon
\cite{HOL-introduction}.
Thus it includes quantification and abstraction over higher-order
functions and predicates.
The \texttt{HOL} theory uses Isabelle's own type system and function
application and abstraction (that is, object-level types and functions
are identified with meta-level types and functions).
Isabelle/HOL contains constructs found in functional programming languages
(such as \texttt{datatype} and \texttt{let}) which greatly facilitate
re-implementing a program in Isabelle/HOL, and then reasoning about it.
However, limitations (not found in, say, Standard ML itself)
prevent defining types which are empty or which are not sets,
or functions which may not terminate.
Isabelle/HOL turned out to be entirely suitable for the
deep embedding of \dRA.
The most significant factor leading us to this conclusion
is the facility for \texttt{datatype}
type declarations and associated primitive recursive function definitions,
which was enormously helpful.
For example, we model \dRA\ structure expressions (see
\cite{gore-relalg}) as follows
\begin{verbatim}
datatype structr = Comma structr structr
| SemiC structr structr
| Star structr
| Blob structr
| I
| E
| Structform formula
| SV string
\end{verbatim}
The first six lines correspond to the structure operators of \dRA\ and
the next line is for ``casting'' a formula of \dRA\ as a structure of \dRA.
The last line had no analogue in the shallow embedding, where
we just used an Isabelle variable such as \texttt{?S}
to refer to any \dRA-structure.
In the deep embedding, \texttt{SV "X"} refers to the structure
variable named \texttt{X} appearing in (say) the statement of a rule.
Thus we can now write \texttt{SV ?V}, which Isabelle will match with a
structure expression consisting of a single structure variable,
or \texttt{?S} to match any structure.
We also defined functions (in Isabelle/HOL)
which (for example) find all the variables in a structure expression,
or substitute a given structure for such a variable.
We also wrote such functions for the work described in \S\ref{pure},
but in Standard ML, not in Isabelle.
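To illustrate such functions (this is a hedged Python sketch, not part
of our Isabelle or ML sources), \texttt{structr} values can be modelled
as tagged tuples, with hypothetical analogues \texttt{vars\_of} and
\texttt{subst} of the primitive recursive definitions:

```python
# Hypothetical Python analogue of the structr datatype: structures are
# tagged tuples, e.g. ("Comma", s1, s2), ("Star", s), ("SV", "X"),
# ("I",).  The two functions below mirror those described in the text.

def vars_of(s):
    # collect the names of all structure variables occurring in s
    tag = s[0]
    if tag == "SV":
        return {s[1]}
    if tag in ("Comma", "SemiC"):
        return vars_of(s[1]) | vars_of(s[2])
    if tag in ("Star", "Blob"):
        return vars_of(s[1])
    return set()          # I, E, Structform: no structure variables

def subst(s, name, repl):
    # substitute the structure repl for the variable SV name in s
    tag = s[0]
    if tag == "SV":
        return repl if s[1] == name else s
    if tag in ("Comma", "SemiC"):
        return (tag, subst(s[1], name, repl), subst(s[2], name, repl))
    if tag in ("Star", "Blob"):
        return (tag, subst(s[1], name, repl))
    return s
```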
\comment{
On the other hand, when writing Isabelle tactics which involve analysing
the shape of a rule, we could write (for example)
\texttt{((SV ?V), ?S)} to match any structure expression consisting
of two substructures joined by the comma operator
(infix `\texttt{,}' is alternative notation for prefix \texttt{Comma}),
of which the first must be a structure variable.
This matching and any related substitution is performed by Isabelle.
}
For describing the structure of a derivation we have
\begin{verbatim}
datatype pftree = Pr sequent rule (pftree list)
| Unf sequent
\end{verbatim}
where \texttt{Pr seq rule pts} is a derivation of the sequent \texttt{seq},
the last step of the derivation uses the \dRA\ rule \texttt{rule},
and the premises of that last step are the conclusions of the
derivations in the list \texttt{pts}. \ \
\texttt{Unf} (``unfinished'') means that a leaf is a sequent which is not
an axiom, so is an assumption (or a ``premise'') of the derivation tree as a
whole.
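As a sketch of how such trees are processed (again hypothetical Python,
not our Isabelle/HOL sources), the unfinished leaves of a tree, that is,
the premises of the whole derivation, can be collected recursively:

```python
# Hypothetical Python analogue of pftree: a Pr node records a concluded
# sequent, the rule used in the last step, and the subderivations;
# Unf marks an unfinished leaf (an assumption of the whole tree).

def Pr(seq, rule, pts):
    return ("Pr", seq, rule, pts)

def Unf(seq):
    return ("Unf", seq)

def prems_pt(pt):
    # list the unfinished leaves, left to right
    if pt[0] == "Unf":
        return [pt[1]]
    _, _seq, _rule, pts = pt
    return [p for sub in pts for p in prems_pt(sub)]
```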
\comment{
We define functions which check that such a structure is a valid
derivation (for example, that the premises of an instantiated
rule really are the
conclusions of the trees above, and that each step uses a
legitimate rule).
For example, to list the ``premises'' of the whole derivation tree:
\begin{verbatim}
primrec
"premsPT (Unf seq) = [seq]"
"premsPT (Pr seq rule pts) = premsPTs pts"
"premsPTs [] = []"
"premsPTs (pt # pts) =
(premsPT pt @ premsPTs pts)"
\end{verbatim}
Each datatype declaration generates a number of theorems
which are made available for proofs, and of which some are installed
in rule sets used by the more automated proof tactics.
These theorems include ones expressing that a \texttt{datatype} type
is a disjoint union and that the constructors are 1-1 functions,
and one expressing a structural induction principle.
}
We give examples of results which show how we can reason about
provability in this embedding.
Here \texttt{IsProvableR rules prems concl} means that
sequent \texttt{concl} can be derived from sequents \texttt{prems}
using rules \texttt{rules}, and \texttt{IsProvable rules rule}
means that the conclusion of the
statement of \texttt{rule} may be derived from its premises using
\texttt{rules}.
\begin{verbatim}
val IsProvableR_trans =
"[| IsProvableR rules prems' concl ;
ALL p:prems'. IsProvableR rules prems p |] ==>
IsProvableR rules prems concl"
val IsProvableR_deriv =
"[| ALL rule:rules'. IsProvable rules rule ;
IsProvableR rules' prems concl |] ==>
IsProvableR rules prems concl"
\end{verbatim}
While we think that the corresponding results,
for particular instances of the sequents and derivations involved,
could be proved more easily in Twelf than in Isabelle,
we cannot see how to express the general results in Twelf or Isabelle/CTT,
without, in effect, doing a deep embedding in those systems,
as mentioned in the next section.
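A simplified picture of the \texttt{IsProvableR} relation, restricted
to ground rules (no instantiation of structure variables), can be
sketched in Python; the rule representation, depth bound and names are
our own assumptions, not the Isabelle/HOL definition:

```python
# Hedged sketch of derivability from ground rules: concl is derivable
# from prems using rules if it is one of prems, or is the conclusion of
# a rule all of whose premises are themselves derivable.  A depth bound
# keeps the backward search finite.

def is_provable_r(rules, prems, concl, depth=10):
    if concl in prems:
        return True
    if depth == 0:
        return False
    return any(c == concl and
               all(is_provable_r(rules, prems, p, depth - 1)
                   for p in ps)
               for ps, c in rules)
```

On this simplified model, instances of the transitivity property
expressed by \texttt{IsProvableR\_trans} can be checked directly: if
every member of an intermediate set of sequents is derivable from the
premises, and the conclusion is derivable from that intermediate set,
then the conclusion is derivable from the premises.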
\section{Further work and Related Work}
\label{fwrw}
A proof of the cut-elimination theorem for \dRA\ has been completed,
and will be reported in another paper.
Using the deep embedding into Isabelle/HOL,
we were able to specify and reason about
derivability and derivation trees.
\comment{
for example, we expressed
the requirement that each step
of a derivation uses a substitution instance of a \dRA\
rule found in a given set.
Reasoning about derivability and derivation trees is rather intricate.
We note that in other work of this nature
\cite[p.~302]{mikhajlova-vonwright-isomorphism}
the authors chose to
state, rather than prove,
principles concerning the transitivity of provability,
and provability from derived rules.
(See the second and third
clauses of the definitions given at
\cite[p.~302]{mikhajlova-vonwright-isomorphism}).
We have been able to
prove corresponding rules from our definition of a proof tree.
These are the theorems \texttt{IsProvableR\_trans}
and \texttt{IsProvableR\_deriv} given near the end of Section \ref{hol}.
Proving rather than stating such rules is preferable as it
avoids the possibility of stating an unsound rule.
Interestingly, we doubt the suitability of the requirement for a derived rule
used in \cite{mikhajlova-vonwright-isomorphism},
in the third clause of their definition of \texttt{IsProvable}.
Looking at the code given in the paper, the phrase
\begin{verbatim}
(?inf. CorrectRuleInst {r} inf /\
IsProvable rSet (InfHyps inf) (InfConcl inf))
\end{verbatim}
suggests requiring
that only one (not all) of the instances of the derived rule
\texttt{r} be provable,
which would allow using \texttt{P ==> Q} as a derived rule because
\texttt{False ==> True} is provable.
}
Clearly the Isabelle/HOL logic is
well-equipped for writing specifications which describe
the process of performing proofs by substituting for variables in the
rules of a given logical calculus.
We attribute this to the richness of the logic, containing not only
sum and product types but also recursive datatypes
(as in the programming languages ML and Haskell),
and to the fact that Isabelle/HOL
provides the theorems which characterise each such type.
%It was easy to specify the behaviour of a program
%written in functional programming style, and for reasoning about it.
It was thus a suitable choice for this project.
As one of the referees has pointed out, our final embedding of \dRA\
into Isabelle/HOL is very deep, making use of recursive datatypes and
other features of Isabelle/HOL. The need for such a deep embedding was
discovered in the process of trying out the various shallow embeddings
and finding them unsuitable for our proof-theoretical needs. An
interesting question is whether an equally deep embedding of \dRA\ would
be feasible in Isabelle/Pure, Isabelle/CTT and Twelf. Our personal
view is that this would be possible, but that such an embedding would
be extremely complicated and difficult because we would have to
virtually reimplement essential features
such as recursive datatypes which are not available in Isabelle/Pure,
Isabelle/CTT or Twelf.
\comment{
Another concept requiring further exploration is using the types
of Twelf or Isabelle/CTT to express more than just the sequent
being proved.
For example,
one might let $t : P(X \vdash Y, \rho, n)$ denote that $t$ is a derivation
of $X \vdash Y$, of length at most $n$,
where the last rule used is $\rho$.
Clearly, however, to get all the expressiveness we need in this way
would require far more complex type expressions than in this example:
doing this may be tantamount to doing
a deep embedding without the helpful features of Isabelle/HOL,
such as \texttt{datatype}s.
}
We aimed to produce a ``deep'' embedding of display calculi, sufficient
(for example) to enable a mechanised proof of the cut-elimination theorem.
While we have done this (in a sense), our work has
elucidated some further points.
Belnap's cut-elimination theorem for display calculi is generic. It
applies to all display calculi whose rules satisfy certain conditions.
We have modelled a particular display calculus, \dRA\ (for relation
algebra). Accordingly, we have proved the
cut-elimination theorem for \dRA. While we can identify the
properties of the \dRA\ rules used in the proof, and match these with
Belnap's conditions, we have not proved the theorem as stated by
Belnap (which applies to \emph{any} display calculus). To model an
arbitrary display calculus in Isabelle/HOL would be a further (and
harder) activity.
\comment{
Secondly, Belnap's conditions on the rules of the display calculus
are listed by Kracht \cite{kracht-power},
who states ``These conditions actually need some exegesis''.
Our work will produce a precise, formal,
statement of those properties of the rules which we actually
use in the proof of the theorem.
}
\section{Conclusion}
We have reviewed earlier work which described a shallow embedding of the \dRA\
calculus into the Isabelle/Pure logic, and noted its drawbacks for
proving proof-theoretic results in Isabelle itself.
Attracted by the notion that the tools based on dependent type theory
would give expressiveness about proofs ``for free'', we then looked at
the Twelf tool.
While the Twelf tool is still under active development,
we noted how a similar logic is available in Isabelle/CTT, which
offers the advantages of Isabelle, such as access to a wide
range of documented ML functions for programming proof tactics.
However we also found that, while Twelf and Isabelle/CTT facilitate
identifying the derivation of a sequent or derived rule,
they do not facilitate reasoning about the shape of that derivation,
or of the sequents found in it.
Thus we turned to Isabelle/HOL, which appears to offer the best facility
for deeply embedding a calculus such as \dRA, and reasoning about derivability,
derivation trees and the associated proof-theoretic concepts we needed.
Needless to say, this was the advice we received from Larry Paulson from the
start.
\begin{ack}
We are grateful to Paul Jackson, Randy Pollack and Mark Staples for
many useful discussions on the notions of ``shallow'' and ``deep''
embeddings, to Frank Pfenning and Larry Paulson for answering our
questions about Twelf and Isabelle, and to Jim Grundy for helpful
comments.
\end{ack}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Here begins the end matter
\begin{thebibliography}{10}
\bibitem{belnap-display}
N~D {Belnap}.
\newblock Display logic.
\newblock {\em Journal of Philosophical Logic}, 11:375--417, 1982.
\bibitem{boulton-et-al-hdl}
R~J Boulton, A~D Gordon, M~J~C Gordon, J~R Harrison, J~Herbert, and J~van
Tassel.
\newblock Experience with embedding hardware description languages in {HOL}.
\newblock {\em Proc.\ Intl.\ Conf.\ on Theorem Provers in Circuit
Design: Theory, Practice and Experience}, IFIP Trans.\
\newblock volume A-10:129--156. North-Holland/Elsevier, 1992.
\bibitem{church-types}
A~Church.
\newblock A formulation of the simple theory of types.
\newblock {\em Journal of Symbolic Logic}, 5:56--68, 1940.
\bibitem{dawson-gore-dra-jelia98}
J~Dawson and R~Gor{\'{e}}.
\newblock A mechanised proof system for relation algebra using display logic.
\newblock In {\em JELIA98: Logic in Artificial Intelligence},
\newblock LNAI 1489:264--278. Springer, 1998.
\bibitem{prolog-standard}
P~Deransart, A~Ed-Dbali, and L~Cervoni.
\newblock {\em Prolog: The Standard}.
\newblock Springer-Verlag, 1996.
\bibitem{elmasri-navathe-database}
R~A Elmasri and S~B Navathe.
\newblock {\em Fundamentals of Database Systems}.
\newblock Benjamin/Cummings, Redwood City, 2 edition, 1994.
\bibitem{gallier-logic}
J.~H. Gallier.
\newblock {\em Logic for Computer Science: Foundations of Automatic Theorem
Proving}.
\newblock John Wiley and Sons, 1987.
\bibitem{HOL-introduction}
M~J~C Gordon and T~F Melham.
\newblock {\em Introduction to HOL: a Theorem Proving Environment for Higher
Order Logic}.
\newblock Cambridge University Press, 1993.
\bibitem{gore-relalg}
R~Gor{\'{e}}.
\newblock Cut-free display calculi for relation algebras.
\newblock In {\em CSL96: Computer Science Logic}, LNCS 1258:198--210,
\newblock Springer, 1997.
\bibitem{gore-sld-igpl}
R~Gor{\'{e}}.
\newblock Substructural logics on display.
\newblock {\em Logic Journal of the Interest Group in Pure and Applied Logic},
6(3):451--504, 1998.
\bibitem{harper-honsell-plotkin-framework}
R~Harper, F~Honsell, and G~Plotkin.
\newblock A framework for defining logics.
\newblock {\em Journal of the ACM}, 40:143--184, 1993.
\bibitem{maddux-introductory}
R~D Maddux.
\newblock Introductory course on relation algebras, finite-dimensional
cylindric algebras, and their interconnections.
\newblock In {\em Algebraic Logic}, {\em Colloquia Mathematica
Societatis Janos Bolyai}, Volume~54:361--392. North-Holland, 1991.
\bibitem{martin-lof-CTT}
P~Martin-L\"{o}f.
\newblock {\em Intuitionistic Type Theory}.
\newblock Bibliopolis, 1984.
\bibitem{mikhajlova-vonwright-isomorphism}
A~Mikhajlova and J~von Wright.
\newblock Proving isomorphism of first-order proof systems in {HOL}.
\newblock In {\em Theorem Proving in Higher-Order Logics}, LNCS
1479:295--314.
\newblock Springer, 1998.
\bibitem{mili-relational}
A~Mili.
\newblock A relational approach to the design of deterministic programs.
\newblock {\em Acta Informatica}, 20:315--328, 1983.
\bibitem{ML-definition}
R~Milner, M~Tofte, R~Harper, and D~MacQueen.
\newblock {\em The Definition of Standard ML (Revised)}.
\newblock The MIT Press, 1997.
\bibitem{isabelle-ref}
L~C Paulson.
\newblock {\em The Isabelle Reference Manual}.
\newblock Computer Laboratory, University of Cambridge, 1999.
\bibitem{isabelle-object}
L~C Paulson.
\newblock {\em Isabelle's Logics}.
\newblock Computer Laboratory, University of Cambridge, 1999.
\bibitem{pfenning-structural-cut-elimination}
F~Pfenning.
\newblock Structural cut elimination.
\newblock In {\em Proc.\ Tenth Annual Symposium on
Logic in Comp.\ Sci.\ }, pages 156--166. IEEE Computer Society Press,
1995.
\bibitem{pfenning-schurmann-twelf}
F~Pfenning and C~Sch\"{u}rmann.
\newblock System description: Twelf -- a meta-logical framework for deductive
systems.
\newblock In {\em Proc.\ 16th Intl.\ Conf.\ on Automated Deduction (CADE-16)},
\newblock LNAI 1632:202--206. Springer, 1999.
\bibitem{shankar}
Natarajan Shankar.
\newblock {\em Metamathematics, Machines, and Goedel's Proof}.
\newblock Cambridge University Press, 1994.
\bibitem{wansing-displaying-modal-logic}
Heinrich Wansing.
\newblock {\em Displaying Modal Logic}, volume~3 of {\em Trends in Logic}.
\newblock Kluwer Academic Publishers, Dordrecht, August 1998.
\end{thebibliography}
\end{document}