For the general area of my research, see my research pages and get in touch if you
think that these match up with what you want to do for a PhD. Details on
scholarships and admissions at ANU can be found on the
university-wide web
pages.
Here's a non-exhaustive selection of research topics that I personally find
fascinating.
Verifiable Vote Counting
Verifiable vote counting is one of the cornerstones of a trustworthy
electronic voting system. Correctness and trust can be achieved by
understanding voting protocols, specified textually in legislation,
as
formal rules. Vote counting can then be cast as successive rule
application, where the sequence of rule application serves as an
independently verifiable certificate attesting to the correctness of
the count.
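As a toy illustration of the idea (plurality counting only, with invented rule names; real protocols such as STV, as specified in legislation, involve many more rules), counting can emit a trace of rule applications that an independent checker replays:

```python
def count(ballots):
    """Count ballots; return the winner plus a certificate, i.e. the
    sequence of rule applications with the state after each step."""
    tally, cert = {}, []
    for b in ballots:
        tally[b] = tally.get(b, 0) + 1
        cert.append(("count", b, dict(tally)))
    winner = max(tally, key=tally.get)
    cert.append(("declare", winner, dict(tally)))
    return winner, cert

def verify(cert):
    """Replay the certificate step by step, independently of count():
    re-apply each rule and check the recorded intermediate states."""
    tally = {}
    for rule, arg, state in cert:
        if rule == "count":
            tally[arg] = tally.get(arg, 0) + 1
        elif rule == "declare" and arg != max(tally, key=tally.get):
            return False
        if state != tally:
            return False
    return True

winner, cert = count(["a", "b", "a", "c", "a"])
```

The point is that verify needs no trust in count: it only checks that each recorded rule application is legal, so any (possibly buggy or malicious) counting program producing a valid certificate has, by that very fact, counted correctly.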
Coalgebraic Knowledge Representation
This is the subject of a grant that has been awarded
(see the pages on the
COOL project), where
the aim is to develop both the theory and an implementation of an
automated reasoning system that can deal with feature-rich
logics. This project funds one PhD studentship -- do get in touch if
you are interested.
Eating, or computing with non-wellfounded data
The term "eating" is a colloquial way of referring to functions that
process infinite data streams. Such data streams arise in various
areas: they could be the decimal expansions of real numbers, or the
sequence of signals sent through a communication channel. Processing
infinite streams then implements a function that takes one (or more)
streams as an input, and produces another (infinite) stream as
output. Clearly we have to ensure that streams keep flowing (i.e. the
output stream never terminates), and we have to establish this in a
computational way. For streams, this is described e.g. in the paper
Representations of stream
processors using nested fixed points with Peter Hancock and Neil
Ghani. However there's more to life than just streams: it is easy to
see that a stream processing function can itself be represented as
an infinite tree, so that the processing of stream processing
functions (or higher-order functions on streams) is again an instance
of computation over infinite data types. Interesting questions in
that area concern the computability of (higher-order) stream
processors, the complexity of these operations, and applications:
how can one define a (higher-order) stream processing function that
represents integration?
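Operationally, and setting aside the nested-fixed-point representation from the paper, a productive stream processor can be sketched with Python generators (a sketch of my own, not the paper's construction):

```python
from itertools import islice

def nats():
    """The infinite stream 0, 1, 2, ..."""
    n = 0
    while True:
        yield n
        n += 1

def smooth(stream):
    """A stream processor: to produce each output element it reads only
    finitely many inputs (here: one), so the output stream is productive."""
    prev = next(stream)
    for x in stream:
        yield (prev + x) / 2
        prev = x

# Observe any finite prefix of the (infinite) output stream.
out = list(islice(smooth(nats()), 5))
```

Productivity here is enforced only by inspection (each loop iteration yields); the research question is precisely how to guarantee such properties by typing or by construction, and what this costs for higher-order processors.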
Tableaux versus Automata for Modal Logics
As we know, modal logics are usually decidable; that is, given a
modal formula, there is a decision procedure (a program) that --
given the formula as input -- determines whether the formula is
satisfiable, i.e. true at at least one point in at least one
model. Dually, this gives universal validity: if the negation of a
formula is not satisfiable, then the formula holds at all points in all
models. To establish decidability of a particular logic,
there are two standard approaches. The first approach proceeds by
giving a complete axiomatisation of the logic under consideration
(usually via a tableau or sequent calculus) and then gives an
effective procedure (usually backwards proof search) to determine
whether there exists a proof of this formula. The second approach
works without any axiomatisation, but instead converts every formula
to an automaton, with the following property: the automaton accepts
a point in a model if and only if this point satisfies the formula.
To see whether a formula is satisfiable, it therefore suffices to
determine whether the automaton accepts at least one point in at
least one model -- this is the ``emptiness problem'' of the
associated class of automata. While both approaches seem to be
fundamentally different at the outset, one sees that there are
certain similarities: in particular, constructing a decision
procedure from a tableau calculus is strikingly similar to
constructing an automaton, and solving the emptiness problem at the
same time. Now the interesting question is whether,
and how, this correspondence can be made precise. Can we convert
automata into tableaux and vice versa? What are the advantages, both
of a theoretical and practical nature, of either approach? First
steps have been made at a recent workshop on
Tableaux versus Automata as decision procedures, but many
questions still remain open.
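To make the tableau side concrete, here is a toy tableau-style satisfiability checker for the propositional fragment only (my own illustration): branch on disjunctions, extend branches on conjunctions, and close a branch when it contains a contradiction. The modal rules, which make the comparison with automata interesting, are omitted.

```python
# Formulas: ("var", p), ("not", f), ("and", f, g), ("or", f, g).

def nnf(f, neg=False):
    """Push negations down to the atoms (negation normal form)."""
    op = f[0]
    if op == "var":
        return ("not", f) if neg else f
    if op == "not":
        return nnf(f[1], not neg)
    if op == "and":
        return ("or" if neg else "and", nnf(f[1], neg), nnf(f[2], neg))
    if op == "or":
        return ("and" if neg else "or", nnf(f[1], neg), nnf(f[2], neg))

def open_branch(branch):
    """Expand one branch of the tableau; True iff it stays open."""
    for i, f in enumerate(branch):
        rest = branch[:i] + branch[i + 1:]
        if f[0] == "and":                      # alpha rule: extend the branch
            return open_branch(rest + [f[1], f[2]])
        if f[0] == "or":                       # beta rule: split the branch
            return open_branch(rest + [f[1]]) or open_branch(rest + [f[2]])
    lits = set(branch)                         # only literals left: check closure
    return not any(("not", l) in lits for l in lits)

def satisfiable(f):
    return open_branch([nnf(f)])

p, q = ("var", "p"), ("var", "q")
```

Even in this tiny setting one can see the analogy mentioned above: systematically exhausting all branches amounts to exploring the runs of an automaton and asking whether its language is non-empty.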
Machine Learning over Modal/Description Logics
Given
that modal or description logics are expressive yet tractable ways
to represent knowledge, can we do machine learning over (modal)
knowledge bases? The basic problem is roughly the following: Given a
set of logical formulas that represent knowledge about a particular
application domain (say, roads in South Kensington) and a yet
unspecified property (say, the notion of safety), can we
automatically construct a formal definition of this property given a
set of positive examples (roads that we regard as safe) and negative
examples (those that are deemed unsafe)? More formally: given a set
of positive and negative individuals, can we automatically induce
a formula that is satisfied by all the positive, but by none of the
negative examples? From a different perspective, the question can be
seen as "how can we generalise inductive logic programming to modal
logics?".
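A propositional toy version of this induction problem can be sketched as a search over conjunctions of literals (the attribute names below are invented for illustration; the modal/description-logic case would search over concept descriptions instead):

```python
from itertools import combinations

# Individuals are modelled as sets of the atomic properties they satisfy.
ATTRS = ["lit", "wide", "busy"]

def holds(conj, individual):
    """Does the individual satisfy every literal of the conjunction?"""
    return all((a in individual) == pos for a, pos in conj)

def induce(positives, negatives, max_size=2):
    """Return the smallest conjunction of literals satisfied by all
    positive examples and by no negative one, or None."""
    literals = [(a, v) for a in ATTRS for v in (True, False)]
    for k in range(1, max_size + 1):
        for conj in combinations(literals, k):
            if all(holds(conj, p) for p in positives) and \
               not any(holds(conj, n) for n in negatives):
                return conj
    return None

safe = [{"lit", "wide"}, {"lit", "wide", "busy"}]
unsafe = [{"busy"}, {"wide"}]
concept = induce(safe, unsafe)   # a definition of "safe" in terms of ATTRS
```

Brute-force enumeration obviously does not scale; the research question is how the structure of modal or description logics (e.g. refinement operators on concepts, as in inductive logic programming) can guide this search.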