Research

Below is a fairly potted summary of some of my current and previous research interests. Further details can be found in the list of my publications or at my machine learning research blog, inductio ex machina.

Representations for Learning Tasks

My current research project is looking at ways in which we can organise the large variety of learning tasks found in machine learning and relate them to each other. Eventually, we hope to have a solid theoretical understanding of the relationships between classification, regression, probability estimation, clustering, ranking, manifold learning, and other types of tasks in order to identify the “canonical” learning problems.

I’ll be presenting some of the initial results that Bob Williamson and I have developed at the 2007 NIPS workshop on Principles of Learning Problem Design. This work connects reductions and ROC curves with some interesting representational results for losses and statistical information. I’ve put the workshop slides here for those who missed the talk.
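To give a flavour of what “statistical information” means here, the following toy sketch (my own invented example, not taken from the talk) computes the DeGroot-style statistical information of a binary feature under log loss: the Bayes risk of log loss is Shannon entropy, and the information carried by a feature is the drop in Bayes risk from observing it. All numbers below are made up for illustration.

```python
import math

def entropy(p):
    """Bayes risk of log loss at class probability p (Shannon entropy, in nats)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

# Toy joint distribution: a binary feature X with marginal P(X=x)
# and posterior class probabilities P(Y=1 | X=x).
p_x = [0.5, 0.5]
eta = [0.9, 0.2]

# Prior class probability P(Y=1) by marginalising over X.
prior = sum(px * e for px, e in zip(p_x, eta))  # 0.55

# Statistical information = prior Bayes risk minus expected posterior Bayes risk.
info = entropy(prior) - sum(px * entropy(e) for px, e in zip(p_x, eta))
# info is about 0.275 nats here; it is always non-negative, and zero
# exactly when X tells us nothing about Y.
```

Swapping `entropy` for the Bayes risk of a different proper loss gives a different information measure; relating these measures across losses is the kind of representational question the workshop talk is about.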

I also spoke about some of these representation results at a NICTA lunchtime seminar on May 22nd, 2008. The slides are also available for download here.

PhD Research

My PhD thesis investigated the use of transfer learning as a way to improve rule evaluation when training examples are scarce. The key idea is to sample and evaluate rules from related tasks in order to learn a prior distribution over contingency tables based on the syntactic features of rules. These priors are then combined with the evaluation of rules on the target task in order to improve the learner’s performance when data is limited.
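The Bayesian flavour of this idea can be sketched with a minimal Beta-Binomial example (an invented illustration of prior-plus-data smoothing, not the actual method or code from the thesis): accuracies of rules sampled from related tasks determine a prior, which then tempers the raw accuracy of a rule that covers only a handful of target-task examples. The function names and all numbers below are hypothetical.

```python
def beta_prior_from_related(rates, strength=10.0):
    """Fit a Beta(a, b) prior whose mean matches the average success rate
    of rules sampled from related tasks; `strength` sets how many
    pseudo-examples the prior is worth."""
    mean = sum(rates) / len(rates)
    return mean * strength, (1 - mean) * strength

def posterior_accuracy(a, b, correct, covered):
    """Posterior mean accuracy of a rule after observing `covered`
    target-task examples, `correct` of them classified correctly."""
    return (a + correct) / (a + b + covered)

# Suppose rules with similar syntactic features achieved these
# accuracies on related tasks:
a, b = beta_prior_from_related([0.8, 0.75, 0.85, 0.7])

# On the target task the rule covers only 3 examples, all correct.
raw = 3 / 3                           # 1.0 -- over-optimistic
smoothed = posterior_accuracy(a, b, 3, 3)  # pulled back towards the prior mean
```

The prior acts like extra pseudo-examples, so a rule evaluated on scarce target data is shrunk towards the performance of syntactically similar rules from related tasks rather than trusted at face value.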

This approach was analysed, implemented, and empirically verified on a range of problem domains, including predicting mutagenesis and carcinogenesis, heart disease, reading preferences, and chess.

The thesis can be downloaded from the publications section of this site. Some of the code used to implement these ideas is also available.