# Research

### Undergraduate Research Project

Natural Language Question Answering over Triple Knowledge Bases, supervised by Scott Sanner.

### Honours Research

Network Topology Tomography, supervised by Tiberio Caetano.

### Conference Publications

NIPS 2012:
A Convex Formulation for Learning Scale-Free Networks via Submodular Relaxation
A key problem in statistics and machine learning is the determination of network structure from data. We consider the case where the structure of the graph to be reconstructed is known to be scale-free. We show that in such cases it is natural to formulate structured sparsity-inducing priors using submodular functions, and we use their Lovász extension to obtain a convex relaxation. For tractable classes such as Gaussian graphical models, this leads to a convex optimization problem that can be solved efficiently. We show that our method improves the accuracy of reconstructed networks on synthetic data. We also show how our prior encourages scale-free reconstructions on a bioinformatics dataset.

Appendix

Code
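To illustrate the Lovász extension machinery the abstract refers to, here is a minimal sketch in plain Python of evaluating the extension of a set function. All names are illustrative, and the concave-of-cardinality example stands in for the degree-based scale-free prior used in the paper, which is not reproduced here.

```python
import math

def lovasz_extension(x, F):
    """Evaluate the Lovász extension of a set function F (with F(set()) = 0)
    at the point x, a list of reals indexed by ground-set elements.

    Sort coordinates in decreasing order, then take the weighted sum of
    marginal gains of F along the induced chain of sets. When F is
    submodular, this extension is convex, which is what allows a
    combinatorial prior to be relaxed into a convex penalty.
    """
    order = sorted(range(len(x)), key=lambda i: -x[i])
    value, prev, chain = 0.0, 0.0, set()
    for i in order:
        chain.add(i)
        current = F(chain)
        value += x[i] * (current - prev)   # weight by marginal gain
        prev = current
    return value

# Example: concave functions of cardinality are submodular.
F = lambda S: math.sqrt(len(S))
val = lovasz_extension([0.5, 2.0, 1.0], F)
```

For a modular function such as `F(S) = len(S)`, the extension reduces to the plain sum of the coordinates, a quick sanity check on the implementation.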

ICML 2012:
A Graphical Model Formulation of Collaborative Filtering Neighbourhood Methods with Fast Maximum Entropy Training
Item neighbourhood methods for collaborative filtering learn a weighted graph over the set of items, where each item is connected to those it is most similar to. The prediction of a user's rating on an item is then given by the ratings of neighbouring items, weighted by their similarity. This paper presents a new neighbourhood approach, which we call item fields, whereby an undirected graphical model is formed over the item graph. The resulting prediction rule is a simple generalization of the classical approaches, which takes into account non-local information in the graph, allowing its best results to be obtained when using drastically fewer edges than other neighbourhood approaches. A fast approximate maximum entropy training method based on the Bethe approximation is presented, which utilizes a novel decomposition into tractable sub-problems. When using precomputed sufficient statistics on the MovieLens dataset, our method outperforms maximum likelihood approaches by two orders of magnitude.
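The classical prediction rule that item fields generalizes can be sketched in a few lines; this is the standard similarity-weighted average over an item's neighbours, with hypothetical data structures, not the paper's graphical-model rule.

```python
def predict_rating(user_ratings, sims, item):
    """Classical item-neighbourhood prediction: a similarity-weighted
    average of this user's ratings on items adjacent to `item`.

    user_ratings: dict mapping item id -> this user's rating
    sims: dict mapping (item, neighbour) -> similarity weight
    """
    num = den = 0.0
    for j, r in user_ratings.items():
        w = sims.get((item, j), 0.0)   # zero weight for non-neighbours
        num += w * r
        den += abs(w)
    return num / den if den else 0.0

# Example: item 10 has two equally similar rated neighbours.
sims = {(10, 1): 1.0, (10, 2): 1.0}
pred = predict_rating({1: 4.0, 2: 2.0}, sims, 10)
```

The item-fields rule replaces this purely local average with inference in an undirected graphical model, so information can propagate beyond immediate neighbours.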

ICML 2014:
Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems
Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black-box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. This method is also amenable to a sampling-without-replacement scheme that in practice gives further speed-ups. We give empirical results showing state-of-the-art performance.

Appendix
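The flavour of this class of methods can be conveyed with a short sketch: keep a stored point and gradient per term, and refresh one term at a time while averaging over all of them. This is a simplified Finito-style update under assumed parameter choices (step scaling `1/(alpha * mu * n)` with `alpha = 2`), not the paper's exact pseudocode or rate conditions.

```python
import random

def finito_sketch(grads, n, mu, passes=200, alpha=2.0, seed=0):
    """Finito-style incremental gradient sketch for minimizing
    (1/n) * sum_i f_i(w), each f_i smooth and mu-strongly convex.

    phi[i] is a stored point and g[i] the gradient of f_i at phi[i].
    Each step re-forms w from the averages, then refreshes one term.
    grads[i](w) returns the scalar gradient of f_i at w.
    """
    rng = random.Random(seed)
    phi = [0.0] * n
    g = [grads[i](0.0) for i in range(n)]
    w = 0.0
    for _ in range(passes):
        order = list(range(n))
        rng.shuffle(order)                 # sampling without replacement
        for j in order:
            w = sum(phi) / n - sum(g) / (alpha * mu * n)
            phi[j], g[j] = w, grads[j](w)  # refresh only term j
    return w

# Example: f_i(w) = 0.5 * (w - a_i)^2 is 1-strongly convex, and the
# minimizer of the average is the mean of the a_i.
a = [1.0, 2.0, 3.0, 4.0, 5.0]
grads = [lambda w, ai=ai: w - ai for ai in a]
w_star = finito_sketch(grads, n=len(a), mu=1.0)
```

The per-pass shuffle implements the sampling-without-replacement scheme mentioned in the abstract.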

# Software

### Vendetta

A prototype open-source, cross-platform text editor built on JavaScript and HTML5 technologies.

### phessianfree

Hessian-free optimization in Python for smooth unconstrained problems.
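The core idea behind Hessian-free optimization is a Newton step computed by conjugate gradient using only Hessian-vector products, never the full Hessian. The sketch below is a generic illustration of that idea with hypothetical function names; it is not phessianfree's actual interface.

```python
def hessian_free_step(grad, w, hvp, cg_iters=50, tol=1e-10):
    """One Hessian-free Newton step: solve H p = -g by conjugate
    gradient, where hvp(w, v) returns the Hessian-vector product H v.
    w and gradients are plain lists of floats.
    """
    g = grad(w)
    p = [0.0] * len(w)
    r = [-gi for gi in g]              # residual of H p = -g at p = 0
    d = r[:]
    rs = sum(ri * ri for ri in r)
    if rs == 0.0:
        return w[:]                    # already at a stationary point
    for _ in range(cg_iters):
        Hd = hvp(w, d)
        step = rs / sum(di * hi for di, hi in zip(d, Hd))
        p = [pi + step * di for pi, di in zip(p, d)]
        r = [ri - step * hi for ri, hi in zip(r, Hd)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        d = [ri + (rs_new / rs) * di for ri, di in zip(r, d)]
        rs = rs_new
    return [wi + pi for wi, pi in zip(w, p)]

# Example: quadratic f(w) = 0.5 w^T A w - b^T w, for which a single
# Newton step from the origin lands on the minimizer A^{-1} b.
A = [[3.0, 1.0], [1.0, 2.0]]
b = [1.0, 1.0]
grad = lambda w: [A[0][0] * w[0] + A[0][1] * w[1] - b[0],
                  A[1][0] * w[0] + A[1][1] * w[1] - b[1]]
hvp = lambda w, v: [A[0][0] * v[0] + A[0][1] * v[1],
                    A[1][0] * v[0] + A[1][1] * v[1]]
w_new = hessian_free_step(grad, [0.0, 0.0], hvp)
```

For non-quadratic objectives, practical implementations add damping and a line search on top of this basic step.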