All code below is written in simple object-oriented C++. See the README in each package for installation and usage instructions.
1. Conditional Random Fields for Policy Gradient Multi-agent Reinforcement Learning
[tar.bz2 700 KB] [paper]
This package implements tree sampling for inference in conditional random fields. Using the sampled states and the resulting approximate expectations, it implements the natural actor-critic algorithm, which performs collaborative multi-agent reinforcement learning. Three simulators are provided: grid gate control, sensor network, and traffic light control.
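As a rough illustration of sampling-based CRF inference (a simplified stand-in, not the package's tree-sampling code), the sketch below draws one exact sample from a chain-structured CRF — the simplest tree — given hypothetical unary potentials `psi` and pairwise potentials `phi`, via a backward message pass followed by forward sampling:

```cpp
#include <random>
#include <vector>

// Draw one exact sample from a chain CRF with T nodes and K states:
//   p(x) proportional to  prod_t psi[t][x_t] * prod_t phi[x_t][x_{t+1}]
// Backward messages beta make the forward sampling pass exact.
std::vector<int> sampleChain(const std::vector<std::vector<double>>& psi,
                             const std::vector<std::vector<double>>& phi,
                             std::mt19937& rng) {
    int T = static_cast<int>(psi.size());
    int K = static_cast<int>(psi[0].size());
    // Backward pass: beta[t][k] sums the potentials of all completions.
    std::vector<std::vector<double>> beta(T, std::vector<double>(K, 1.0));
    for (int t = T - 2; t >= 0; --t)
        for (int k = 0; k < K; ++k) {
            double s = 0.0;
            for (int j = 0; j < K; ++j)
                s += phi[k][j] * psi[t + 1][j] * beta[t + 1][j];
            beta[t][k] = s;
        }
    // Forward pass: sample each state conditioned on its predecessor.
    std::vector<int> x(T);
    std::vector<double> w(K);
    for (int k = 0; k < K; ++k) w[k] = psi[0][k] * beta[0][k];
    x[0] = std::discrete_distribution<int>(w.begin(), w.end())(rng);
    for (int t = 1; t < T; ++t) {
        for (int k = 0; k < K; ++k)
            w[k] = phi[x[t - 1]][k] * psi[t][k] * beta[t][k];
        x[t] = std::discrete_distribution<int>(w.begin(), w.end())(rng);
    }
    return x;
}
```

In the package proper the sampler operates on trees extracted from a general graph; the chain above only shows the message-then-sample pattern.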
2. Faster Rates for Training SVMs using Optimal Gradient based Methods
[tar.bz2 800 KB] [paper]
This package implements three versions of Nesterov's first-order methods (1983, 2005, and 2007). Their rate of convergence is O(1/k^2), which is proved to be optimal for this class of optimizers. The 1983 version optimizes a smooth function with Lipschitz continuous gradient; the 2005 version extends this to the primal-dual setting; and the 2007 version automatically estimates the unknown Lipschitz constant of the gradient. This code is built on top of the BMRM package.
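For concreteness, here is a minimal sketch of the 1983 scheme (not the package's BMRM-based code): accelerated gradient descent on a smooth function whose Lipschitz constant L is known, with the standard momentum sequence t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2:

```cpp
#include <cmath>
#include <vector>

// Nesterov's 1983 accelerated gradient method for min f(x), where f is
// smooth with L-Lipschitz gradient.  grad is the gradient oracle and
// x is the starting point; runs a fixed number of iterations.
template <class Grad>
std::vector<double> nesterov1983(Grad grad, std::vector<double> x,
                                 double L, int iters) {
    std::vector<double> y = x, xPrev = x;
    double t = 1.0;
    for (int k = 0; k < iters; ++k) {
        std::vector<double> g = grad(y);
        xPrev = x;
        for (size_t i = 0; i < x.size(); ++i)
            x[i] = y[i] - g[i] / L;                       // gradient step at y
        double tNext = 0.5 * (1.0 + std::sqrt(1.0 + 4.0 * t * t));
        for (size_t i = 0; i < x.size(); ++i)             // momentum extrapolation
            y[i] = x[i] + (t - 1.0) / tNext * (x[i] - xPrev[i]);
        t = tNext;
    }
    return x;
}
```

On a hypothetical ill-conditioned quadratic such as f(x) = 0.5(x1^2 + 100 x2^2) with L = 100, the O(1/k^2) guarantee bounds f(x_k) - f* by 2L||x0 - x*||^2 / (k+1)^2.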
3. Hyperparameter Learning for Graph based Semi-supervised Learning Algorithms
[tar 100 KB] [paper]
This package implements the leave-one-out method for learning the hyperparameters in graph-based semi-supervised learning. Practical efficiency is achieved via the Sherman–Morrison formula and by factoring out the common terms in the feature weight updates. This code relies on the math library of Matlab. See this link for details.
I am collecting some coding tricks for machine learning; coming soon.
I am also polishing some code for massaging datasets, mostly written in C++ to handle large datasets.