Generalized mixability via entropic duality
Mark Reid, Rafael Frongillo, Robert Williamson, and Nishant Mehta
COLT 2015

From stochastic mixability to fast rates
Nishant Mehta and Robert Williamson
NIPS 2014. Selected for oral presentation.
  Long version: From stochastic mixability to fast rates
  arXiv 1406.3781, 2014

Sparsity-based generalization bounds for predictive sparse coding
Nishant Mehta and Alexander Gray
ICML 2013
  Long version: On the sample complexity of predictive sparse coding
  arXiv 1202.4050, 2012

MLPACK: A scalable C++ machine learning library
Ryan Curtin, James Cline, N.P. Slagle, William March, Parikshit Ram, Nishant Mehta, and Alexander Gray
JMLR 2013

Minimax multi-task learning and a generalized loss-compositional paradigm for MTL
Nishant Mehta, Dongryeol Lee, and Alexander Gray
NIPS 2012
  Shorter, workshop version: Minimax multi-task learning
  NIPS 2012 Workshop on Multi-Trade-offs in Machine Learning

Computer detection approaches for the identification of phasic electromyographic (EMG) activity during human sleep
Jacqueline Fairley, George Georgoulas, Nishant Mehta, Alexander Gray, and Donald Bliwise
Biomedical Signal Processing and Control, 2012

Discriminative sparse coding for classification and regression
Nishant Mehta and Alexander Gray
The Learning Workshop (Snowbird), 2011. Selected for oral presentation.

Generative and latent mean map kernels
Nishant Mehta and Alexander Gray
arXiv 1005.0188, 2010

Recognizing sign language from brain imaging
Nishant Mehta, Thad Starner, Melody Moore Jackson, Karolyn Babalola, and George Andrew James
International Conference on Pattern Recognition (ICPR), 2010
  Longer, earlier version: Recognizing sign language from brain imaging
  GVU Technical Report GIT-GVU-09-06, 2009

Optimal control strategies for an SSVEP-based brain-computer interface
Nishant Mehta, Sadhir Hussain, and Melody Moore Jackson
International Journal of Human-Computer Interaction, 2010

FuncICA for time series pattern discovery
Nishant Mehta and Alexander Gray
SIAM Data Mining, 2009. Nominated for best paper award. Selected for oral presentation.

Estimating neural signal dependence using kernels
Nishant Mehta, Alexander Gray, Thad Starner, and Melody Moore Jackson
NIPS 2008 Workshop on Statistical Analysis and Modeling of Response Dependencies in Neural Populations

In July I’ll be starting as a postdoc at CWI, working with Peter Grünwald. Currently, I’m a Research Fellow (i.e., postdoc) at the Australian National University’s Research School of Computer Science. Before that, I did my PhD in Computer Science at Georgia Tech’s College of Computing, studying machine learning under my thesis advisor Alexander Gray.

I work on problems in online learning and statistical learning theory. I’m most intrigued by fundamental concepts that form a bridge between these areas and clarify our understanding of when learning is easy and when it is hard. I’m also interested in information geometry, minimax lower bounds, and theoretical foundations for representation learning.

During grad school my research focused on sparse representations, multi-task learning, and learning latent representations using supervised information, with an emphasis on learning theoretic results. My dissertation, “On sparse representations and new meta-learning paradigms for representation learning,” established generalization error bounds for learning sparse representations for supervised tasks and also introduced new multi-task learning and meta-learning frameworks.



I have been a teaching assistant for

  1. Advanced Machine Learning (Alexander Gray’s “Computational Data Analysis”)

  2. Machine Learning (Guy Lebanon’s “Computational Data Analysis”)

  3. Discrete Algorithms (Alberto Apostolico’s “Computational Science & Engineering Algorithms”)

Nishant Mehta

n|i|s|h|a|n|t.m|e|h|t|a ατ a|n|u.e|d|u.a|u

7 London Circuit

Canberra, ACT 2601



Relevant Coursework

Machine Learning and Statistics
  Selected topics in high dimensional probability and statistics, Statistical estimation, Machine learning theory, Probabilistic graphical models, Natural language understanding, Combinatorial methods in density estimation, Introduction to information theory, Neural coding, Spatial statistics

Algorithms
  Design and analysis of algorithms, Spectral algorithms, Bioinformatics algorithms (String algorithms)

Mathematics
  Introduction to geometry and topology I and II, Introduction to graph theory

Optimization
  Linear optimization, Nonlinear optimization