This workshop aims to promote discussion among researchers investigating innovative tensor-based approaches to computer vision problems. Tensors are a crucial mathematical object in many computer vision and machine learning applications. They have been essential ingredients in modelling latent semantic spaces, factorizing higher-order data, and capturing higher-order information in visual data, and they have found numerous applications across hot topics in computer vision including, but not limited to, human action recognition, object recognition, and video understanding. Moreover, tensor-based algorithms are increasingly finding significant applications in deep learning. With the rise of big data, tensors may yet prove crucial both in understanding deep architectures and in aiding robust learning and generalization in inference algorithms.


We encourage discussion of recent advances, ongoing developments, and novel applications of multilinear algebra, optimization, and feature representations using tensors. We are soliciting original contributions that address a wide range of theoretical and practical issues including, but not limited to:


Below is the program of the workshop on 26 July 2017. Bear in mind that there may be some last-minute reshuffling of the schedule due to unforeseen clashes in our speakers' timetables. Please check the Detailed Program below for the abstracts and biographies of our invited speakers (or click on the links in the tables). Invited talks are scheduled for 25 minutes plus 5 minutes for questions. Oral paper presentations are scheduled for 7 minutes plus 3 minutes for questions. We strongly recommend that student speakers rehearse their talks in advance to make sure they do not exceed the 7-minute limit.

Morning Session

Time Invited Speaker Title
09:00 Prof. Fatih Porikli Welcome
09:05 Prof. Animashree Anandkumar Role of Tensors in Deep Learning
09:35 Dr. Andrzej Cichocki Tensor Networks for Deep Learning
10:05 Morning break (Kamehameha II)
10:30 Dr. Nadav Cohen Analysis and Design of Convolutional Networks via Hierarchical Tensor Decompositions
11:00 Prof. René Vidal Globally Optimal Structured Low-Rank Matrix and Tensor Factorization
11:30 Dr. Ivan Oseledets Deep Learning and Tensors for the Approximation of Multivariate Functions: Recent Results and Open Problems
12:00 Lunch break (Kamehameha II)

Oral Papers

Time Speaker Title
13:30 Liuqing Yang, Evangelos E. Papalexakis Exploration of Social and Web Image Search Results Using Tensor Decomposition
13:40 Mihir Paradkar, Madeleine Udell Graph-Regularized Generalized Low-Rank Models
13:50 Huizi Mao, Song Han, Jeff Pool, Wenshuo Li, Xingyu Liu, Yu Wang, William J. Dally Exploring the Granularity of Sparsity in Convolutional Neural Networks
14:00 Chan-Su Lee Human Action Recognition Using Tensor Dynamical System Modeling
14:10 Jean Kossaifi, Aran Khanna, Zachary Lipton, Tommaso Furlanello, Anima Anandkumar Tensor Contraction Layers for Parsimonious Deep Nets

Afternoon Session

Time Invited Speaker Title
14:20 Prof. Richard Hartley Learning Methods and Optimization on Matrix Manifolds and Matrix Lie Groups
14:50 Dr. M. Alex O. Vasilescu You've got Data, We've Got Tensors: Linear and Multilinear Tensor Models for Computer Vision, Graphics and Machine Learning
15:50 Afternoon break (Kamehameha II)
15:50 Otto Debals and Dr. Lieven De Lathauwer Numerical Optimization Algorithm for Tensor-based Recognition
16:20 Dr. Lior Horesh A New Tensor Algebra - Theory and Applications
16:50 Prof. Luc Florack Redeeming the Clinical Promise of Diffusion MRI in Support of the Neurosurgical Workflow
17:20 Closing remarks



Below is the list of speakers (in alphabetical order) who have kindly agreed to give talks during the workshop: