Yi Li
 

Researcher, Computer Vision Group, NICTA and ANU

Canberra, ACT 2601, Australia

yi.li AT cecs DOT anu DOT edu DOT au

Education 

·  Aug 2004 – present

      Ph.D. student, Computer Vision Lab, University of Maryland, College Park

      Advised by Prof. Yiannis Aloimonos and Dr. Cornelia Fermuller

·  Aug 2001 – Oct 2004

      M.Eng. in Computer Science (ranked first)

·  Aug 1998 – Oct 2001

      B.Eng. in Computer Science (First Honors)

      South China University of Technology, Guangzhou

 
Research Interests 

·   Human Movement Analysis

     ·  Cognitive Robotics

     ·  Assessing Human Action for Disease Diagnosis

     ·  Social Signal Processing for Social Intelligence

·   Computer Vision and Machine Learning

     ·  Visual Perception and Optical Illusions

     ·  Action and Object Recognition

     ·  Sparse Recovery


Awards and Honors 

·    Jan 2008, Future Faculty Fellow, A. James Clark School of Engineering, University of Maryland (a fellowship program preparing gifted graduate students for faculty positions at engineering schools)

·    2nd place, 1st Semantic Robot Vision Challenge (sponsored by NSF), AAAI 2007, Vancouver, Canada.

·    Best student paper, 10th International Conference on Frontiers in Handwriting Recognition, 2006.

I. Current Research

 

 

A. Human Movement Analysis

 

Human movement is a window into the functions of the nervous system. One of my goals is to measure and interpret human motion capture (MoCap) data, both to improve the capabilities that support human-robot interaction in social contexts and to develop diagnostic and rehabilitation tools for aging and for disorders that manifest themselves through movement.

 

A.1. Movement Synergies for Cognitive Robots

 

We decompose motion capture (MoCap) sequences into synergies (short, smooth basis functions), together with the times at which each synergy is “activated”, for each joint.

 

The resulting synergies provide effective building blocks for robotics research, such as generating natural humanoid body movements. (read more...)
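
As a concrete illustration, below is a minimal sketch of this kind of decomposition for a single joint trajectory (an illustrative toy, not the algorithm from the paper): a dictionary of time-shifted smooth basis functions is fit with an L1 penalty, and the surviving coefficients mark the activation times.

    # Toy synergy decomposition for one joint (illustrative only).
    import numpy as np
    from sklearn.linear_model import Lasso

    T, W = 200, 25                       # trajectory length, synergy width
    u = np.arange(W)
    proto = np.exp(-0.5 * ((u - W / 2) / (W / 6)) ** 2)  # smooth prototype synergy
    proto /= np.linalg.norm(proto)

    # Dictionary: the prototype shifted to every possible onset time.
    D = np.zeros((T, T - W + 1))
    for s in range(T - W + 1):
        D[s:s + W, s] = proto

    # Synthetic joint trajectory: activations at t = 40 and t = 120, plus noise.
    rng = np.random.default_rng(0)
    x = 1.0 * D[:, 40] + 0.6 * D[:, 120] + 0.02 * rng.standard_normal(T)

    # Sparse coding: the few surviving coefficients sit near the true onsets.
    coef = Lasso(alpha=0.001, fit_intercept=False, max_iter=50000).fit(D, x).coef_
    print("activation times:", np.where(coef > 0.1)[0])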

 

A.2. Assessing Human Health using Movement Synergies

 

My research focuses on measuring human action, understanding its coordination characteristics, and developing optimal diagnostic and intervention tools for populations with atypical movement patterns.

 

I have worked on Parkinson’s disease, and in the future I plan to work on the early diagnosis of developmental disorders, such as autism, on the basis of movement.

(read more...)

 

 

A.3. Coordinated Actions

Coordination has attracted particular attention in cognitive studies because all species exhibit complex coordinated behaviors for defense, reproduction, or hunting.

Following a suggestion by Prof. Fadiga and Dr. Alessandro D'Ausilio, we proposed using Granger causality as a tool to study coordinated actions performed by two or more agents. If one action causes the other, then knowledge (i.e., the history) of the first action should help predict future values of the second.

We successfully applied Granger causality to kinematic data recorded from a chamber orchestra, testing the interactions among players and between the conductor and the players.

(joint work with Prof. Fadiga and Dr. Alessandro D'Ausilio, read more...)
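
The following minimal sketch (with synthetic data; the variable names and lag order are illustrative, not those of the study) shows the test in its basic form: if adding the history of x to an autoregressive model of y reduces the residual error substantially, then x Granger-causes y.

    # Pairwise Granger-causality test on two toy kinematic time series.
    import numpy as np

    def granger_f(x, y, p=5):
        """F statistic for 'x Granger-causes y': compare an AR(p) model of y
        against one augmented with p lags of x."""
        T = len(y)
        Y = y[p:]
        Ly = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
        Lx = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
        ones = np.ones((T - p, 1))
        A_r = np.hstack([ones, Ly])       # restricted: y's own history only
        A_f = np.hstack([ones, Ly, Lx])   # full: plus x's history
        rss_r = np.sum((Y - A_r @ np.linalg.lstsq(A_r, Y, rcond=None)[0]) ** 2)
        rss_f = np.sum((Y - A_f @ np.linalg.lstsq(A_f, Y, rcond=None)[0]) ** 2)
        dof = (T - p) - A_f.shape[1]
        return ((rss_r - rss_f) / p) / (rss_f / dof)

    # Toy data: y follows x with a two-frame delay, so F(x->y) >> F(y->x).
    rng = np.random.default_rng(0)
    x = rng.standard_normal(500)
    y = np.zeros(500)
    y[2:] = 0.8 * x[:-2]
    y += 0.3 * rng.standard_normal(500)
    print("F(x->y):", granger_f(x, y))
    print("F(y->x):", granger_f(y, x))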

 

B. Computer Vision and Machine Learning for Cognitive Systems

A cognitive vision system embodies a set of principles that view cognition as the starting point of any computational vision algorithm. As Aristotle observed, humans are social animals; cognitive vision systems should therefore be suited to assisting social interaction. I am working on tools for analyzing human actions in both visual and motor spaces, and on semantic object recognition. In parallel, I am interested in active perception and early vision.

 

B.1. Visual Illusions for Understanding Cognitive Vision Systems

In many classic lightness illusions, two gray squares with the same intensity value appear different. Why do we perceive them differently? What is the underlying mathematics that tells us how to actively sample and reconstruct a real-world scene? I proposed the new theory of compressive sensing as the model. The reconstruction error can explain many well-known lightness illusions, e.g., the dilemma between contrast and assimilation. (read more...)
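
For readers unfamiliar with compressive sensing, the following minimal sketch (a generic L1 reconstruction, not the perceptual model from the paper) shows the basic mechanism: a sparse "scene" is recovered from far fewer random measurements than its dimension, and the residual plays the role of the reconstruction error discussed above.

    # Generic compressive-sensing reconstruction (illustrative only).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 8                 # scene size, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sampling matrix
    y = Phi @ x                                      # compressive measurements

    # L1-regularized reconstruction from m << n samples.
    x_hat = Lasso(alpha=0.001, fit_intercept=False, max_iter=50000).fit(Phi, y).coef_
    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))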

B.2. Action Key Pose Extraction for Human-Robot Interaction

 

Not all poses are created equal. We model key poses as discontinuities in the second-order derivatives of the latent variables in a reduced visual space, which we obtain using Gaussian Process Dynamical Models (GPDM). Experiments demonstrate that the extracted key poses facilitate human action analysis and significantly improve the action recognition rate by discarding uncharacteristic poses. (read more...)
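
A minimal sketch of the selection step, assuming a latent trajectory has already been learned (the GPDM fit itself is omitted, and the threshold is an illustrative choice): frames where the second-order difference of the latent variables spikes are flagged as key poses.

    # Key-pose selection from a latent trajectory (illustrative only).
    import numpy as np

    def key_pose_indices(Z, rel_thresh=0.5):
        """Z: (T, d) latent trajectory. Flag frames where the second-order
        difference (a discrete acceleration) spikes."""
        acc = np.linalg.norm(np.diff(Z, n=2, axis=0), axis=1)   # length T - 2
        return np.where(acc > rel_thresh * acc.max())[0] + 1    # re-center on frames

    # Toy latent path: smooth line with an abrupt bend halfway through.
    t = np.linspace(0.0, 1.0, 100)
    Z = np.column_stack([t, np.where(t < 0.5, t, 1.0 - t)])
    print("key poses near frame:", key_pose_indices(Z))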

B.3. Semantic Object Recognition for Intelligent Service Robots

Intelligent service robots must search for, identify, and interact with objects given only their names, just as humans do. We implemented a prototype system for the Semantic Robot Vision Challenge on a mobile platform: it parses the large amount of semantic information available online, automatically retrieves image examples of the named objects and learns visual models from them, and then actively segments and locates the objects in a previously unknown environment using a quad-camera system on a pan-tilt unit. (read more...)
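
Schematically, the pipeline can be summarized as below; every helper in this sketch is a hypothetical stand-in (not the system's real components), stubbed so the control flow runs end to end.

    # Toy skeleton of the three-stage pipeline above (illustrative only).
    def fetch_web_images(name):              # 1. mine online image examples
        return [f"{name} example {i}" for i in range(5)]

    def learn_visual_model(examples):        # 2. learn a visual model
        target = examples[0].split()[0]
        return lambda view: target in view   # stand-in "detector"

    def camera_views():                      # 3. actively scan the environment
        return ["corridor", "desk with mug on it", "window"]

    def find_object(name):
        detect = learn_visual_model(fetch_web_images(name))
        for view in camera_views():
            if detect(view):
                return view
        return None

    print(find_object("mug"))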

 

 

 

 

Last updated: Jan 1, 2010
