Program

The papers and presentations are now available in PDF format! See below.

The Program Schedule is now available! (PDF)
The workshop will take place over two and a half days, providing
ample time to discuss recent ideas and results as well as time for
networking. The workshop will start at lunchtime on Wednesday, 1
November, and finish in the afternoon of Friday, 3 November, thus
allowing sufficient time for participants to travel to and from
Canberra.
The workshop will be divided into several sessions that will run
in a single-track fashion. The sessions will include a mix of
keynote presentations, presentations of accepted papers, and
discussion time for possible future collaborations. The program
includes keynote addresses by Iain Matthews (Carnegie Mellon
University, Pittsburgh), Gerasimos Potamianos (IBM TJ Watson
Research Center, Yorktown), and David Powers (Flinders University,
Adelaide). We hope that participants will find the discussion and
networking time particularly useful, and we would be glad to set
time aside for participants who want to discuss joint ARC grant
applications.
Papers and abstracts are published in:
R. Goecke, A. Robles-Kelly, and T. Caelli (Eds.),
Proceedings of the HCSNet Workshop on the Use of Vision in
Human-Computer Interaction (VisHCI 2006), Canberra, Australia,
CRPIT, Vol. 56, ACS.
Accepted Full Papers (Peer Reviewed)
- Face Refinement through a Gradient Descent Alignment
Approach
Simon Lucey, Iain Matthews
Paper
(PDF, pp.43-49)
- Patch-Based Representation of Visual Speech
Patrick Lucey, Sridha Sridharan
Paper
(PDF, pp.79-85),
Presentation (PDF)
- Vowel recognition of English and German language using
Facial movement (SEMG) for Speech control based HCI
Sridhar Poosapadi Arjunan, Hans Weghorn, Dinesh Kant
Kumar, Wai Chee Yau
Paper
(PDF, pp.13-18),
Presentation (PDF)
- Audio-Visual Speaker Verification using Continuous Fused
HMMs
David Dean, Sridha Sridharan, Tim Wark
Paper
(PDF, pp.87-92),
Presentation (PDF)
- Using Optical Flow for Step Size Initialisation in Hand
Tracking by Stochastic Optimisation
Desmond Chik
Paper
(PDF, pp.61-66),
Presentation (PDF)
- Voiceless Speech Recognition Using Dynamic Visual Speech
Features
Wai Chee Yau, Dinesh Kant Kumar, Sridhar Poosapadi
Arjunan
Paper
(PDF, pp.93-101),
Presentation (PDF)
- Image-Based Multi-view Scene Analysis using 'Conexels'
Josep R. Casas, Jordi Salvador
Paper
(PDF, pp.19-28),
Presentation (PDF)
- Hand gestures for HCI using ICA of EMG
Ganesh Naik, Dinesh Kant Kumar, Vijay Pal Singh,
Marimuthu Palaniswami
Paper
(PDF, pp.67-72),
Presentation (PDF)
- Observer Annotation of Affective Display and Evaluation
of Expressivity: Face vs. Face-and-Body
Hatice Gunes, Massimo Piccardi
Paper
(PDF, pp.35-42),
Presentation (PDF)
- Nuisance Free Recognition of Hand Postures Over a
Tabletop Display
João Carreira, Paulo Peixoto
Paper
(PDF, pp.73-78),
Presentation (PDF)
- Learning Active Appearance Models from Image Sequences
Jason Saragih, Roland Goecke
Paper
(PDF, pp.51-60),
Presentation (PDF)
- Image Feature Evaluation for Contents-based Image
Retrieval
Adam Kuffner, Antonio Robles-Kelly
Paper
(PDF, pp.29-33)
The acceptance rate was 55%.
Abstracts
- Fast and Accurate Face Tracking Using AAMs
(Keynote)
Iain Matthews
Paper
(PDF, p.3)
- Audio-Visual Technologies for Lecture and Meeting
Analysis inside Smart Rooms (Keynote)
Gerasimos Potamianos
Paper
(PDF, p.7),
Presentation (PDF)
- Audio-Visual Speech Processing: Progress and
Challenges (Keynote)
Gerasimos Potamianos
Paper
(PDF, p.5),
Presentation (PDF)
- Vision in HCI: Embodiment, Multimodality and Information
Capacity (Keynote)
David Powers
Paper
(PDF, pp.9-10)
- Video to the Rescue
Girija Chetty, Michael Wagner
Paper
(PDF, pp.107-108),
Presentation (PDF)
- Emotions in HCI - An Affective E-Learning System
Robin Kaiser, Karina Oertel
Paper
(PDF, pp.105-106),
Presentation (PDF)
Demonstration Sessions
On Thursday, 2 November 2006, Seeing
Machines will give a demonstration of their award-winning
faceLab™ software, which provides a robust and flexible
non-contact vision-based system for tracking and acquisition of
human facial features. On Friday, 3 November 2006, workshop
participants will have the opportunity to go on a tour of the
Seeing Machines facilities and to watch demonstrations of their
other products and innovations.
In addition, the VISTA
program of the NICTA Canberra Research Lab
will demonstrate camera technology that can 'see' beyond the
visible spectrum (near-infrared and far-infrared cameras) as part
of the Spectral Imaging and Source Mapping project
and driver assistance technology that can detect road signs and
track pedestrians crossing the street as part of the
Smart Cars
project.