Audio-Video Automatic Speech Recognition: An Example of Improved
Performance through Multimodal Sensor Input
Author: Roland Göcke
Presented by Roland Göcke at the NICTA-HCSNet Multimodal User
Interaction Workshop (MMUI2005), Sydney, Australia, 13-14 September
2005
Abstract
One of the advantages of multimodal HCI technology is the performance
improvement that can be gained over conventional single-modality technology
by employing complementary sensors in different modalities. Such
information is particularly useful in practical, real-world applications
where the application's performance must be robust against all kinds of
noise. An example is the domain of automatic speech recognition (ASR).
Traditionally, ASR systems only use acoustic information from the audio
modality. In the presence of acoustic noise, the performance drops quickly.
However, it has been shown that incorporating additional visual speech
information from the video modality improves the performance
significantly, so that AV ASR systems can be employed in application areas
where audio-only ASR systems would fail, thus opening new application
areas for ASR technology. In this paper, a non-intrusive (no artificial
markers), real-time 3D lip tracking system is presented, as well as its
application to AV ASR. Co-inertia analysis, a multivariate statistical
method that offers improved numerical stability over other multivariate
analyses even for small sample sizes, is also presented.
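Co-inertia analysis finds pairs of axes, one per modality, that maximize the covariance between the projected audio and video features; in practice this reduces to a singular value decomposition of the cross-covariance matrix between the two centered feature sets. The following is a minimal numerical sketch of that idea, not the author's implementation; the feature dimensions and function name are illustrative.

```python
import numpy as np

def coinertia(X, Y, n_axes=2):
    """Sketch of co-inertia analysis.

    X: (n_samples, p) audio feature matrix.
    Y: (n_samples, q) video feature matrix (same samples).
    Returns the projections of X and Y onto the first n_axes
    co-inertia axes, plus the corresponding co-inertia values.
    """
    # Center each modality independently.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Cross-covariance between the two modalities.
    C = Xc.T @ Yc / (X.shape[0] - 1)
    # The SVD of the cross-covariance yields paired axes whose
    # projected covariance (the singular value) is maximal.
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    A = U[:, :n_axes]      # axes in the audio feature space
    B = Vt[:n_axes].T      # axes in the video feature space
    return Xc @ A, Yc @ B, s[:n_axes]
```

Because the solution rests on an SVD of a single cross-covariance matrix, it avoids inverting within-modality covariance matrices (as canonical correlation analysis must), which is where the better behavior for small sample sizes comes from.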