Learning Active Appearance Models from Image Sequences
Authors: Jason Saragih and Roland Göcke
Presented by Jason Saragih at the HCSNet Workshop on the Use of
Vision in HCI (VisHCI 2006), Canberra, Australia, 1-3 November 2006
Abstract
One of the major drawbacks of the Active Appearance Model (AAM) is
that it requires a training set of pseudo-dense correspondences. Most
methods for automatic correspondence finding involve a groupwise
model-building process that optimises over all images in the
training sequence simultaneously. In this work, we pose the problem
of correspondence finding as an adaptive template tracking process.
We investigate the utility of this approach on an audio-visual
(AV) speech database and show that it can give reasonable results.
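To illustrate the idea stated in the abstract, below is a minimal sketch of correspondence finding posed as adaptive template tracking: a local patch around each landmark is matched in the next frame by normalised cross-correlation, and the stored template is blended with the newly matched patch so it adapts to appearance change over the sequence. This is an assumed illustration, not the authors' implementation; the patch size, search radius, adaptation rate, and helper names (crop, track_landmarks) are illustrative choices.

# A minimal sketch (assumed, not the authors' method) of posing correspondence
# finding as adaptive template tracking. Each landmark's local patch is matched
# in the next frame with normalised cross-correlation, and the template is
# blended with the matched patch so it adapts over time. PATCH, SEARCH and
# ALPHA are illustrative; landmarks are assumed to stay well inside the image
# so patch cropping never hits a border.
import cv2
import numpy as np

PATCH = 15    # half-size of the square template around each landmark
SEARCH = 10   # half-size of the search region in the next frame
ALPHA = 0.1   # adaptation rate for the template update

def crop(img, x, y, r):
    """Return the (2r+1) x (2r+1) patch centred on (x, y)."""
    return img[y - r:y + r + 1, x - r:x + r + 1]

def track_landmarks(frames, init_points):
    """Track landmarks through a grey-scale image sequence.

    frames      : list of H x W uint8 arrays
    init_points : (N, 2) array of integer (x, y) landmarks in frames[0]
    returns     : list of (N, 2) arrays, one per frame (the correspondences)
    """
    points = np.asarray(init_points, dtype=int).copy()
    templates = [crop(frames[0], x, y, PATCH).astype(np.float32)
                 for x, y in points]
    correspondences = [points.copy()]

    for frame in frames[1:]:
        for i, (x, y) in enumerate(points):
            # Search a window around the previous position for the best match.
            window = crop(frame, x, y, PATCH + SEARCH).astype(np.float32)
            score = cv2.matchTemplate(window, templates[i],
                                      cv2.TM_CCOEFF_NORMED)
            _, _, _, best = cv2.minMaxLoc(score)
            dx, dy = best[0] - SEARCH, best[1] - SEARCH
            points[i] = (x + dx, y + dy)
            # Adaptive update: blend the matched patch into the template.
            matched = crop(frame, x + dx, y + dy, PATCH).astype(np.float32)
            templates[i] = (1 - ALPHA) * templates[i] + ALPHA * matched
        correspondences.append(points.copy())

    return correspondences

The per-frame landmark sets returned by such a tracker can serve as the pseudo-dense correspondences needed to build an AAM; in practice the tracker would also need safeguards against drift, which this sketch omits.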
BibTeX
@INPROCEEDINGS{SaragihGoecke2006,
  AUTHOR    = {J. Saragih and R. Goecke},
  TITLE     = {{Learning Active Appearance Models from Image Sequences}},
  BOOKTITLE = {{Proceedings of the HCSNet Workshop on the Use of Vision in HCI VisHCI2006}},
  PUBLISHER = {ACS},
  ADDRESS   = {Canberra, Australia},
  SERIES    = {Conferences in Research and Practice in Information Technology},
  VOLUME    = {56},
  PAGES     = {51--60},
  MONTH     = nov,
  YEAR      = {2006}}