Multi-view latent variable discriminative models for action recognition
Author(s)
Song, Yale; Davis, Randall; Morency, Louis-Philippe
Terms of use
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
Many human action recognition tasks involve data that can be factorized into multiple views, such as body postures and hand shapes. These views often interact with each other over time, providing important cues for understanding the action. We present multi-view latent variable discriminative models that jointly learn both view-shared and view-specific sub-structures to capture the interaction between views. Knowledge about the underlying structure of the data is formulated as a multi-chain structured latent conditional model that explicitly learns the interaction between multiple views using disjoint sets of hidden variables in a discriminative manner. The chains are tied using a predetermined topology that repeats over time. We present three topologies (linked, coupled, and linked-coupled) that differ in the type of interaction between views that they model. We evaluate our approach on both segmented and unsegmented human action recognition tasks, using the ArmGesture, NATOPS, and ArmGesture-Continuous datasets. Experimental results show that our approach outperforms previous state-of-the-art action recognition models.
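As a rough sketch only (not the authors' exact formulation), a multi-chain latent conditional model over V views, with one hidden chain h^v = (h_1^v, ..., h_T^v) per view, can be written in the general latent-CRF form below. The feature functions phi, psi, omega and the parameter blocks theta_o, theta_p, theta_c are illustrative placeholders, and the set E of cross-view edges is determined by the chosen topology (e.g., same-time links between views for a "linked" structure, time-shifted links for a "coupled" structure).

    P(y \mid \mathbf{x}; \theta)
      = \frac{1}{Z(\mathbf{x}; \theta)}
        \sum_{\mathbf{h}^1, \dots, \mathbf{h}^V}
        \exp\Big(
            \sum_{v=1}^{V} \sum_{t=1}^{T} \theta_o^\top \phi\big(y, h_t^v, \mathbf{x}_t^v\big)
          + \sum_{v=1}^{V} \sum_{t=1}^{T-1} \theta_p^\top \psi\big(y, h_t^v, h_{t+1}^v\big)
          + \sum_{(v,t,v',s) \in E} \theta_c^\top \omega\big(y, h_t^v, h_s^{v'}\big)
        \Big)

Under this reading, the per-view terms capture view-specific sub-structure, while the cross-view terms over E capture the view-shared interaction, with all parameters learned discriminatively from the conditional model.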
Date issued
2012-06
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Y. Song, L.-P. Morency, and R. Davis. “Multi-View Latent Variable Discriminative Models for Action Recognition.” 2012 IEEE Conference on Computer Vision and Pattern Recognition (2012). doi:10.1109/CVPR.2012.6247918.
Version: Author's final manuscript
ISBN
978-1-4673-1228-8
978-1-4673-1226-4
978-1-4673-1227-1