Show simple item record

dc.contributor.author: Song, Yale
dc.contributor.author: Morency, Louis-Philippe
dc.contributor.author: Davis, Randall
dc.date.accessioned: 2014-04-11T14:20:52Z
dc.date.available: 2014-04-11T14:20:52Z
dc.date.issued: 2012-10
dc.identifier.isbn: 9781450314671
dc.identifier.uri: http://hdl.handle.net/1721.1/86099
dc.description.abstract: Multimodal human behavior analysis is a challenging task due to the presence of complex nonlinear correlations and interactions across modalities. We present a novel approach to this problem based on Kernel Canonical Correlation Analysis (KCCA) and Multi-view Hidden Conditional Random Fields (MV-HCRF). Our approach uses a nonlinear kernel to map multimodal data to a high-dimensional feature space and finds a new projection of the data that maximizes the correlation across modalities. We use a multi-chain structured graphical model with disjoint sets of latent variables, one set per modality, to jointly learn both view-shared and view-specific sub-structures of the projected data, capturing interactions across modalities explicitly. We evaluate our approach on a task of agreement and disagreement recognition from nonverbal audio-visual cues using the Canal 9 dataset. Experimental results show that KCCA makes it easier to capture nonlinear hidden dynamics, and that MV-HCRF helps learn interactions across modalities.
dc.description.sponsorship: United States. Office of Naval Research (Grant N000140910625)
dc.description.sponsorship: National Science Foundation (U.S.) (Grant IIS-1118018)
dc.description.sponsorship: National Science Foundation (U.S.) (Grant IIS-1018055)
dc.description.sponsorship: United States. Army Research, Development, and Engineering Command
dc.language.iso: en_US
dc.relation.isversionof: http://dx.doi.org/10.1145/2388676.2388684
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: MIT web domain
dc.title: Multimodal human behavior analysis: Learning correlation and interaction across modalities
dc.type: Article
dc.identifier.citation: Yale Song, Louis-Philippe Morency, and Randall Davis. 2012. Multimodal human behavior analysis: learning correlation and interaction across modalities. In Proceedings of the 14th ACM international conference on Multimodal interaction (ICMI '12). ACM, New York, NY, USA, 27-30.
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.mitauthor: Song, Yale
dc.contributor.mitauthor: Davis, Randall
dc.relation.journal: Proceedings of the 14th ACM international conference on Multimodal interaction (ICMI '12)
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dspace.orderedauthors: Song, Yale; Morency, Louis-Philippe; Davis, Randall
dc.identifier.orcid: https://orcid.org/0000-0001-5232-7281
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete
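The abstract above describes using Kernel Canonical Correlation Analysis to project two modalities into a space where their correlation is maximized. As an illustration of that general technique (not the paper's implementation — the RBF kernel, the regularization value, and the toy two-view data below are all assumptions), a minimal regularized KCCA sketch in NumPy:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def center_kernel(K):
    # Double-center the kernel matrix (zero-mean in feature space)
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca(X, Y, gamma=1.0, reg=0.1, n_components=2):
    """Regularized kernel CCA between two views.

    Solves (Kx + reg*I)^-1 Ky (Ky + reg*I)^-1 Kx a = rho^2 a
    and returns the top canonical correlations and dual coefficients.
    """
    n = X.shape[0]
    Kx = center_kernel(rbf_kernel(X, gamma))
    Ky = center_kernel(rbf_kernel(Y, gamma))
    I = np.eye(n)
    M = np.linalg.solve(Kx + reg * I, Ky) @ np.linalg.solve(Ky + reg * I, Kx)
    eigvals, eigvecs = np.linalg.eig(M)
    order = np.argsort(-eigvals.real)[:n_components]
    corrs = np.sqrt(np.clip(eigvals.real[order], 0.0, 1.0))
    return corrs, eigvecs.real[:, order]

# Toy example: two views driven by the same nonlinear latent signal t
rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, size=(200, 1))
X = np.hstack([np.sin(3 * t), t]) + 0.05 * rng.standard_normal((200, 2))
Y = np.hstack([t ** 2, np.cos(3 * t)]) + 0.05 * rng.standard_normal((200, 2))
corrs, alphas = kcca(X, Y)
print(corrs)  # top canonical correlations between the two views
```

The regularization term keeps the centered (rank-deficient) kernel matrices invertible and prevents the trivial perfect-correlation solution that unregularized KCCA admits.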

