Show simple item record

dc.contributor.author: Song, Yale
dc.contributor.author: Morency, Louis-Philippe
dc.contributor.author: Davis, Randall
dc.date.accessioned: 2014-04-11T18:49:44Z
dc.date.available: 2014-04-11T18:49:44Z
dc.date.issued: 2013-12
dc.identifier.isbn: 9781450321297
dc.identifier.uri: http://hdl.handle.net/1721.1/86124
dc.description.abstract: Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty is capturing microexpressions, i.e., short, involuntary expressions that last for only a fraction of a second: at a micro-temporal scale, there are many other subtle face and body movements that do not convey semantically meaningful information. We present a novel approach to this problem by exploiting the sparsity of the frequent micro-temporal motion patterns. Local space-time features are extracted over the face and body region for a very short time period, e.g., a few milliseconds. A codebook of microexpressions is learned from the data and used to encode the features in a sparse manner. This allows us to obtain a representation that captures the most salient motion patterns of the face and body at a micro-temporal scale. Experiments performed on the AVEC 2012 dataset show our approach achieving the best published performance on the arousal dimension based solely on visual features. We also report experimental results on audio-visual emotion recognition, comparing early and late data fusion techniques.
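The abstract outlines a pipeline of extracting local space-time descriptors, learning a codebook from them, and sparse-coding each descriptor against that codebook to obtain a clip-level representation. The sketch below illustrates that general idea with scikit-learn's dictionary-learning tools; the descriptor dimensionality, codebook size, sparsity penalty, and max-pooling step are illustrative assumptions, not the settings used in the paper.

    # Minimal sketch of the codebook + sparse-coding idea described in the abstract.
    # All dimensions, parameters, and the pooling choice are hypothetical, not the paper's.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

    rng = np.random.default_rng(0)

    # Stand-in for local space-time descriptors extracted over the face/body region
    # from very short temporal windows (one row per descriptor).
    train_descriptors = rng.standard_normal((2000, 162))

    # Learn an overcomplete codebook ("microexpression" atoms) with an L1 sparsity penalty.
    learner = MiniBatchDictionaryLearning(n_components=256, alpha=1.0, random_state=0)
    learner.fit(train_descriptors)
    codebook = learner.components_                     # shape: (256, 162)

    # Sparse-code the descriptors of one clip against the learned codebook,
    # then max-pool the absolute codes into a single fixed-length clip vector.
    coder = SparseCoder(dictionary=codebook,
                        transform_algorithm="lasso_lars", transform_alpha=1.0)
    clip_descriptors = rng.standard_normal((300, 162))
    codes = coder.transform(clip_descriptors)          # shape: (300, 256), mostly zeros
    clip_vector = np.abs(codes).max(axis=0)            # fixed-length clip representation

The resulting clip vector could then feed a standard classifier or regressor for the emotion dimensions; the actual feature extraction, encoding, and fusion choices are described in the paper itself.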
dc.description.sponsorship: United States. Office of Naval Research (N000140910625)
dc.description.sponsorship: National Science Foundation (U.S.) (IIS-1018055)
dc.description.sponsorship: National Science Foundation (U.S.) (IIS-1118018)
dc.description.sponsorship: United States. Army Research, Development, and Engineering Command
dc.language.iso: en_US
dc.publisher: Association for Computing Machinery (ACM)
dc.relation.isversionof: http://dx.doi.org/10.1145/2522848.2522851
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: MIT web domain
dc.title: Learning a sparse codebook of facial and body microexpressions for emotion recognition
dc.type: Article
dc.identifier.citation: Yale Song, Louis-Philippe Morency, and Randall Davis. 2013. Learning a sparse codebook of facial and body microexpressions for emotion recognition. In Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI '13). ACM, New York, NY, USA, 237-244.
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.mitauthor: Song, Yale
dc.contributor.mitauthor: Davis, Randall
dc.relation.journal: Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI '13)
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dspace.orderedauthors: Song, Yale; Morency, Louis-Philippe; Davis, Randall
dc.identifier.orcid: https://orcid.org/0000-0001-5232-7281
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete

