
dc.contributor.author: Chen, H
dc.contributor.author: Zhang, Y
dc.contributor.author: Weninger, F
dc.contributor.author: Picard, Rosalind W.
dc.contributor.author: Breazeal, C
dc.contributor.author: Park, HW
dc.date.accessioned: 2021-11-02T17:28:13Z
dc.date.available: 2021-11-02T17:28:13Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/1721.1/137131
dc.description.abstract: © 2020 Owner/Author. Automatic speech-based affect recognition of individuals in dyadic conversation is a challenging task, in part because of its heavy reliance on manual pre-processing. Traditional approaches frequently require hand-crafted speech features and segmentation of speaker turns. In this work, we design end-to-end deep learning methods to recognize each person's affective expression in an audio stream with two speakers, automatically discovering features and time regions relevant to the target speaker's affect. We integrate a local attention mechanism into the end-to-end architecture and compare the performance of three attention implementations - one mean pooling and two weighted pooling methods. Our results show that the proposed weighted-pooling attention solutions are able to learn to focus on the regions containing the target speaker's affective information and successfully extract the individual's valence and arousal intensity. Here we introduce and use a "dyadic affect in multimodal interaction - parent to child" (DAMI-P2C) dataset collected in a study of 34 families, where a parent and a child (3-7 years old) engage in reading storybooks together. In contrast to existing public datasets for affect recognition, each instance for both speakers in the DAMI-P2C dataset is annotated for the perceived affect by three labelers. To encourage more research on the challenging task of multi-speaker affect sensing, we make the annotated DAMI-P2C dataset publicly available, including acoustic features of the dyads' raw audio, affect annotations, and a diverse set of developmental, social, and demographic profiles of each dyad. [en_US]
dc.language.iso: en
dc.publisher: ACM [en_US]
dc.relation.isversionof: 10.1145/3382507.3418842 [en_US]
dc.rights: Creative Commons Attribution 4.0 International license [en_US]
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ [en_US]
dc.source: ACM [en_US]
dc.title: Dyadic Speech-based Affect Recognition using DAMI-P2C Parent-child Multimodal Interaction Dataset [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Chen, H, Zhang, Y, Weninger, F, Picard, R, Breazeal, C et al. 2020. "Dyadic Speech-based Affect Recognition using DAMI-P2C Parent-child Multimodal Interaction Dataset." ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction.
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory
dc.relation.journal: ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2021-06-24T16:24:10Z
dspace.orderedauthors: Chen, H; Zhang, Y; Weninger, F; Picard, R; Breazeal, C; Park, HW [en_US]
dspace.date.submission: 2021-06-24T16:24:12Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed [en_US]
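The abstract contrasts mean pooling with weighted (attention) pooling over frame-level audio features. The paper's actual architecture is not reproduced in this record; the following is only a minimal NumPy sketch of the two pooling strategies, with invented shapes and parameter names (`frames`, `w`, `b` are illustrative, not the authors' implementation):

```python
import numpy as np

def mean_pooling(frames):
    """Baseline: average frame-level features uniformly over time."""
    return frames.mean(axis=0)

def weighted_pooling(frames, w, b=0.0):
    """Attention-style weighted pooling (illustrative sketch): a scoring
    vector w assigns each frame a relevance score; a softmax turns the
    scores into weights, so frames carrying the target speaker's
    affective cues can dominate the pooled utterance representation."""
    scores = frames @ w + b                         # (T,) one score per frame
    scores = scores - scores.max()                  # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax attention weights
    return alpha @ frames                           # (D,) weighted sum of frames

# Toy example: 5 frames of 3-dimensional features
rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 3))
w = rng.normal(size=3)
print(mean_pooling(frames).shape)        # (3,)
print(weighted_pooling(frames, w).shape) # (3,)
```

In the trained model, `w` and `b` would be learned end-to-end together with the feature extractor, which is how the weighted-pooling variants learn to focus on time regions containing the target speaker's affect.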

