
dc.contributor.author: Suguitan, Michael
dc.contributor.author: DePalma, Nicholas
dc.contributor.author: Hoffman, Guy
dc.contributor.author: Hodgins, Jessica
dc.date.accessioned: 2023-11-06T19:03:10Z
dc.date.available: 2023-11-06T19:03:10Z
dc.identifier.issn: 2573-9522
dc.identifier.uri: https://hdl.handle.net/1721.1/152915
dc.description.abstract: In this work, we present a method for personalizing human-robot interaction by using emotive facial expressions to generate affective robot movements. Movement is an important medium for robots to communicate affective states, but the expertise and time required to craft new robot movements promote a reliance on fixed, preprogrammed behaviors. Enabling robots to respond to multimodal user input with newly generated movements could stave off staleness of interaction and convey a deeper degree of affective understanding than current retrieval-based methods. We use autoencoder neural networks to compress robot movement data and facial expression images into a shared latent embedding space. Then, we use a reconstruction loss to generate movements from these embeddings and a triplet loss to align the embeddings by emotion class rather than by data modality. To subjectively evaluate our method, we conducted a user survey and found that generated happy and sad movements could be matched to their source face images. However, angry movements were most often mismatched to sad images. This multimodal, data-driven generative method can expand an interactive agent's behavior library and could be adopted for other multimodal affective applications. [A minimal illustrative sketch of this training setup appears after the record below.]
dc.publisher: ACM
dc.relation.isversionof: https://doi.org/10.1145/3623386
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Association for Computing Machinery
dc.title: Face2Gesture: Translating Facial Expressions Into Robot Movements Through Shared Latent Space Neural Networks
dc.type: Article
dc.identifier.citation: Suguitan, Michael, DePalma, Nicholas, Hoffman, Guy, and Hodgins, Jessica. "Face2Gesture: Translating Facial Expressions Into Robot Movements Through Shared Latent Space Neural Networks." ACM Transactions on Human-Robot Interaction.
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory
dc.relation.journal: ACM Transactions on Human-Robot Interaction
dc.identifier.mitlicense: PUBLISHER_CC
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2023-11-01T07:58:12Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2023-11-01T07:58:12Z
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed
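
The abstract above describes two modality-specific autoencoders whose encoders map into a single shared latent space, trained with a per-modality reconstruction loss plus a triplet loss that clusters embeddings by emotion class across modalities. Below is a minimal PyTorch sketch of that training setup, not the authors' implementation: the paper's actual architectures, input dimensions, loss weights, and triplet-sampling scheme are not given in this record, so every name and size that follows (mlp, LATENT_DIM, training_step, the 48x48 face and 100x4 movement shapes) is an illustrative assumption.

import torch
import torch.nn as nn

LATENT_DIM = 32  # assumed size of the shared embedding space

def mlp(sizes):
    """Small fully connected stack reused for each encoder/decoder (assumed)."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

# One autoencoder per modality; both encoders target the same latent space.
face_enc = mlp([48 * 48, 256, LATENT_DIM])   # flattened face image -> z (assumed shape)
face_dec = mlp([LATENT_DIM, 256, 48 * 48])   # z -> reconstructed face image
move_enc = mlp([100 * 4, 256, LATENT_DIM])   # flattened movement trajectory -> z (assumed shape)
move_dec = mlp([LATENT_DIM, 256, 100 * 4])   # z -> reconstructed movement

recon = nn.MSELoss()
# Triplet loss pulls same-emotion embeddings together across modalities
# and pushes different-emotion embeddings apart (margin is an assumption).
triplet = nn.TripletMarginLoss(margin=1.0)

opt = torch.optim.Adam(
    list(face_enc.parameters()) + list(face_dec.parameters())
    + list(move_enc.parameters()) + list(move_dec.parameters()),
    lr=1e-3,
)

def training_step(face, move_pos, move_neg):
    """face and move_pos share an emotion label; move_neg has a different one."""
    z_face = face_enc(face)
    z_pos = move_enc(move_pos)
    z_neg = move_enc(move_neg)
    loss = (
        recon(face_dec(z_face), face)        # reconstruct each modality
        + recon(move_dec(z_pos), move_pos)
        + triplet(z_face, z_pos, z_neg)      # align latents by emotion class
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Cross-modal generation, as described in the abstract: encode a face image
# into the shared space, then decode it as a robot movement.
# movement = move_dec(face_enc(face_image))

Because the triplet anchor and positive are drawn from different modalities but the same emotion class, a face and a movement with matching labels land near each other in the latent space even though the two autoencoders never share weights; that proximity is what lets a face embedding be decoded directly by the movement decoder.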

