
dc.contributor.author: Scheirer, Jocelyn
dc.contributor.author: Picard, Rosalind
dc.contributor.author: Cantrell, Aubrey
dc.date.accessioned: 2025-12-03T16:51:41Z
dc.date.available: 2025-12-03T16:51:41Z
dc.date.issued: 2025-10-26
dc.identifier.isbn: 979-8-4007-2052-9
dc.identifier.uri: https://hdl.handle.net/1721.1/164172
dc.description: MRAC '25, October 27–28, 2025, Dublin, Ireland
dc.description.abstract: Biofeedback interfaces traditionally rely on abstract visualizations, tones, or haptics to convey physiological states, but these often lack personal relevance, emotional salience, and engagement. In this paper, we present a novel system that bridges wearable sensing and generative AI to create real-time, personalized animated biofeedback experiences. In our system, users describe emotionally meaningful objects or scenes to a language model, which then generates customized Processing animations. These animations are dynamically driven by electrodermal activity (EDA) signals from a wrist sensor. We co-design and evaluate the system with autistic adults, many of whom have unique “special interests” that are likely to engage them more than a one-size-fits-all visualization. Many of these individuals also have difficulty with interoception, that is, feeling or sensing their own internal physiological state changes. We built this tool to transform passive physiological monitoring into an interactive multimedia experience in which the visual representation of the body is authored by the user. We introduce a prompt-engineered GPT-based interface that streamlines code generation, sensor mapping, and iterative refinement, requiring no prior coding expertise. The technical pipeline we built includes signal filtering, dynamic parameter mapping, and natural-language-based customization, delivering a real-time, visually immersive feedback loop. We report on initial case studies with 12 autistic adults using the system, which highlight both the expressive potential and the individual variability of user responses, reinforcing the need for adaptable multimedia frameworks in health technologies. By merging real-time physiological data with generative animation and natural language interaction, this work expands the creative frontier of personalized affective biofeedback. We also address ethical challenges arising from using AI with physiological sensors.
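The pipeline the abstract describes (signal filtering followed by dynamic parameter mapping into an animation) can be illustrated with a minimal sketch. The record does not include the authors' code; the simulated EDA source, smoothing constant, and mapping ranges below are illustrative assumptions, not values from the paper.

import math
import random

# Illustrative assumptions: none of these constants come from the paper.
ALPHA = 0.1                      # exponential-moving-average smoothing factor (assumed)
EDA_MIN, EDA_MAX = 0.05, 20.0    # plausible skin-conductance bounds in microsiemens (assumed)
SIZE_MIN, SIZE_MAX = 10, 200     # animation parameter range, e.g. a shape's diameter in pixels (assumed)

def read_eda_sample(t: float) -> float:
    """Stand-in for a wrist-sensor reading: slow drift plus noise, in microsiemens."""
    return 5.0 + 2.0 * math.sin(t / 10.0) + random.gauss(0.0, 0.3)

def smooth(previous: float, sample: float, alpha: float = ALPHA) -> float:
    """Simple low-pass (EMA) filter standing in for the pipeline's signal filtering."""
    return alpha * sample + (1.0 - alpha) * previous

def map_to_parameter(eda: float) -> float:
    """Linearly map the filtered EDA value onto an animation parameter."""
    clamped = min(max(eda, EDA_MIN), EDA_MAX)
    frac = (clamped - EDA_MIN) / (EDA_MAX - EDA_MIN)
    return SIZE_MIN + frac * (SIZE_MAX - SIZE_MIN)

if __name__ == "__main__":
    filtered = read_eda_sample(0.0)
    for step in range(1, 50):
        filtered = smooth(filtered, read_eda_sample(float(step)))
        size = map_to_parameter(filtered)
        # In the described system this value would drive a Processing animation
        # each frame; here it is only printed.
        print(f"t={step:02d}  eda~{filtered:5.2f} uS  ->  parameter {size:6.1f}")

In the system described in the abstract, the generated Processing animation would consume a value like this each frame, and the mapping itself would be authored through the natural-language GPT interface rather than hard-coded.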
dc.publisher: ACM | Proceedings of the 3rd International Workshop on Multimodal and Responsible Affective Computing
dc.relation.isversionof: https://doi.org/10.1145/3746270.3760237
dc.rights: Creative Commons Attribution
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Association for Computing Machinery
dc.title: Personalized Animations for Affective Feedback: Generative AI Helps to Visualize Skin Conductance
dc.type: Article
dc.identifier.citation: Jocelyn Scheirer, Rosalind Picard, and Aubrey Cantrell. 2025. Personalized Animations for Affective Feedback: Generative AI Helps to Visualize Skin Conductance. In Proceedings of the 3rd International Workshop on Multimodal and Responsible Affective Computing (MRAC '25). Association for Computing Machinery, New York, NY, USA, 146–151.
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2025-11-01T07:52:46Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2025-11-01T07:52:46Z
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed

