Modeling Empathic Similarity in Personal Narratives
Author(s)
Shen, Jocelyn
Advisor
Breazeal, Cynthia
Abstract
The most meaningful connections between people are often formed through the expression of shared vulnerability and emotional experiences. Despite the many ways we are able to connect through technology-mediated platforms today, loneliness, apathy, and mental distress remain pervasive around the world. In this thesis, we aim to use NLP systems to humanize personal experiences by identifying similarity in personal narratives based on empathic resonance rather than raw semantic or lexical similarity.
We present a novel task for the retrieval of empathically similar stories, as well as the first evaluation benchmark on this task. We operationalize empathic similarity in personal stories using insights from social psychology and narratology, and introduce EmpathicStories, a crowdsourced dataset of emotional personal experiences annotated with features based on our framework and empathic similarity scores between pairs of stories. From our dataset, we provide insights into what features contribute to emotionally resonant stories.
We then compare prompting and fine-tuning large language models (LLMs) for empathic similarity understanding and empathy reasoning summarization. Our experiments show that a model fine-tuned on EmpathicStories outperforms state-of-the-art baselines on both similarity and retrieval metrics. We additionally conduct a human evaluation to assess the effect our model has on retrieving stories that users empathize with, comparing its performance against naive semantic-similarity-based retrieval and ChatGPT-generated stories. We find that participants empathized significantly more with stories retrieved by our model than with those retrieved by a standard, off-the-shelf sentence transformer. In addition, our user studies show that participants reported they would empathize far less with AI-written stories than with human-written ones. Our work sheds light on how LLMs can be used to reason about the interplay of emotions between narrators, with strong implications for a wide range of other recommendation, generation, and dialogue tasks. In doing so, we demonstrate the potential for social-emotional reasoning in NLP systems to foster prosociality, human connection, and empathy between people.
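The naive semantic-similarity baseline contrasted above ranks candidate stories by the cosine similarity between embedding vectors of the query story and each story in the corpus. The following is a minimal sketch of that ranking step only; the embeddings and story IDs here are toy values (a real system would obtain them from a sentence encoder, which this sketch does not include):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_emb, corpus):
    # corpus: list of (story_id, embedding) pairs.
    # Returns story IDs ranked by similarity to the query, most similar first.
    ranked = sorted(corpus, key=lambda pair: cosine(query_emb, pair[1]), reverse=True)
    return [story_id for story_id, _ in ranked]

# Toy 2-D embeddings (hypothetical, for illustration only).
corpus = [("A", [1.0, 0.0]), ("B", [0.6, 0.8]), ("C", [0.0, 1.0])]
print(retrieve([0.7, 0.7], corpus))  # → ['B', 'A', 'C']
```

A purely semantic ranking like this surfaces stories with similar wording or topics; the thesis's fine-tuned model instead scores pairs by empathic resonance, which is why the two retrieval strategies can return different stories for the same query.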
Date issued
2023-06
Department
Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Publisher
Massachusetts Institute of Technology