AMAI: Adaptive music for affect improvement
Author(s)
Su, David; Picard, Rosalind W.; Liu, Yan
Publisher with Creative Commons License
Creative Commons Attribution
Terms of use
Abstract
Copyright: © 2018 David Su et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License 3.0 Unported.

This paper introduces Adaptive Music for Affect Improvement (AMAI), a music generation and playback system whose goal is to steer the listener towards a state of more positive affect. AMAI utilizes techniques from game music in order to adjust elements of the music being heard; such adjustments are made adaptively in response to the valence levels of the listener as measured via facial expression and emotion detection. A user study involving AMAI was conducted, with N=19 participants across three groups, one for each strategy of Discharge, Diversion, and Discharge → Diversion. Significant differences in valence levels between music-related stages of the study were found between the three groups, with Discharge → Diversion exhibiting the greatest increase in valence, followed by Diversion and finally Discharge. Significant differences in positive affect between groups were also found in one before-music and after-music pair of self-reported affect surveys, with Discharge → Diversion exhibiting the greatest decrease in positive affect, followed by Diversion and finally Discharge; the resulting differences in facial expression valence and self-reported affect offer contrasting conclusions.
Date issued
2018
Department
Massachusetts Institute of Technology. Media Laboratory; Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Citation
Picard, Rosalind, Su, David and Liu, Yan. 2018. "AMAI: Adaptive music for affect improvement."
Version: Final published version