DSpace@MIT
Interfaces and Models for Improved Understanding of Real-World Communicative and Affective Nonverbal Vocalizations by Minimally Speaking Individuals

Author(s)
Narain, Jaya
Thesis PDF (5.148 MB)
Advisor
Maes, Pattie
Terms of use
In Copyright - Educational Use Permitted Copyright MIT http://rightsstatements.org/page/InC-EDU/1.0/
Abstract
This work focuses on a sub-group (denoted by mv*) of non- and minimally speaking individuals who have fewer than 10 words or word approximations and limited expressive language through speech and writing. In the United States alone, this group comprises over one million individuals. Their nonverbal vocalizations (i.e., vocalizations that do not have typical verbal content) often have self-consistent phonetic content and vary in tone, pitch, and duration depending on the individual’s emotional state or intended communication. While these vocalizations contain important affective and communicative information and are understood by close family and friends, they are often poorly understood by those who don't know the communicator well. Improved understanding of these nonverbal vocalizations could contribute to the development of technology to augment communication. This thesis aims to help the community at large better understand and communicate with mv* individuals by utilizing families’ unique understanding of nonverbal vocalizations. For this work, families provided personalized labels for vocalizations, which were then used to compile a novel dataset and train machine learning models. The thesis contributes (1) the design and evaluation of a novel data collection protocol for real-world audio with personalized in-the-moment labels, (2) a new dataset, ReCANVo, of over 7,000 nonverbal vocalizations from eight mv* communicators, collected longitudinally in real-world settings, (3) machine learning evaluation strategies and algorithms suitable for messy, real-world data that can classify vocalizations from mv* individuals with F1 scores above chance, and (4) the design of a novel communication interface, based on interviews, surveys, and data analyses.
The presented dataset ReCANVo is the only dataset of nonverbal vocalizations from mv* individuals, the largest dataset of nonverbal vocalizations, and one of the first datasets capturing real-world emotions across settings. The presented data analyses show, for the first time, that it is possible for models to classify nonverbal vocalizations by mv* individuals by function using audio alone. While this work was motivated by impact for a small, specialized population, the results can inform the design of real-world data collection and modeling approaches more broadly.
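The abstract reports classification of vocalizations "by function using audio alone" with F1 scores above chance. A minimal sketch of that evaluation comparison follows; the function labels and the uniform-random chance baseline here are illustrative assumptions, not the thesis's actual label set or evaluation pipeline.

```python
import random


def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: per-class F1, averaged with equal class weight."""
    f1s = []
    for lab in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lab and p == lab)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)


# Hypothetical vocalization-function labels for one communicator.
LABELS = ["delighted", "frustrated", "request", "self-talk"]


def chance_baseline_f1(y_true, trials=1000, seed=0):
    """Estimate the macro F1 a uniform-random guesser achieves on y_true."""
    rng = random.Random(seed)
    scores = []
    for _ in range(trials):
        y_rand = [rng.choice(LABELS) for _ in y_true]
        scores.append(macro_f1(y_true, y_rand, LABELS))
    return sum(scores) / len(scores)
```

A model's macro F1 on held-out labeled vocalizations can then be compared against `chance_baseline_f1` on the same labels; "above chance" means the model's score exceeds this simulated baseline.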
Date issued
2021-09
URI
https://hdl.handle.net/1721.1/140101
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
