Unsupervised Phonetic Category Learning from Audio and Visual Input

Author(s)
Zhi, Sophia
Download
Thesis PDF (2.938 MB)
Advisor
Levy, Roger
Terms of use
In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
Understanding how children learn the phonetic categories of their native language is an open area of research in cognitive science and child language development. However, despite experimental evidence that phonetic processing is very often a multimodal phenomenon (involving both auditory and visual cues), computational research has primarily modeled phonetic category learning as a function of auditory input alone. In this thesis, I investigate whether multimodal information benefits phonetic category learning under a clustering model. Due to the lack of an appropriate dataset, I also introduce a method for creating a high-quality dataset of synthetic videos of speakers’ faces for an existing audio corpus. A model trained and tested on audiovisual data achieves up to a 9.1% improvement over the random baseline on a phoneme discrimination battery, compared to a model trained and tested on audio data alone. The audiovisual model also outperforms the audio model by up to 4.7% over the baseline when both are tested on audio-only data, suggesting that visual information guides the learner toward better clusters. Further analysis indicates that visual information benefits most, but not all, phonemic contrasts. In follow-up analyses, I investigate the learned audiovisual clusters and their relationship to articulatory gestures and phones, finding that the clusters capture a unit of speech smaller than the phoneme. This work demonstrates the benefit of visual information to a computational model of phonetic category learning, suggesting that children may benefit substantially from using visual cues while learning phonetic categories.
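The abstract does not spell out the clustering pipeline or the discrimination battery, so the sketch below is a generic illustration rather than the thesis's actual method: it clusters concatenated audio and visual feature frames with k-means and scores an ABX-style phoneme discrimination test against a 50% chance baseline. The feature dimensions, the choice of k-means, and the synthetic data are all assumptions made for illustration only.

```python
# Illustrative sketch only; not the thesis's published code. Shows one generic
# way to learn categories from joint audio+visual features and evaluate them
# with an ABX-style discrimination test.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder data: 1000 frames of 39-dim audio features (MFCC-like) and
# 32-dim visual features (e.g., a lip-region embedding), plus toy phone labels.
audio = rng.normal(size=(1000, 39))
visual = rng.normal(size=(1000, 32))
phones = rng.integers(0, 10, size=1000)

# Multimodal learner: cluster the concatenated audio+visual frames.
av_features = np.concatenate([audio, visual], axis=1)
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(av_features)

def abx_accuracy(features, labels, n_trials=2000):
    """ABX discrimination: X should lie closer to the same-phone exemplar A
    than to the different-phone exemplar B. Chance is 0.5."""
    correct = 0
    for _ in range(n_trials):
        p, q = rng.choice(np.unique(labels), size=2, replace=False)
        a, x = rng.choice(np.where(labels == p)[0], size=2, replace=False)
        b = rng.choice(np.where(labels == q)[0])
        d_ax = np.linalg.norm(features[a] - features[x])
        d_bx = np.linalg.norm(features[b] - features[x])
        correct += d_ax < d_bx
    return correct / n_trials

# Evaluate in the learned cluster space: represent each frame by its vector of
# distances to the cluster centroids, then run the ABX test on that space.
cluster_space = kmeans.transform(av_features)
print("ABX accuracy:", abx_accuracy(cluster_space, phones))
```

In a real evaluation of this kind, the A, B, and X tokens would come from held-out speakers or contexts so that the score reflects category knowledge rather than memorization; the abstract's 9.1% figure is reported as an improvement over such a chance baseline.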
Date issued
2023-06
URI
https://hdl.handle.net/1721.1/151659
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Graduate Theses
