DSpace@MIT

  • DSpace@MIT Home
  • MIT Open Access Articles
  • View Item
Deep Neural Network Model of Hearing-Impaired Speech-in-Noise Performance

Author(s)
Haro, Stephanie; Smalt, Christopher J.; Ciccarelli, Gregory A.; Quatieri, Thomas F.
Download: fnins-14-588448.pdf (1011 KB)
Publisher with Creative Commons License

Creative Commons Attribution
Terms of use
Creative Commons Attribution 4.0 International license https://creativecommons.org/licenses/by/4.0/
Abstract
Many individuals struggle to understand speech in listening scenarios that include reverberation and background noise. An individual's ability to understand speech arises from a combination of peripheral auditory function, central auditory function, and general cognitive abilities. The interaction of these factors complicates the prescription of treatment or therapy to improve hearing function. Damage to the auditory periphery can be studied in animals; however, this method alone is not enough to understand the impact of hearing loss on speech perception. Computational auditory models bridge the gap between animal studies and human speech perception. Perturbations to the modeled auditory systems can permit mechanism-based investigations into observed human behavior. In this study, we propose a computational model that accounts for the complex interactions between different hearing damage mechanisms and simulates human speech-in-noise perception. The model performs a digit classification task as a human would, with only acoustic sound pressure as input. Thus, we can use the model's performance as a proxy for human performance. This two-stage model consists of a biophysical cochlear-nerve spike generator followed by a deep neural network (DNN) classifier. We hypothesize that sudden damage to the periphery affects speech perception and that central nervous system adaptation over time may compensate for peripheral hearing damage. Our model achieved human-like performance across signal-to-noise ratios (SNRs) under normal-hearing (NH) cochlear settings, achieving 50% digit recognition accuracy at −20.7 dB SNR. Results were comparable to eight NH participants on the same task, who achieved 50% behavioral performance at −22 dB SNR. We also simulated medial olivocochlear reflex (MOCR) and auditory nerve fiber (ANF) loss, which worsened digit-recognition accuracy at lower SNRs compared to higher SNRs. Our simulated performance following ANF loss is consistent with the hypothesis that cochlear synaptopathy impacts communication in background noise more so than in quiet. Following the insult of various cochlear degradations, we implemented extreme and conservative adaptation through the DNN. At the lowest SNRs (<0 dB), both adapted models were unable to fully recover NH performance, even with hundreds of thousands of training samples. This implies a limit on performance recovery following peripheral damage in our human-inspired DNN architecture.
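The reported thresholds (50% digit recognition at −20.7 dB SNR for the model, −22 dB for human listeners) are the SNRs at which a psychometric curve of accuracy versus SNR crosses 50%. A minimal sketch of that threshold-interpolation step, using linear interpolation between bracketing points; the function name and the accuracy values below are illustrative assumptions, not data from the paper:

```python
def snr_at_threshold(snrs, accuracies, threshold=0.5):
    """Linearly interpolate the SNR (dB) at which accuracy first crosses
    `threshold`. Expects `snrs` in ascending order with accuracy rising
    overall, as in a typical psychometric curve."""
    points = list(zip(snrs, accuracies))
    for (s0, a0), (s1, a1) in zip(points, points[1:]):
        if a0 <= threshold <= a1:
            # linear interpolation between the two bracketing points
            return s0 + (threshold - a0) * (s1 - s0) / (a1 - a0)
    raise ValueError("threshold not crossed within the given SNR range")

# Hypothetical accuracy-vs-SNR data for illustration only
snrs = [-30, -25, -20, -15, -10]
accs = [0.10, 0.28, 0.55, 0.82, 0.95]
print(round(snr_at_threshold(snrs, accs), 1))  # → -20.9
```

The papers in this area often fit a sigmoid rather than interpolating piecewise-linearly; the linear version above is just the simplest way to recover a single threshold number from sampled accuracies.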
Date issued
2020-12
URI
https://hdl.handle.net/1721.1/129383
Department
Lincoln Laboratory
Journal
Frontiers in Neuroscience
Publisher
Frontiers Media SA
Citation
Haro, Stephanie et al. “Deep Neural Network Model of Hearing-Impaired Speech-in-Noise Performance.” Frontiers in neuroscience, 14 (December 2020): 588448 © 2020 The Author(s)
Version: Final published version
ISSN
2381-2710

Collections
  • MIT Open Access Articles
