dc.contributor.author: Stephenson, Cory
dc.contributor.author: Feather, Jenelle
dc.contributor.author: Padhy, Suchismita
dc.contributor.author: Elibol, Oguz
dc.contributor.author: Tang, Hanlin
dc.contributor.author: McDermott, Josh
dc.contributor.author: Chung, SueYeon
dc.date.accessioned: 2021-11-05T14:17:08Z
dc.date.available: 2021-11-05T14:17:08Z
dc.date.issued: 2019
dc.identifier.uri: https://hdl.handle.net/1721.1/137472
dc.description.abstract: © 2019 Neural Information Processing Systems Foundation. All rights reserved. Encouraged by the success of deep neural networks on a variety of visual tasks, much theoretical and experimental work has been aimed at understanding and interpreting how vision networks operate. Meanwhile, deep neural networks have also achieved impressive performance in audio processing applications, both as sub-components of larger systems and as complete end-to-end systems by themselves. Despite their empirical successes, comparatively little is understood about how these audio models accomplish these tasks. In this work, we employ a recently developed statistical mechanical theory that connects geometric properties of network representations and the separability of classes to probe how information is untangled within neural networks trained to recognize speech. We observe that speaker-specific nuisance variations are discarded by the network's hierarchy, whereas task-relevant properties such as words and phonemes are untangled in later layers. Higher-level concepts such as parts of speech and context dependence also emerge in the later layers of the network. Finally, we find that the deep representations carry out significant temporal untangling by efficiently extracting task-relevant features at each time step of the computation. Taken together, these findings shed light on how deep auditory models process time-dependent input signals to achieve invariant speech recognition, and show how different concepts emerge through the layers of the network.
dc.language.iso: en
dc.relation.isversionof: https://papers.nips.cc/paper/2019/hash/e2db7186375992e729165726762cb4c1-Abstract.html
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: Neural Information Processing Systems (NIPS)
dc.title: Untangling in Invariant Speech Recognition
dc.type: Article
dc.identifier.citation: Stephenson, Cory, Feather, Jenelle, Padhy, Suchismita, Elibol, Oguz, Tang, Hanlin et al. 2019. "Untangling in Invariant Speech Recognition." Advances in Neural Information Processing Systems, 32.
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.contributor.department: Center for Brains, Minds, and Machines
dc.relation.journal: Advances in Neural Information Processing Systems
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2021-03-25T12:35:42Z
dspace.orderedauthors: Stephenson, C; Feather, J; Padhy, S; Elibol, O; Tang, H; McDermott, J; Chung, SY
dspace.date.submission: 2021-03-25T12:35:43Z
mit.journal.volume: 32
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed
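
Note: the abstract above describes probing how class separability of layer representations changes with depth. The sketch below is a rough, hypothetical illustration of that idea only, not the authors' code: it substitutes a plain linear readout probe for the statistical mechanical (manifold capacity) analysis the paper actually employs, and it runs on synthetic activations. All names and parameters here are illustrative assumptions; it assumes numpy and scikit-learn are installed.

    # Minimal sketch (not the paper's method): measure how linearly
    # separable class labels are at each "layer" as a simplified proxy
    # for representational untangling. Activations are synthetic; in a
    # real analysis they would come from a trained speech model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_classes, dim = 600, 10, 128
    labels = rng.integers(0, n_classes, size=n_samples)

    def synthetic_layer(separation):
        """Fake activations: class means drift apart as `separation`
        grows, mimicking untangling in deeper layers."""
        means = rng.normal(size=(n_classes, dim)) * separation
        return means[labels] + rng.normal(size=(n_samples, dim))

    # Deeper "layers" get larger class separation in this toy setup,
    # so the linear readout accuracy should rise with depth.
    for depth, sep in enumerate([0.1, 0.3, 0.6, 1.0], start=1):
        X = synthetic_layer(sep)
        acc = cross_val_score(LogisticRegression(max_iter=1000),
                              X, labels, cv=5).mean()
        print(f"layer {depth}: mean linear readout accuracy = {acc:.2f}")

A rising accuracy across the loop illustrates the qualitative claim in the abstract (task-relevant classes become easier to separate in later layers); the paper's manifold-capacity measure additionally accounts for the geometry (radius, dimension) of each class manifold rather than a single probe's accuracy.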

