| dc.contributor.author | Młynarski, Wiktor | |
| dc.contributor.author | McDermott, Joshua H. | |
| dc.date.accessioned | 2018-04-03T14:49:06Z | |
| dc.date.available | 2018-04-03T14:49:06Z | |
| dc.date.issued | 2018-02 | |
| dc.identifier.issn | 0899-7667 | |
| dc.identifier.issn | 1530-888X | |
| dc.identifier.uri | http://hdl.handle.net/1721.1/114502 | |
| dc.description.abstract | Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features, while others instantiated opponency between distinct sets of features. Such groupings might be implemented by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation. | en_US |
| dc.description.sponsorship | National Science Foundation (U.S.) (McGovern Institute for Brain Research at MIT. Center for Brains, Minds, and Machines. STC Award CCF-1231216) | en_US |
| dc.description.sponsorship | James S. McDonnell Foundation (Scholar Award) | en_US |
| dc.publisher | MIT Press | en_US |
| dc.relation.isversionof | http://dx.doi.org/10.1162/neco_a_01048 | en_US |
| dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
| dc.source | Massachusetts Institute of Technology Press | en_US |
| dc.title | Learning Midlevel Auditory Codes from Natural Sound Statistics | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Młynarski, Wiktor, and Josh H. McDermott. “Learning Midlevel Auditory Codes from Natural Sound Statistics.” Neural Computation, vol. 30, no. 3, Mar. 2018, pp. 631–69. © 2018 Massachusetts Institute of Technology | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences | en_US |
| dc.contributor.mitauthor | Młynarski, Wiktor | |
| dc.contributor.mitauthor | McDermott, Joshua H. | |
| dc.relation.journal | Neural Computation | en_US |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
| eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
| dc.date.updated | 2018-02-23T19:55:53Z | |
| dspace.orderedauthors | Młynarski, Wiktor; McDermott, Josh H. | en_US |
| dspace.embargo.terms | N | en_US |
| dc.identifier.orcid | https://orcid.org/0000-0002-3791-5656 | |
| dc.identifier.orcid | https://orcid.org/0000-0002-3965-2503 | |
| mit.license | PUBLISHER_POLICY | en_US |