Learning Midlevel Auditory Codes from Natural Sound Statistics
Author(s): Młynarski, Wiktor; McDermott, Joshua H.
Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns in the time-varying magnitudes of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features, while others instantiated opponency between distinct sets of features. Such groupings might be implemented by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.
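The first-layer computation described in the abstract, a sparse convolutional code of a spectrogram under a dictionary of spectrotemporal kernels, can be sketched with a generic iterative soft-thresholding (ISTA) inference loop. This is an illustrative stand-in rather than the authors' implementation: the random spectrogram `X`, the random dictionary `D`, the toy dimensions, and the step-size/threshold parameters (`eta`, `lam`) are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

F, T = 16, 100   # frequency bins, time frames (arbitrary toy sizes)
K, w = 8, 9      # dictionary size, kernel width in time (assumptions)

# Stand-ins: a random "spectrogram" and random unit-norm spectrotemporal
# kernels; in the actual model the kernels are learned from natural sounds.
X = rng.standard_normal((F, T))
D = rng.standard_normal((K, F, w))
D /= np.linalg.norm(D.reshape(K, -1), axis=1)[:, None, None]

def reconstruct(a):
    """Sum over kernels of each kernel convolved in time with its coefficients."""
    Xhat = np.zeros((F, T))
    for k in range(K):
        for f in range(F):
            Xhat[f] += np.convolve(a[k], D[k, f], mode="same")
    return Xhat

def ista_step(a, lam=0.05, eta=0.02):
    """One iterative soft-thresholding step toward a sparse code of X."""
    R = X - reconstruct(a)                      # reconstruction residual
    grad = np.zeros_like(a)
    for k in range(K):
        for f in range(F):                      # adjoint of the convolution
            grad[k] += np.correlate(R[f], D[k, f], mode="same")
    a = a + eta * grad                          # gradient step on the error
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)  # sparsify

a = np.zeros((K, T))                            # coefficient trains a_k(t)
for _ in range(50):
    a = ista_step(a)

sparsity = float(np.mean(a == 0.0))
err = float(np.linalg.norm(X - reconstruct(a)) / np.linalg.norm(X))
print(f"zero-coefficient fraction: {sparsity:.2f}, relative error: {err:.2f}")
```

A second layer of the kind the abstract describes would then model patterns across the time-varying magnitudes |a_k(t)| of multiple kernels, rather than the raw coefficients themselves.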
Department: Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences
Młynarski, Wiktor, and Josh H. McDermott. “Learning Midlevel Auditory Codes from Natural Sound Statistics.” Neural Computation, vol. 30, no. 3, Mar. 2018, pp. 631–69. © 2018 Massachusetts Institute of Technology
Final published version