dc.contributor.author	McWalter, Richard I
dc.contributor.author	McDermott, Joshua Hartman
dc.date.accessioned	2020-07-20T22:28:08Z
dc.date.available	2020-07-20T22:28:08Z
dc.date.issued	2018-04
dc.date.submitted	2018-01
dc.identifier.issn	0960-9822
dc.identifier.issn	1879-0445
dc.identifier.uri	https://hdl.handle.net/1721.1/126271
dc.description.abstract	To overcome variability, estimate scene characteristics, and compress sensory input, perceptual systems pool data into statistical summaries. Despite growing evidence for statistical representations in perception, the underlying mechanisms remain poorly understood. One example of such representations occurs in auditory scenes, where background texture appears to be represented with time-averaged sound statistics. We probed the averaging mechanism using “texture steps”—textures containing subtle shifts in stimulus statistics. Although generally imperceptible, steps occurring in the previous several seconds biased texture judgments, indicative of a multi-second averaging window. Listeners seemed unable to willfully extend or restrict this window but showed signatures of longer integration times for temporally variable textures. In all cases the measured timescales were substantially longer than previously reported integration times in the auditory system. Integration also showed signs of being restricted to sound elements attributed to a common source. The results suggest an integration process that depends on stimulus characteristics, integrating over longer extents when it benefits statistical estimation of variable signals and selectively integrating stimulus components likely to have a common cause in the world. Our methodology could be naturally extended to examine statistical representations of other types of sensory signals. Sound texture perception is thought to be mediated by time-averaged sound statistics. McWalter and McDermott use texture “steps” to reveal an obligatory multi-second averaging process whose extent depends on texture variability. Averaging excludes other concurrent sounds, implicating texture perception as inseparable from auditory scene analysis. ©2018 Elsevier Ltd	en_US
dc.language.iso	en
dc.publisher	Elsevier BV	en_US
dc.relation.isversionof	https://dx.doi.org/10.1016/J.CUB.2018.03.049	en_US
dc.rights	Creative Commons Attribution-NonCommercial-NoDerivs License	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-nd/4.0/	en_US
dc.source	PMC	en_US
dc.title	Adaptive and Selective Time Averaging of Auditory Scenes	en_US
dc.type	Article	en_US
dc.identifier.citation	McWalter, Richard, and Josh H. McDermott. "Adaptive and Selective Time Averaging of Auditory Scenes." Current Biology 28, 9 (May 2018): 1405-1418.e10. doi: 10.1016/j.cub.2018.03.049. ©2018 Authors	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences	en_US
dc.relation.journal	Current Biology	en_US
dc.eprint.version	Author's final manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2019-10-03T12:43:24Z
dspace.date.submission	2019-10-03T12:43:29Z
mit.journal.volume	28	en_US
mit.journal.issue	9	en_US
mit.metadata.status	Complete

