Show simple item record

dc.contributor.author    Masutomi, Keiko
dc.contributor.author    Barascud, Nicolas
dc.contributor.author    Kashino, Makio
dc.contributor.author    Chait, Maria
dc.contributor.author    McDermott, Josh
dc.date.accessioned    2016-05-13T16:31:57Z
dc.date.available    2016-05-13T16:31:57Z
dc.date.issued    2015-10
dc.date.submitted    2015-08
dc.identifier.issn    1939-1277
dc.identifier.issn    0096-1523
dc.identifier.uri    http://hdl.handle.net/1721.1/102477
dc.description.abstract    The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process depends on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We used a dual-task design in which the stimuli to be segregated were presented along with stimuli for a “decoy” task that required continuous monitoring. The task used to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound; McDermott, Wrobleski, and Oxenham (2011) demonstrated that such repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from three different decoy tasks: a visual multiple-object-tracking task, a rapid serial visual presentation (RSVP) digit-encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small, nonspecific effect of attention: it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention.    en_US
dc.description.sponsorship    James S. McDonnell Foundation (Scholar Award)    en_US
dc.language.iso    en_US
dc.publisher    American Psychological Association (APA)    en_US
dc.relation.isversionof    http://dx.doi.org/10.1037/xhp0000147    en_US
dc.rights    Creative Commons Attribution    en_US
dc.rights.uri    http://creativecommons.org/licenses/by/3.0/    en_US
dc.source    Journal of Experimental Psychology    en_US
dc.title    Sound segregation via embedded repetition is robust to inattention    en_US
dc.type    Article    en_US
dc.identifier.citation    Masutomi, Keiko, Nicolas Barascud, Makio Kashino, Josh H. McDermott, and Maria Chait. “Sound Segregation via Embedded Repetition Is Robust to Inattention.” Journal of Experimental Psychology: Human Perception and Performance 42, no. 3 (2016): 386–400.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences    en_US
dc.contributor.mitauthor    McDermott, Josh    en_US
dc.relation.journal    Journal of Experimental Psychology: Human Perception and Performance    en_US
dc.eprint.version    Final published version    en_US
dc.type.uri    http://purl.org/eprint/type/JournalArticle    en_US
eprint.status    http://purl.org/eprint/status/PeerReviewed    en_US
dspace.orderedauthors    Masutomi, Keiko; Barascud, Nicolas; Kashino, Makio; McDermott, Josh H.; Chait, Maria    en_US
dspace.embargo.terms    N    en_US
dc.identifier.orcid    https://orcid.org/0000-0002-3965-2503
mit.license    PUBLISHER_CC    en_US

