Show simple item record

dc.contributor.author	Deen, Ben
dc.contributor.author	Saxe, Rebecca
dc.contributor.author	Kanwisher, Nancy
dc.date.accessioned	2021-10-27T20:23:34Z
dc.date.available	2021-10-27T20:23:34Z
dc.date.issued	2020
dc.identifier.uri	https://hdl.handle.net/1721.1/135465
dc.description.abstract	© 2020 Facial and vocal cues provide critical social information about other humans, including their emotional and attentional states and the content of their speech. Recent work has shown that the face-responsive region of posterior superior temporal sulcus ("fSTS") also responds strongly to vocal sounds. Here, we investigate the functional role of this region and the broader STS by measuring responses to a range of face movements, vocal sounds, and hand movements using fMRI. We find that the fSTS responds broadly to different types of audio and visual face action, including both richly social communicative actions and minimally social noncommunicative actions, ruling out hypotheses of specialization for processing speech signals, or communicative signals more generally. Strikingly, however, responses to hand movements were very low, whether communicative or not, indicating a specific role in the analysis of face actions (facial and vocal), not a general role in the perception of any human action. Furthermore, spatial patterns of response in this region were able to decode communicative from noncommunicative face actions, both within and across modality (facial/vocal cues), indicating sensitivity to an abstract social dimension. These functional properties of the fSTS contrast with a region of middle STS that has a selective, largely unimodal auditory response to speech sounds over both communicative and noncommunicative vocal nonspeech sounds, and nonvocal sounds. Region-of-interest analyses were corroborated by a data-driven independent component analysis, identifying face-voice and auditory speech responses as dominant sources of voxelwise variance across the STS. These results suggest that the STS contains separate processing streams for the audiovisual analysis of face actions and for auditory speech processing.
dc.language.iso	en
dc.publisher	Elsevier BV
dc.relation.isversionof	10.1016/J.NEUROIMAGE.2020.117191
dc.rights	Creative Commons Attribution 4.0 International license
dc.rights.uri	https://creativecommons.org/licenses/by/4.0/
dc.source	Elsevier
dc.title	Processing communicative facial and vocal cues in the superior temporal sulcus
dc.type	Article
dc.contributor.department	Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.contributor.department	McGovern Institute for Brain Research at MIT
dc.relation.journal	NeuroImage
dc.eprint.version	Final published version
dc.type.uri	http://purl.org/eprint/type/JournalArticle
eprint.status	http://purl.org/eprint/status/PeerReviewed
dc.date.updated	2021-03-19T14:47:33Z
dspace.orderedauthors	Deen, B; Saxe, R; Kanwisher, N
dspace.date.submission	2021-03-19T14:47:35Z
mit.journal.volume	221
mit.license	PUBLISHER_CC
mit.metadata.status	Authority Work and Publication Information Needed

