Show simple item record

dc.contributor.author	Lahner, Benjamin
dc.contributor.author	Cichy, Radoslaw Martin
dc.contributor.author	Oliva, Aude
dc.contributor.author	Mohsenzadeh, Yalda
dc.contributor.author	Mullin, Caitlin
dc.date.accessioned	2019-03-07T19:36:29Z
dc.date.available	2019-03-07T19:36:29Z
dc.date.issued	2019-02
dc.date.submitted	2018-10
dc.identifier.issn	2411-5150
dc.identifier.uri	http://hdl.handle.net/1721.1/120822
dc.description.abstract	To build a representation of what we see, the human brain recruits regions throughout the visual cortex in cascading sequence. Recently, an approach was proposed to evaluate the dynamics of visual perception at high spatiotemporal resolution at the scale of the whole brain. This method combined functional magnetic resonance imaging (fMRI) data with magnetoencephalography (MEG) data using representational similarity analysis and revealed a hierarchical progression from primary visual cortex through the dorsal and ventral streams. To assess the replicability of this method, we here present the results of a visual recognition neuroimaging fusion experiment and compare them within and across experimental settings. We evaluated the reliability of this method by assessing the consistency of the results under similar test conditions, showing high agreement within participants. We then generalized these results to a separate group of individuals and visual input by comparing them to the fMRI-MEG fusion data of Cichy et al. (2016), revealing a highly similar temporal progression recruiting both the dorsal and ventral streams. Together, these results are a testament to the reproducibility of the fMRI-MEG fusion approach and allow for the interpretation of these spatiotemporal dynamics in a broader context. Keywords: spatiotemporal neural dynamics; vision; dorsal and ventral streams; multivariate pattern analysis; representational similarity analysis; fMRI; MEG	en_US
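For readers unfamiliar with the fusion method named in the abstract, the following is a minimal Python sketch of similarity-based MEG-fMRI fusion via representational similarity analysis. All array names, sizes, and the random placeholder data are hypothetical stand-ins for real condition-wise response patterns, not the authors' pipeline.

# Minimal sketch of similarity-based MEG-fMRI fusion (RSA).
# Hypothetical inputs: fmri_patterns (conditions x voxels) for one ROI,
# meg_patterns (timepoints x conditions x sensors).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_voxels, n_sensors, n_timepoints = 92, 500, 306, 120

# Placeholder data standing in for measured condition-wise responses.
fmri_patterns = rng.standard_normal((n_conditions, n_voxels))
meg_patterns = rng.standard_normal((n_timepoints, n_conditions, n_sensors))

# Condition-by-condition representational dissimilarity matrix (RDM) for the
# fMRI ROI: 1 - Pearson correlation between condition patterns (vectorized
# upper triangle as returned by pdist).
fmri_rdm = pdist(fmri_patterns, metric="correlation")

# Fusion time course: Spearman correlation between the fMRI RDM and the MEG
# RDM recomputed at every time point; peaks indicate when the MEG signal
# carries representational structure matching that ROI.
fusion = np.array([
    spearmanr(fmri_rdm, pdist(meg_patterns[t], metric="correlation"))[0]
    for t in range(n_timepoints)
])

print(fusion.shape)  # (n_timepoints,)

Note that in the published fusion studies the MEG RDMs are typically built from pairwise classification accuracies rather than correlation distance; correlation distance is used here only to keep the sketch short and dependency-free.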
dc.publisher	Multidisciplinary Digital Publishing Institute (MDPI)	en_US
dc.relation.isversionof	http://dx.doi.org/10.3390/vision3010008	en_US
dc.rights	Creative Commons Attribution	en_US
dc.rights.uri	https://creativecommons.org/licenses/by/4.0/	en_US
dc.source	Multidisciplinary Digital Publishing Institute	en_US
dc.title	Reliability and Generalizability of Similarity-Based Fusion of MEG and fMRI Data in Human Ventral and Dorsal Visual Streams	en_US
dc.type	Article	en_US
dc.identifier.citation	Mohsenzadeh, Yalda et al. "Reliability and Generalizability of Similarity-Based Fusion of MEG and fMRI Data in Human Ventral and Dorsal Visual Streams." Vision 3, 1 (February 2019): 8 © 2019 The Authors	en_US
dc.contributor.department	Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory	en_US
dc.contributor.mitauthor	Mohsenzadeh, Yalda
dc.contributor.mitauthor	Mullin, Caitlin
dc.relation.journal	Vision	en_US
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2019-02-15T07:53:46Z
dspace.orderedauthors	Mohsenzadeh, Yalda; Mullin, Caitlin; Lahner, Benjamin; Cichy, Radoslaw; Oliva, Aude	en_US
dspace.embargo.terms	N	en_US
mit.license	PUBLISHER_CC	en_US

