dc.contributor.author | Barbu, Andrei | |
dc.contributor.author | Narayanaswamy, Siddharth | |
dc.contributor.author | Xiong, Caiming | |
dc.contributor.author | Corso, Jason J. | |
dc.contributor.author | Fellbaum, Christiane D. | |
dc.contributor.author | Hanson, Catherine | |
dc.contributor.author | Hanson, Stephen Jose | |
dc.contributor.author | Helie, Sebastien | |
dc.contributor.author | Malaia, Evguenia | |
dc.contributor.author | Pearlmutter, Barak A. | |
dc.contributor.author | Siskind, Jeffrey Mark | |
dc.contributor.author | Talavage, Thomas Michael | |
dc.contributor.author | Wilbur, Ronnie B. | |
dc.date.accessioned | 2015-12-10T18:58:04Z | |
dc.date.available | 2015-12-10T18:58:04Z | |
dc.date.issued | 2014-07-14 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/100175 | |
dc.description.abstract | How does the human brain represent simple compositions of constituents: actors, verbs, objects, directions, and locations? Subjects viewed videos during neuroimaging (fMRI) sessions, and sentential descriptions of those videos were identified by decoding the brain representations based only on the subjects' fMRI activation patterns. Constituents (e.g., fold and shirt) were independently decoded from a single presentation. Independent constituent classification was then compared to joint classification of aggregate concepts (e.g., fold-shirt); results were similar as measured by accuracy and correlation. The brain regions used for independent constituent classification are largely disjoint and largely cover those used for joint classification. This allows recovery of sentential descriptions of stimulus videos by composing the results of the independent constituent classifiers. Furthermore, classifiers trained on the words one set of subjects thinks of when watching a video can recognize sentences a different subject thinks of when watching a different video. | en_US
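[Editor's illustrative sketch, not part of the record or the authors' code.] The abstract contrasts two decoding schemes: independent classifiers per constituent (verb, object) whose outputs are composed into a sentential description, versus a single joint classifier over aggregate verb-object concepts. The Python sketch below shows that contrast on synthetic stand-in data; the classifier choice (scikit-learn LogisticRegression), the feature construction, and all vocabulary items are assumptions for illustration only, and do not reflect the paper's actual stimuli, features, or models.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI activation patterns: one feature vector per trial.
n_train, n_test, n_voxels = 200, 50, 100
verbs = np.array(["carry", "fold", "leave"])       # hypothetical vocabulary
objects = np.array(["shirt", "chair", "tortilla"]) # hypothetical vocabulary

def make_split(n):
    """Generate labels and feature vectors with weak constituent-specific signal."""
    y_verb = rng.integers(len(verbs), size=n)
    y_obj = rng.integers(len(objects), size=n)
    X = rng.normal(size=(n, n_voxels))
    # Distinct (largely disjoint) feature subsets carry each constituent's signal.
    X[:, :3] += np.eye(len(verbs))[y_verb] * 2.0
    X[:, 3:6] += np.eye(len(objects))[y_obj] * 2.0
    return X, y_verb, y_obj

X_train, y_verb_train, y_obj_train = make_split(n_train)
X_test, y_verb_test, y_obj_test = make_split(n_test)

# Independent constituent classifiers: one per constituent type.
verb_clf = LogisticRegression(max_iter=1000).fit(X_train, y_verb_train)
obj_clf = LogisticRegression(max_iter=1000).fit(X_train, y_obj_train)

# Joint classifier over aggregate verb-object concepts (e.g., fold-shirt).
y_joint_train = y_verb_train * len(objects) + y_obj_train
joint_clf = LogisticRegression(max_iter=1000).fit(X_train, y_joint_train)

# Compose independent predictions into a description; compare with joint decoding.
verb_pred = verb_clf.predict(X_test)
obj_pred = obj_clf.predict(X_test)
composed = verb_pred * len(objects) + obj_pred
joint_pred = joint_clf.predict(X_test)
y_joint_test = y_verb_test * len(objects) + y_obj_test

print("composed accuracy:", accuracy_score(y_joint_test, composed))
print("joint accuracy:   ", accuracy_score(y_joint_test, joint_pred))
for v, o in zip(verb_pred[:3], obj_pred[:3]):
    print("decoded description:", verbs[v], objects[o])

On data of this kind the composed and joint accuracies come out comparable, mirroring the abstract's claim that independent constituent decoding suffices to recover aggregate concepts.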
dc.description.sponsorship | This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Center for Brains, Minds and Machines (CBMM), arXiv | en_US |
dc.relation.ispartofseries | CBMM Memo Series;011 | |
dc.rights | Attribution-NonCommercial 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc/3.0/us/ | * |
dc.subject | Computer Language | en_US |
dc.subject | Linguistics | en_US |
dc.subject | Language | en_US |
dc.subject | Neuroscience | en_US |
dc.subject | Vision and Language | en_US |
dc.title | The Compositional Nature of Event Representations in the Human Brain | en_US |
dc.type | Technical Report | en_US |
dc.type | Working Paper | en_US |
dc.type | Other | en_US |
dc.identifier.citation | arXiv:1505.06670v1 | en_US |