dc.contributor.author: Niv, Yael
dc.contributor.author: Daniel, Reka
dc.contributor.author: Geana, Andra
dc.contributor.author: Gershman, Samuel J.
dc.contributor.author: Leong, Yuan Chang
dc.contributor.author: Radulescu, Angela
dc.contributor.author: Wilson, Robert C.
dc.date.accessioned: 2016-01-07T02:17:03Z
dc.date.available: 2016-01-07T02:17:03Z
dc.date.issued: 2015-05
dc.date.submitted: 2015-03
dc.identifier.issn: 0270-6474
dc.identifier.issn: 1529-2401
dc.identifier.uri: http://hdl.handle.net/1721.1/100742
dc.description.abstract: In recent years, ideas from the computational field of reinforcement learning have revolutionized the study of learning in the brain, famously providing new, precise theories of how dopamine affects learning in the basal ganglia. However, reinforcement learning algorithms are notorious for not scaling well to multidimensional environments, as is required for real-world learning. We hypothesized that the brain naturally reduces the dimensionality of real-world problems to only those dimensions that are relevant to predicting reward, and conducted an experiment to assess by what algorithms and with what neural mechanisms this “representation learning” process is realized in humans. Our results suggest that a bilateral attentional control network comprising the intraparietal sulcus, precuneus, and dorsolateral prefrontal cortex is involved in selecting what dimensions are relevant to the task at hand, effectively updating the task representation through trial and error. In this way, cortical attention mechanisms interact with learning in the basal ganglia to solve the “curse of dimensionality” in reinforcement learning.
dc.description.sponsorship: National Institute on Drug Abuse (Award R03DA029073)
dc.description.sponsorship: National Institute of Mental Health (U.S.) (Award R01MH098861)
dc.language.iso: en_US
dc.publisher: Society for Neuroscience
dc.relation.isversionof: http://dx.doi.org/10.1523/jneurosci.2978-14.2015
dc.rights: Creative Commons Attribution
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.source: Society for Neuroscience
dc.title: Reinforcement Learning in Multidimensional Environments Relies on Attention Mechanisms
dc.type: Article
dc.identifier.citation: Niv, Y., R. Daniel, A. Geana, S. J. Gershman, Y. C. Leong, A. Radulescu, and R. C. Wilson. “Reinforcement Learning in Multidimensional Environments Relies on Attention Mechanisms.” Journal of Neuroscience 35, no. 21 (May 27, 2015): 8145–8157.
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.contributor.mitauthor: Gershman, Samuel J.
dc.relation.journal: Journal of Neuroscience
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dspace.orderedauthors: Niv, Y.; Daniel, R.; Geana, A.; Gershman, S. J.; Leong, Y. C.; Radulescu, A.; Wilson, R. C.
mit.license: PUBLISHER_CC

