The neural architecture of theory-based reinforcement learning
Author(s)
Tomov, Momchil S; Tsividis, Pedro A; Pouncy, Thomas; Tenenbaum, Joshua B; Gershman, Samuel J
Download
Submitted version (15.98 MB)
Open Access Policy
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
Humans learn internal models of the world that support planning and generalization in complex environments. Yet it remains unclear how such internal models are represented and learned in the brain. We approach this question using theory-based reinforcement learning, a strong form of model-based reinforcement learning in which the model is a kind of intuitive theory. We analyzed fMRI data from human participants learning to play Atari-style games. We found evidence of theory representations in prefrontal cortex and of theory updating in prefrontal cortex, occipital cortex, and fusiform gyrus. Theory updates coincided with transient strengthening of theory representations. Effective connectivity during theory updating suggests that information flows from prefrontal theory-coding regions to posterior theory-updating regions. Together, our results are consistent with a neural architecture in which top-down theory representations originating in prefrontal regions shape sensory predictions in visual areas, where factored theory prediction errors are computed and trigger bottom-up updates of the theory.
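For readers unfamiliar with the framework, the following is a minimal illustrative sketch of the update mechanism the abstract describes: an agent holds beliefs over candidate theories of the environment, and prediction errors trigger belief updates. The hypothesis space, theory names, and dynamics rules below are hypothetical stand-ins, far simpler than the object-oriented game theories studied in the paper.

```python
# Minimal sketch of theory-based RL belief updating, assuming a discrete
# hypothesis space of candidate "theories" (simple generative models of
# the environment). Theory names and rules are hypothetical.

# Each theory predicts the next observation from the current one.
THEORIES = {
    "walls_block": lambda obs: obs,           # hypothetical rule: agent stays put
    "walls_wrap": lambda obs: (obs + 1) % 5,  # hypothetical rule: agent wraps around
}

def update_posterior(posterior, obs, next_obs, noise=0.1):
    """Bayesian theory update: a prediction error (a transition a theory
    failed to predict) shifts probability mass away from that theory and
    toward theories whose predictions matched the observation."""
    likelihood = {
        name: (1 - noise) if predict(obs) == next_obs else noise
        for name, predict in THEORIES.items()
    }
    unnorm = {name: posterior[name] * likelihood[name] for name in THEORIES}
    z = sum(unnorm.values())
    return {name: p / z for name, p in unnorm.items()}

# Start with a uniform prior over theories.
posterior = {name: 1.0 / len(THEORIES) for name in THEORIES}

# Observe one transition consistent with "walls_wrap": 4 -> (4 + 1) % 5 == 0.
posterior = update_posterior(posterior, obs=4, next_obs=0)
print(posterior)  # mass shifts toward the theory that predicted the transition
```

Note that this sketch collapses the update into a single Bayesian step over whole theories; in the architecture the abstract proposes, prediction errors are factored, so an error in one component of the theory can trigger a targeted update of that component alone.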
Date issued
2023-03
Department
Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Journal
Neuron
Publisher
Elsevier BV
Citation
Tomov, Momchil S., Pedro A. Tsividis, Thomas Pouncy, Joshua B. Tenenbaum, and Samuel J. Gershman. 2023. "The neural architecture of theory-based reinforcement learning." Neuron.
Version: Original manuscript