Neurocomputational Modeling of Human Physical Scene Understanding
Author(s)
Yildirim, Ilker; Smith, Kevin A; Belledonne, Mario E; Wu, Jiajun; Tenenbaum, Joshua B
Abstract
Human scene understanding involves not just localizing objects, but also inferring latent attributes that affect how the scene might unfold, such as the masses of objects within the scene. These attributes can sometimes only be inferred from the dynamics of a scene, but people can flexibly integrate this information to update their inferences. Here we propose a neurally plausible Efficient Physical Inference model that can generate and update inferences from videos. This model makes inferences over the inputs to a generative model of physics and graphics, using an LSTM-based recognition network to efficiently approximate rational probabilistic conditioning. We find that this model not only rapidly and accurately recovers latent object information, but also that its inferences evolve with more information in a way similar to human judgments. The model provides a testable hypothesis about the population-level activity in brain regions underlying physical reasoning.
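The abstract describes an LSTM-based recognition network that, frame by frame, updates posterior beliefs over latent physical attributes such as object mass. The following is a minimal, hypothetical sketch of that kind of amortized inference, not the authors' implementation: an untrained LSTM cell maps a sequence of per-frame feature vectors to Gaussian parameters over a single latent (e.g. log-mass). All class names, dimensions, and the choice of a Gaussian output are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMRecognition:
    """Hypothetical sketch: an LSTM cell that maps a video, given as a
    sequence of frame feature vectors, to a per-frame Gaussian belief
    (mean, variance) over one latent physical attribute."""

    def __init__(self, in_dim, hid_dim, rng):
        scale = 0.1
        # Stacked weights for the input, forget, output, and cell gates.
        self.W = rng.normal(0.0, scale, (4 * hid_dim, in_dim + hid_dim))
        self.b = np.zeros(4 * hid_dim)
        # Readout from hidden state to (mu, log variance).
        self.W_out = rng.normal(0.0, scale, (2, hid_dim))
        self.hid_dim = hid_dim

    def __call__(self, frames):
        h = np.zeros(self.hid_dim)
        c = np.zeros(self.hid_dim)
        posteriors = []
        for x in frames:  # one feature vector per video frame
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, o, g = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
            mu, logvar = self.W_out @ h
            # The belief can be read out after every frame, so inferences
            # update as more of the scene's dynamics are observed.
            posteriors.append((mu, np.exp(logvar)))
        return posteriors

rng = np.random.default_rng(0)
net = LSTMRecognition(in_dim=8, hid_dim=16, rng=rng)
frames = rng.normal(size=(20, 8))  # 20 frames of 8-d features
posteriors = net(frames)           # one (mean, variance) pair per frame
```

In the paper's setup the network output would parameterize the inputs to a generative model of physics and graphics; here the readout is a placeholder to show how a recurrent recognition model yields incrementally updated inferences.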
Date issued
2018-09
Department
Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences; Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; McGovern Institute for Brain Research at MIT
Journal
CCN 2018 Conference on Cognitive Computational Neuroscience
Publisher
Cognitive Computational Neuroscience
Citation
Yildirim, Ilker et al. “Neurocomputational Modeling of Human Physical Scene Understanding.” Paper presented at the CCN 2018 Conference on Cognitive Computational Neuroscience, Philadelphia, Pennsylvania, 5-8 September 2018, Cognitive Computational Neuroscience © 2018 The Author(s)
Version: Author's final manuscript