DC Field	Value	Language
dc.contributor.author	Baker, Christopher Lawrence
dc.contributor.author	Saxe, Rebecca R.
dc.contributor.author	Tenenbaum, Joshua B.
dc.date.accessioned	2011-01-28T18:34:39Z
dc.date.available	2011-01-28T18:34:39Z
dc.date.issued	2009-07
dc.date.submitted	2009-06
dc.identifier.issn	0010-0277
dc.identifier.uri	http://hdl.handle.net/1721.1/60852
dc.description.abstract	Humans are adept at inferring the mental states underlying other agents’ actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents’ behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent’s behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an “intentional stance” [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a “teleological stance” [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165–193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.	en_US
dc.description.sponsorship	United States. Air Force Office of Scientific Research (AFOSR MURI Contract FA9550-05-1-0321)	en_US
dc.description.sponsorship	James S. McDonnell Foundation (Causal Learning Collaborative Initiative)	en_US
dc.description.sponsorship	National Science Foundation (U.S.). Graduate Research Fellowship Program	en_US
dc.language.iso	en_US
dc.publisher	Elsevier	en_US
dc.relation.isversionof	http://dx.doi.org/10.1016/j.cognition.2009.07.005	en_US
dc.rights	Attribution-Noncommercial-Share Alike 3.0 Unported	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-sa/3.0/	en_US
dc.source	MIT web domain	en_US
dc.title	Action understanding as inverse planning	en_US
dc.type	Article	en_US
dc.identifier.citation	Baker, Chris L., Rebecca Saxe, and Joshua B. Tenenbaum. “Action understanding as inverse planning.” Cognition 113.3 (2009): 329-349.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences	en_US
dc.contributor.approver	Saxe, Rebecca R.
dc.contributor.mitauthor	Baker, Christopher Lawrence
dc.contributor.mitauthor	Saxe, Rebecca R.
dc.contributor.mitauthor	Tenenbaum, Joshua B.
dc.relation.journal	Cognition	en_US
dc.eprint.version	Author's final manuscript
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dspace.orderedauthors	Baker, Chris L.; Saxe, Rebecca; Tenenbaum, Joshua B.	en
dc.identifier.orcid	https://orcid.org/0000-0003-2377-1791
dc.identifier.orcid	https://orcid.org/0000-0002-1925-2035
dc.identifier.orcid	https://orcid.org/0000-0001-7870-4487
mit.license	OPEN_ACCESS_POLICY	en_US
mit.metadata.status	Complete
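
The abstract above casts goal inference as Bayesian inversion of a rational-planning model: P(goal | actions) is proportional to P(actions | goal) times P(goal). The snippet below is a minimal, illustrative sketch of that idea, not the authors' model: the paper derives action likelihoods from (approximately) rational planning in maze environments, whereas this toy version scores actions with a Boltzmann ("softmax") rule over a Manhattan-distance heuristic, and the goal locations, trajectory, and rationality parameter beta are all hypothetical.

import numpy as np

# Hypothetical candidate goals and the four grid moves (all values illustrative).
goals = {"A": (4, 0), "B": (0, 4), "C": (4, 4)}
moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def action_likelihood(state, action, goal, beta=2.0):
    # Boltzmann ("soft") rationality: moves that end up closer to the assumed
    # goal are exponentially more probable; beta sets how nearly rational the
    # agent is taken to be.
    names = list(moves)
    dists = np.array([manhattan((state[0] + dx, state[1] + dy), goal)
                      for dx, dy in moves.values()])
    probs = np.exp(-beta * dists)
    probs /= probs.sum()
    return probs[names.index(action)]

def goal_posterior(trajectory, prior=None, beta=2.0):
    # Bayesian inverse planning in miniature:
    # P(goal | actions) is proportional to P(actions | goal) * P(goal).
    names = list(goals)
    prior = np.full(len(names), 1.0 / len(names)) if prior is None else np.asarray(prior, float)
    log_post = np.log(prior)
    for state, action in trajectory:
        for i, g in enumerate(names):
            log_post[i] += np.log(action_likelihood(state, action, goals[g], beta))
    post = np.exp(log_post - log_post.max())
    return dict(zip(names, post / post.sum()))

# An agent starting at (0, 0) that moves right twice: goal A becomes most probable.
print(goal_posterior([((0, 0), "right"), ((1, 0), "right")]))

Running the example prints a posterior that concentrates on the goal the observed moves head toward; raising beta models a more reliably rational agent and sharpens the inference, mirroring the role the rationality principle plays in the framework described in the abstract.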

