Show simple item record

dc.contributor.author	Paul, Rohan
dc.contributor.author	Barbu, Andrei
dc.contributor.author	Felshin, Sue
dc.contributor.author	Katz, Boris
dc.contributor.author	Roy, Nicholas
dc.date.accessioned	2018-05-30T17:00:38Z
dc.date.available	2018-05-30T17:00:38Z
dc.date.issued	2017-08
dc.identifier.isbn	9780999241103
dc.identifier.uri	http://hdl.handle.net/1721.1/115972
dc.description.abstract	A robot's ability to understand or ground natural language instructions is fundamentally tied to its knowledge about the surrounding world. We present an approach to grounding natural language utterances in the context of factual information gathered through natural-language interactions and past visual observations. A probabilistic model estimates, from a natural language utterance, the objects, relations, and actions the utterance refers to and the objectives for future robotic actions it implies, then generates a plan to execute those actions while updating a state representation to include newly acquired knowledge from the visual-linguistic context. Grounding a command necessitates a representation for past observations and interactions; however, maintaining the full context consisting of all possible observed objects, attributes, spatial relations, actions, etc., over time is intractable. Instead, our model, Temporal Grounding Graphs, maintains a learned state representation for a belief over factual groundings, those derived from natural-language interactions, and lazily infers new groundings from visual observations using the context implied by the utterance. This work significantly expands the range of language that a robot can understand by incorporating factual knowledge and observations of its workspace in its inference about the meaning and grounding of natural-language utterances.	en_US
dc.description.sponsorship	Toyota Research Institute (Award Number LP-C000765-SR)	en_US
dc.description.sponsorship	National Science Foundation (U.S.). Science and Technology Center (award CCF-1231216)	en_US
dc.description.sponsorship	Air Force Research Laboratory (Wright-Patterson Air Force Base, Ohio) (contract no. FA8750-15-C-0010)	en_US
dc.publisher	International Joint Conferences on Artificial Intelligence	en_US
dc.relation.isversionof	http://dx.doi.org/10.24963/IJCAI.2017/629	en_US
dc.rights	Creative Commons Attribution-Noncommercial-Share Alike	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-sa/4.0/	en_US
dc.source	Other repository	en_US
dc.title	Temporal Grounding Graphs for Language Understanding with Accrued Visual-Linguistic Context	en_US
dc.type	Article	en_US
dc.identifier.citation	Paul, Rohan, Andrei Barbu, Sue Felshin, Boris Katz, and Nicholas Roy. "Temporal Grounding Graphs for Language Understanding with Accrued Visual-Linguistic Context." Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (August 2017).	en_US
dc.contributor.department	Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Aeronautics and Astronautics	en_US
dc.contributor.mitauthor	Paul, Rohan
dc.contributor.mitauthor	Barbu, Andrei
dc.contributor.mitauthor	Katz, Boris
dc.contributor.mitauthor	Roy, Nicholas
dc.relation.journal	Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence	en_US
dc.eprint.version	Author's final manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/ConferencePaper	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dc.date.updated	2018-04-09T17:51:05Z
dspace.orderedauthors	Paul, Rohan; Barbu, Andrei; Felshin, Sue; Katz, Boris; Roy, Nicholas	en_US
dspace.embargo.terms	N	en_US
dc.identifier.orcid	https://orcid.org/0000-0002-9693-2237
dc.identifier.orcid	https://orcid.org/0000-0001-7626-9266
dc.identifier.orcid	https://orcid.org/0000-0002-8293-0492
mit.license	OPEN_ACCESS_POLICY	en_US

