Show simple item record

dc.contributor.author	Kollar, Thomas Fleming
dc.contributor.author	Tellex, Stefanie A
dc.contributor.author	Roy, Deb K
dc.contributor.author	Roy, Nicholas
dc.date.accessioned	2018-04-11T15:17:57Z
dc.date.available	2018-04-11T15:17:57Z
dc.date.issued	2014
dc.identifier.isbn	978-3-642-28571-4
dc.identifier.isbn	978-3-642-28572-1
dc.identifier.issn	1610-7438
dc.identifier.issn	1610-742X
dc.identifier.uri	http://hdl.handle.net/1721.1/114657
dc.description.abstract	To be useful teammates to human partners, robots must be able to follow spoken instructions given in natural language. An important class of instructions involves interacting with people, such as “Follow the person to the kitchen” or “Meet the person at the elevators.” These instructions require that the robot fluidly react to changes in the environment, not simply follow a pre-computed plan. We present an algorithm for understanding natural language commands with three components. First, we create a cost function that scores the language according to how well it matches a candidate plan in the environment, defined as the log-likelihood of the plan given the command. Components of the cost function include novel models for the meanings of motion verbs such as “follow,” “meet,” and “avoid,” as well as spatial relations such as “to” and landmark phrases such as “the kitchen.” Second, an inference method uses this cost function to perform forward search, finding a plan that matches the natural language command. Third, a high-level controller calls the inference method at each timestep to compute a new plan in response to changes in the environment, such as the movement of the human partner or other people in the scene. When a command consists of more than a single task, the controller switches to the next task when an earlier one is satisfied. We evaluate our approach on a set of example tasks that require the ability to follow both simple and complex natural language commands. Keywords: Cost Function; Spatial Relation; State Sequence; Edit Distance; Statistical Machine Translation	en_US
dc.description.sponsorship	United States. Office of Naval Research (Grant MURI N00014-07-1-0749)	en_US
dc.publisher	Springer Nature	en_US
dc.relation.isversionof	http://dx.doi.org/10.1007/978-3-642-28572-1_3	en_US
dc.rights	Creative Commons Attribution-Noncommercial-Share Alike	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-sa/4.0/	en_US
dc.source	Other repository	en_US
dc.title	Grounding Verbs of Motion in Natural Language Commands to Robots	en_US
dc.type	Article	en_US
dc.identifier.citation	Kollar, Thomas et al. “Grounding Verbs of Motion in Natural Language Commands to Robots.” Edited by O. Khatib, V. Kumar and G. Sukhatme. Experimental Robotics (2014): 31–47 © 2014 Springer-Verlag Berlin Heidelberg	en_US
dc.contributor.department	Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Aeronautics and Astronautics	en_US
dc.contributor.mitauthor	Kollar, Thomas Fleming
dc.contributor.mitauthor	Tellex, Stefanie A
dc.contributor.mitauthor	Roy, Deb K
dc.contributor.mitauthor	Roy, Nicholas
dc.relation.journal	Experimental Robotics	en_US
dc.eprint.version	Author's final manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/ConferencePaper	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dc.date.updated	2018-04-10T14:50:47Z
dspace.orderedauthors	Kollar, Thomas; Tellex, Stefanie; Roy, Deb; Roy, Nicholas	en_US
dspace.embargo.terms	N	en_US
dc.identifier.orcid	https://orcid.org/0000-0002-4333-7194
dc.identifier.orcid	https://orcid.org/0000-0002-8293-0492
mit.license	OPEN_ACCESS_POLICY	en_US
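The abstract describes a three-component approach: a cost function scoring candidate plans as the log-likelihood of the plan given the command, a forward-search inference method, and a high-level controller that replans at each timestep and advances to the next task once the current one is satisfied. The sketch below illustrates only that control flow; the function names, the toy cost, and the enumerated candidate plans are assumptions for illustration, not the authors' implementation.

```python
def plan_cost(plan, command):
    """Toy stand-in for the paper's cost function (negative log-likelihood
    of a plan given a command; lower is better). Here we simply prefer
    plans that end at the commanded goal location."""
    return 0.0 if plan and plan[-1] == command["goal"] else 1.0

def forward_search(command, candidate_plans):
    """Inference step: pick the candidate plan that best matches the command."""
    return min(candidate_plans, key=lambda p: plan_cost(p, command))

def run_controller(commands, observe, candidate_plans, max_steps=100):
    """High-level controller: replan every timestep so the robot can react
    to environment changes; switch tasks when the current one is satisfied."""
    task = 0
    for _ in range(max_steps):
        if task >= len(commands):
            return "done"
        state = observe()                     # environment may have changed
        plan = forward_search(commands[task], candidate_plans)
        if plan:
            state = plan[-1]                  # execute the plan (stubbed)
        if state == commands[task]["goal"]:   # current task satisfied?
            task += 1
    return "timeout"
```

For example, `run_controller([{"goal": "kitchen"}, {"goal": "elevators"}], lambda: "start", [["hall", "kitchen"], ["hall", "elevators"]])` completes both tasks in sequence.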

