| dc.contributor.author | Kollar, Thomas Fleming | |
| dc.contributor.author | Tellex, Stefanie A | |
| dc.contributor.author | Roy, Deb K | |
| dc.contributor.author | Roy, Nicholas | |
| dc.date.accessioned | 2018-04-11T15:17:57Z | |
| dc.date.available | 2018-04-11T15:17:57Z | |
| dc.date.issued | 2014 | |
| dc.identifier.isbn | 978-3-642-28571-4 | |
| dc.identifier.isbn | 978-3-642-28572-1 | |
| dc.identifier.issn | 1610-7438 | |
| dc.identifier.issn | 1610-742X | |
| dc.identifier.uri | http://hdl.handle.net/1721.1/114657 | |
| dc.description.abstract | To be useful teammates to human partners, robots must be able to follow spoken instructions given in natural language. An important class of instructions involves interacting with people, such as “Follow the person to the kitchen” or “Meet the person at the elevators.” These instructions require that the robot fluidly react to changes in the environment, not simply follow a pre-computed plan. We present an algorithm for understanding natural language commands with three components. First, we create a cost function that scores the language according to how well it matches a candidate plan in the environment, defined as the log-likelihood of the plan given the command. Components of the cost function include novel models for the meanings of motion verbs such as “follow,” “meet,” and “avoid,” as well as spatial relations such as “to” and landmark phrases such as “the kitchen.” Second, an inference method uses this cost function to perform forward search, finding a plan that matches the natural language command. Third, a high-level controller repeatedly calls the inference method at each timestep to compute a new plan in response to changes in the environment, such as the movement of the human partner or other people in the scene. When a command consists of more than a single task, the controller switches to the next task when an earlier one is satisfied. We evaluate our approach on a set of example tasks that require the ability to follow both simple and complex natural language commands. Keywords: Cost Function; Spatial Relation; State Sequence; Edit Distance; Statistical Machine Translation | en_US |
| dc.description.sponsorship | United States. Office of Naval Research (Grant MURI N00014-07-1-0749) | en_US |
| dc.publisher | Springer Nature | en_US |
| dc.relation.isversionof | http://dx.doi.org/10.1007/978-3-642-28572-1_3 | en_US |
| dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
| dc.source | Other repository | en_US |
| dc.title | Grounding Verbs of Motion in Natural Language Commands to Robots | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Kollar, Thomas et al. “Grounding Verbs of Motion in Natural Language Commands to Robots.” edited by O. Khatib, V. Kumar and G. Sukhatme. Experimental Robotics (2014): 31–47 © 2014 Springer-Verlag Berlin Heidelberg | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics | en_US |
| dc.contributor.mitauthor | Kollar, Thomas Fleming | |
| dc.contributor.mitauthor | Tellex, Stefanie A | |
| dc.contributor.mitauthor | Roy, Deb K | |
| dc.contributor.mitauthor | Roy, Nicholas | |
| dc.relation.journal | Experimental Robotics | en_US |
| dc.eprint.version | Author's final manuscript | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dc.date.updated | 2018-04-10T14:50:47Z | |
| dspace.orderedauthors | Kollar, Thomas; Tellex, Stefanie; Roy, Deb; Roy, Nicholas | en_US |
| dspace.embargo.terms | N | en_US |
| dc.identifier.orcid | https://orcid.org/0000-0002-4333-7194 | |
| dc.identifier.orcid | https://orcid.org/0000-0002-8293-0492 | |
| mit.license | OPEN_ACCESS_POLICY | en_US |