Show simple item record

dc.contributor.author: Wei, Yuan
dc.contributor.author: Brunskill, Emma
dc.contributor.author: Kollar, Thomas Fleming
dc.contributor.author: Roy, Nicholas
dc.date.accessioned: 2011-10-03T20:28:33Z
dc.date.available: 2011-10-03T20:28:33Z
dc.date.issued: 2009-05
dc.identifier.isbn: 978-1-4244-2788-8
dc.identifier.issn: 1050-4729
dc.identifier.uri: http://hdl.handle.net/1721.1/66168
dc.description.abstract: An important component of human-robot interaction is that people need to be able to instruct robots to move to other locations using naturally given directions. When giving directions, people often make mistakes such as labelling errors (e.g., left vs. right) and errors of omission (skipping important decision points in a sequence). Furthermore, people often use multiple levels of granularity in specifying directions, referring to locations using single object landmarks, multiple landmarks in a given location, or identifying large regions as a single location. The challenge is to identify the correct path to a destination from a sequence of noisy, possibly erroneous directions. In our work we cast this problem as probabilistic inference: given a set of directions, an agent should automatically find the path whose geometry and physical appearance maximize the likelihood of those directions. We use a specific variant of a Markov Random Field (MRF) to represent our model, and gather multi-granularity representation information using existing large tagged datasets. On a dataset of route directions collected in a large third-floor university building, we found that our algorithm correctly inferred the true final destination in 47 out of the 55 cases successfully followed by human volunteers. These results suggest that our algorithm performs well relative to human users. In the future this work will be included in a broader system for autonomously constructing environmental representations that support natural human-robot interaction for direction giving. [en_US]
dc.description.sponsorship: United States. Air Force Office of Scientific Research (Agile Robotics project, contract number 7000038334) [en_US]
dc.description.sponsorship: National Science Foundation (U.S.) (NSF Division of Information and Intelligent Systems under grant # 0546467) [en_US]
dc.description.sponsorship: Massachusetts Institute of Technology (Hugh Hampton Young Memorial Fund Fellowship) [en_US]
dc.description.sponsorship: United States. Office of Naval Research (MURI N00014-07-1-0749) [en_US]
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1109/ROBOT.2009.5152775 [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: IEEE [en_US]
dc.title: Where to go: Interpreting natural directions using global inference [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Yuan Wei et al. “Where to go: Interpreting natural directions using global inference.” Robotics and Automation, 2009. ICRA’09. IEEE International Conference on. 2009. 3761-3767. © 2009 IEEE. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory [en_US]
dc.contributor.approver: Roy, Nicholas
dc.contributor.mitauthor: Wei, Yuan
dc.contributor.mitauthor: Brunskill, Emma
dc.contributor.mitauthor: Kollar, Thomas Fleming
dc.contributor.mitauthor: Roy, Nicholas
dc.relation.journal: IEEE International Conference on Robotics and Automation, 2009. ICRA '09 [en_US]
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
dspace.orderedauthors: Yuan Wei; Brunskill, E.; Kollar, T.; Roy, N. [en]
dc.identifier.orcid: https://orcid.org/0000-0002-8293-0492
mit.license: PUBLISHER_POLICY [en_US]
mit.metadata.status: Complete
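
For context on the inference described in the abstract above, here is a minimal, hypothetical Python sketch of the general idea: enumerate candidate paths through a small map and return the one that maximizes the likelihood of a noisy sequence of landmark mentions. The toy map, landmark sets, noise parameters, and all names here are illustrative assumptions, not the paper's MRF model, dataset, or code.

```python
# Hypothetical sketch: pick the path that maximizes the likelihood of a
# noisy sequence of natural-language directions. The map, landmarks, and
# noise model are invented for illustration only.

# Toy environment: each node is labeled with the landmarks visible there.
NODE_LANDMARKS = {
    "lobby":   {"elevator", "doors"},
    "hall_a":  {"printer"},
    "hall_b":  {"vending machine"},
    "lab":     {"whiteboard", "robots"},
    "kitchen": {"coffee machine"},
}
EDGES = {
    "lobby":   ["hall_a", "hall_b"],
    "hall_a":  ["lab"],
    "hall_b":  ["kitchen"],
    "lab":     [],
    "kitchen": [],
}

def step_likelihood(mention, node, p_match=0.8, p_miss=0.1):
    """P(direction step | node): high if the mentioned landmark is visible,
    small but nonzero otherwise to tolerate labelling errors and omissions."""
    return p_match if mention in NODE_LANDMARKS[node] else p_miss

def path_likelihood(directions, path):
    """Product of per-step likelihoods; assumes one direction per node after
    the start (a simplification of any realistic observation model)."""
    if len(path) - 1 != len(directions):
        return 0.0
    likelihood = 1.0
    for mention, node in zip(directions, path[1:]):
        likelihood *= step_likelihood(mention, node)
    return likelihood

def enumerate_paths(start, max_len):
    """Yield all simple paths with up to max_len nodes (fine for a toy map)."""
    frontier = [[start]]
    while frontier:
        path = frontier.pop()
        yield path
        if len(path) < max_len:
            for nxt in EDGES[path[-1]]:
                if nxt not in path:
                    frontier.append(path + [nxt])

def infer_path(directions, start="lobby"):
    """Return the maximum-likelihood path for the given direction sequence."""
    return max(enumerate_paths(start, max_len=len(directions) + 1),
               key=lambda p: path_likelihood(directions, p))

if __name__ == "__main__":
    # "Go past the printer, then into the room with the whiteboard."
    print(infer_path(["printer", "whiteboard"]))  # ['lobby', 'hall_a', 'lab']
```

A real system would replace the brute-force enumeration with inference in the MRF over a large building map and a much richer observation model, but the argmax-over-paths structure is the same idea the abstract describes.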

