| dc.contributor.author | Walter, Matthew R. | |
| dc.contributor.author | Friedman, Yuli | |
| dc.contributor.author | Antone, Matthew | |
| dc.contributor.author | Teller, Seth | |
| dc.date.accessioned | 2011-06-02T16:10:32Z | |
| dc.date.available | 2011-06-02T16:10:32Z | |
| dc.date.issued | 2010-06 | |
| dc.identifier.isbn | 978-1-4244-7029-7 | |
| dc.identifier.other | INSPEC Accession Number: 11466676 | |
| dc.identifier.uri | http://hdl.handle.net/1721.1/63168 | |
| dc.description.abstract | This paper describes an algorithm enabling a human supervisor to convey task-level information to a robot by using stylus gestures to circle one or more objects within the field of view of a robot-mounted camera. These gestures serve to segment the unknown objects from the environment. Our method's main novelty lies in its use of appearance-based object “reacquisition” to reconstitute the supervisory gestures (and corresponding segmentation hints), even for robot viewpoints spatially and/or temporally distant from the viewpoint underlying the original gesture. Reacquisition is particularly challenging within relatively dynamic and unstructured environments. The technical challenge is to realize a reacquisition capability sufficiently robust to appearance variation to be useful in practice. Whenever the supervisor indicates an object, our system builds a feature-based appearance model of the object. When the object is detected from subsequent viewpoints, the system automatically and opportunistically incorporates additional observations, revising the appearance model and reconstituting the rough contours of the original circling gesture around that object. Our aim is to exploit reacquisition in order to both decrease the user burden of task specification and increase the effective autonomy of the robot. We demonstrate and analyze the approach on a robotic forklift designed to approach, manipulate, transport, and place palletized cargo within an outdoor warehouse. We show that the method enables gesture reuse over long timescales and robot excursions (tens of minutes and hundreds of meters). | en_US |
| dc.description.sponsorship | United States. Dept. of the Air Force (Air Force Contract FA8721-05-C-0002) | en_US |
| dc.language.iso | en_US | |
| dc.publisher | Institute of Electrical and Electronics Engineers / IEEE Computer Society | en_US |
| dc.relation.isversionof | http://dx.doi.org/10.1109/CVPRW.2010.5543614 | en_US |
| dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
| dc.source | IEEE | en_US |
| dc.title | Appearance-based object reacquisition for mobile manipulation | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Walter, M.R., et al. “Appearance-based object reacquisition for mobile manipulation.” 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 2010. 1-8. Copyright © 2010, IEEE | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
| dc.contributor.approver | Teller, Seth | |
| dc.contributor.mitauthor | Walter, Matthew R. | |
| dc.contributor.mitauthor | Teller, Seth | |
| dc.relation.journal | IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2010 : San Francisco, Calif.). Workshops. | en_US |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| dspace.orderedauthors | Walter, Matthew R.; Friedman, Yuli; Antone, Matthew; Teller, Seth | en |
| mit.license | PUBLISHER_POLICY | en_US |
| mit.metadata.status | Complete | |