dc.contributor.author: Walter, Matthew R.
dc.contributor.author: Friedman, Yuli
dc.contributor.author: Antone, Matthew
dc.contributor.author: Teller, Seth
dc.date.accessioned: 2011-06-02T16:10:32Z
dc.date.available: 2011-06-02T16:10:32Z
dc.date.issued: 2010-06
dc.identifier.isbn: 978-1-4244-7029-7
dc.identifier.other: INSPEC Accession Number: 11466676
dc.identifier.uri: http://hdl.handle.net/1721.1/63168
dc.description.abstract: This paper describes an algorithm enabling a human supervisor to convey task-level information to a robot by using stylus gestures to circle one or more objects within the field of view of a robot-mounted camera. These gestures serve to segment the unknown objects from the environment. Our method's main novelty lies in its use of appearance-based object “reacquisition” to reconstitute the supervisory gestures (and corresponding segmentation hints), even for robot viewpoints spatially and/or temporally distant from the viewpoint underlying the original gesture. Reacquisition is particularly challenging within relatively dynamic and unstructured environments. The technical challenge is to realize a reacquisition capability sufficiently robust to appearance variation to be useful in practice. Whenever the supervisor indicates an object, our system builds a feature-based appearance model of the object. When the object is detected from subsequent viewpoints, the system automatically and opportunistically incorporates additional observations, revising the appearance model and reconstituting the rough contours of the original circling gesture around that object. Our aim is to exploit reacquisition in order both to decrease the user burden of task specification and to increase the effective autonomy of the robot. We demonstrate and analyze the approach on a robotic forklift designed to approach, manipulate, transport, and place palletized cargo within an outdoor warehouse. We show that the method enables gesture reuse over long timescales and robot excursions (tens of minutes and hundreds of meters).
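
The reacquisition loop the abstract describes (build a feature-based appearance model from the gesture-segmented view, match it in later frames, and opportunistically fold confirmed detections back into the model) can be sketched roughly as below. This is a minimal illustration only, assuming OpenCV SIFT features and a Lowe ratio-test matcher; the function names, thresholds, and the bounding-box stand-in for the reconstituted circling gesture are illustrative assumptions, not the authors' implementation.

    # Hedged sketch of appearance-based object reacquisition, assuming
    # OpenCV SIFT features; names and thresholds are illustrative only.
    import cv2
    import numpy as np

    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    def build_model(image, gesture_mask):
        # Extract features only inside the gesture-segmented region.
        _, descriptors = sift.detectAndCompute(image, gesture_mask)
        return {"descriptors": descriptors}

    def reacquire(model, frame, ratio=0.75, min_matches=10):
        # Try to detect the modeled object in a new frame via a ratio test.
        keypoints, descriptors = sift.detectAndCompute(frame, None)
        if descriptors is None or model["descriptors"] is None:
            return None
        pairs = matcher.knnMatch(model["descriptors"], descriptors, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if len(good) < min_matches:
            return None
        # Rough stand-in for reconstituting the circling gesture: the
        # bounding box of the matched feature locations in the new view.
        points = np.float32([keypoints[m.trainIdx].pt for m in good])
        return cv2.boundingRect(points)

    def update_model(model, frame, box):
        # Opportunistically incorporate features from a confirmed new view,
        # revising the appearance model over time.
        x, y, w, h = box
        mask = np.zeros(frame.shape[:2], np.uint8)
        mask[y:y + h, x:x + w] = 255
        _, descriptors = sift.detectAndCompute(frame, mask)
        if descriptors is not None:
            model["descriptors"] = np.vstack([model["descriptors"], descriptors])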
dc.description.sponsorship: United States. Dept. of the Air Force (Air Force Contract FA8721-05-C-0002)
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers / IEEE Computer Society
dc.relation.isversionof: http://dx.doi.org/10.1109/CVPRW.2010.5543614
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source: IEEE
dc.title: Appearance-based object reacquisition for mobile manipulation
dc.type: Article
dc.identifier.citation: Walter, M.R., et al. “Appearance-based object reacquisition for mobile manipulation.” 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2010, pp. 1-8. Copyright © 2010, IEEE
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.approver: Teller, Seth
dc.contributor.mitauthor: Walter, Matthew R.
dc.contributor.mitauthor: Teller, Seth
dc.relation.journal: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2010 : San Francisco, Calif.). Workshops.
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
dspace.orderedauthors: Walter, Matthew R.; Friedman, Yuli; Antone, Matthew; Teller, Seth
mit.license: PUBLISHER_POLICY
mit.metadata.status: Complete

