One-shot visual appearance learning for mobile manipulation
Author(s)
Walter, Matthew R.; Friedman, Yuli; Antone, Matthew; Teller, Seth
Download: Teller_One-shot visual appearance.pdf (7.569MB)
Terms of use
Open Access Policy; Creative Commons Attribution-Noncommercial-Share Alike
Abstract
We describe a vision-based algorithm that enables a robot to robustly detect specific objects in a scene following an initial segmentation hint from a human user. The novelty lies in the ability to ‘reacquire’ objects over extended spatial and temporal excursions within challenging environments based on a single training example. The primary difficulty is achieving an effective reacquisition capability that is robust to the effects of local clutter, lighting variation, and object relocation. We overcome these challenges through an adaptive detection algorithm that automatically generates multiple-view appearance models for each object online. As the robot navigates within the environment and the object is detected from different viewpoints, the one-shot learner opportunistically and automatically incorporates additional observations into each model. To overcome the effects of ‘drift’ common to adaptive learners, the algorithm imposes simple requirements on the geometric consistency of candidate observations. Motivating our reacquisition strategy is our work developing a mobile manipulator that interprets and autonomously performs commands conveyed by a human user. The ability to detect specific objects and reconstitute the user’s segmentation hints enables the robot to be situationally aware; this awareness supports rich command-and-control mechanisms and affords natural interaction. We demonstrate one such capability that allows the human to give the robot a ‘guided tour’ of named objects within an outdoor environment and, hours later, to direct the robot to manipulate those objects by name using spoken instructions. We implemented our appearance-based detection strategy on our robotic manipulator as it operated over multiple days in different outdoor environments. We evaluated the algorithm’s performance under challenging conditions that include scene clutter, lighting and viewpoint variation, object ambiguity, and object relocation. The results demonstrate a reacquisition capability that is effective in real-world settings.
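The adaptive update described in the abstract lends itself to a short illustration. The Python sketch below shows the general pattern: a model is seeded from a single user-provided segmentation, an appearance match is accepted only when it is also geometrically consistent with the object's known location, and accepted observations grow the model online. The class, the cosine-similarity matcher, and both thresholds are illustrative assumptions for exposition, not the paper's actual detector.

import numpy as np

class OneShotAppearanceModel:
    """Multiple-view appearance model seeded by a single user-segmented view.

    Hypothetical sketch; descriptor matching and thresholds are assumptions.
    """

    def __init__(self, descriptor, position, match_thresh=0.8, geom_thresh=1.0):
        self.views = [np.asarray(descriptor, dtype=float)]  # the one-shot training example
        self.position = np.asarray(position, dtype=float)   # object location in the world frame
        self.match_thresh = match_thresh                    # appearance-similarity gate
        self.geom_thresh = geom_thresh                      # geometric-consistency gate (metres)

    def _similarity(self, descriptor):
        # Cosine similarity against the best-matching stored view.
        d = np.asarray(descriptor, dtype=float)
        return max(float(np.dot(v, d) / (np.linalg.norm(v) * np.linalg.norm(d)))
                   for v in self.views)

    def detect(self, descriptor, estimated_position):
        """Return True on a match; opportunistically add the new view to the model."""
        if self._similarity(descriptor) < self.match_thresh:
            return False
        # Geometric-consistency requirement: the candidate must lie near the
        # object's known location, which guards against the 'drift' that the
        # abstract notes is common to adaptive learners.
        offset = np.asarray(estimated_position, dtype=float) - self.position
        if np.linalg.norm(offset) > self.geom_thresh:
            return False
        self.views.append(np.asarray(descriptor, dtype=float))  # grow the model online
        return True

# Seed the model from one user hint, then reacquire from a new viewpoint.
model = OneShotAppearanceModel(descriptor=[0.9, 0.1, 0.4], position=[2.0, 5.0, 0.0])
print(model.detect([0.88, 0.12, 0.42], [2.1, 4.9, 0.0]))  # True: consistent, view added

The design point the sketch is meant to convey is that an appearance match alone never extends the model; the geometric gate is what keeps the adaptive learner from drifting.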
Date issued
2012-04
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
International Journal of Robotics Research
Publisher
Sage Publications
Citation
Walter, M. R. et al. “One-shot Visual Appearance Learning for Mobile Manipulation.” The International Journal of Robotics Research 31.4 (2012): 554–567.
Version: Author's final manuscript
ISSN
0278-3649 (print)
1741-3176 (online)