DSpace@MIT

Hybrid learning for multi-step manipulation in collaborative robotics

Author(s)
Pérez D'Arpino, Claudia.
Download: 1124761409-MIT.pdf (20.79 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Julie A. Shah.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
I envision robots that can LEARN a model of the steps and the goal of a constrained multi-step manipulation task by observing human examples of the task, that are flexible enough to COLLABORATE with a human teammate to execute this task, and that are able to DISCOVER their own new strategy for performing the task in a manner that adapts well to unmodeled aspects of the physical world. In this thesis I formulate models and algorithms for hybrid learning, a framework in which a robot learns manipulation tasks by combining observational and self-learning, and develop a learning and collaboration workflow in the context of remote manipulation in shared autonomy. I show experimentally that this collaborative workflow significantly improves performance over other systems for remote manipulation. LEARN: I first present C-LEARN, an algorithm that enables robot learning of multi-step manipulation tasks from a single human demonstration.
 
I consider quasi-static tasks that are geometrically constrained. The robot uses demonstrations to formulate a task representation in terms of keyframes and geometric constraints that can be used by a motion planner to solve a new instance of the task. This work addresses the technical gap between learning from demonstrations and motion planning, effectively increasing the complexity of manipulation tasks that end users without programming experience can teach robots. COLLABORATE: Second, I present the integration of C-LEARN into a collaborative workflow for remote manipulation. This model is evaluated through a user study that compares four architectures for remote manipulation with expert operators. The proposed method results in task times comparable to direct teleoperation while increasing the accuracy of the execution.
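The keyframe-plus-constraint representation described above can be sketched as follows. This is a minimal illustrative sketch, not the thesis's implementation: the `Keyframe` class, the `segment_keyframes` helper, and the constraint names (`axis_aligned`, `in_contact`) are all hypothetical. It shows the core idea of collapsing a demonstrated trajectory into keyframes, emitting one keyframe whenever the set of active geometric constraints changes; a planner could then solve between consecutive keyframes subject to each segment's constraints.

```python
from dataclasses import dataclass

@dataclass
class Keyframe:
    """End-effector pose at a salient point of the demonstration."""
    pose: tuple              # (x, y, z) position; orientation omitted for brevity
    constraints: frozenset   # geometric constraints active at this keyframe

def segment_keyframes(demo):
    """Collapse a demonstrated trajectory into keyframes: emit a new
    keyframe each time the set of active geometric constraints changes."""
    keyframes, prev = [], None
    for pose, constraints in demo:
        cset = frozenset(constraints)
        if cset != prev:
            keyframes.append(Keyframe(pose, cset))
            prev = cset
    return keyframes

# Hypothetical single demonstration of a constrained insertion motion.
demo = [
    ((0.0, 0.0, 0.3), []),                                 # free-space approach
    ((0.1, 0.0, 0.2), []),
    ((0.2, 0.0, 0.1), ["axis_aligned"]),                   # constrained descent begins
    ((0.2, 0.0, 0.05), ["axis_aligned"]),
    ((0.2, 0.0, 0.0), ["axis_aligned", "in_contact"]),     # goal contact
]
kfs = segment_keyframes(demo)
# Three constraint regimes in the demo -> three keyframes
```

Note that only constraint *changes* matter, which is why a single demonstration can suffice: the segmentation recovers task structure rather than memorizing the trajectory.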
 
DISCOVER: Finally, I present the hybrid learning framework for discovering novel strategies for multi-step manipulation, by combining learning from demonstrations and self-learning through exploration in a simulation. I demonstrate my approach by tasking a robot to manipulate blocks and assemble a stable structure. While the desired geometry is specified by the example, the underlying physics is unobservable. The robot uses Monte Carlo Tree Search (MCTS) with interleaved task and motion planning in simulation to find a robust strategy to accomplish the task.
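The self-learning loop above can be illustrated with a toy MCTS sketch. Everything here is a hypothetical stand-in, not the thesis's system: the "simulation" is reduced to a weight check (a tower is stable only if no block rests on a lighter one), whereas the actual work interleaves task and motion planning in a physics simulation. The sketch shows the mechanics the abstract names: UCB1 selection, expansion, random rollouts scored by the simulator, and backpropagation, converging on a robust assembly order.

```python
import math
import random

# Toy "physics": hypothetical block weights; a stacking order is stable
# only if each block is at least as heavy as the block placed on top of it.
WEIGHTS = {"base": 3, "mid": 2, "top": 1}
BLOCKS = frozenset(WEIGHTS)

def stable(order):
    """Stand-in for a physics simulation: heavier blocks must go first."""
    return all(WEIGHTS[a] >= WEIGHTS[b] for a, b in zip(order, order[1:]))

class Node:
    def __init__(self, placed):
        self.placed = placed       # tuple of blocks placed so far
        self.children = {}         # action -> child Node
        self.visits = 0
        self.value = 0.0

def rollout(placed):
    """Finish the assembly with random actions; score it in 'simulation'."""
    rest = list(BLOCKS - set(placed))
    random.shuffle(rest)
    return 1.0 if stable(placed + tuple(rest)) else 0.0

def mcts(iterations=300, c=1.4):
    root = Node(())
    for _ in range(iterations):
        node, path = root, [root]
        # Selection: descend by UCB1; expand the first untried action found.
        while len(node.placed) < len(BLOCKS):
            untried = sorted(BLOCKS - set(node.placed) - set(node.children))
            if untried:
                a = random.choice(untried)
                node.children[a] = child = Node(node.placed + (a,))
                path.append(child)
                node = child
                break
            parent = node
            _, node = max(
                parent.children.items(),
                key=lambda kv: kv[1].value / kv[1].visits
                + c * math.sqrt(math.log(parent.visits) / kv[1].visits))
            path.append(node)
        # Simulation and backpropagation.
        reward = rollout(node.placed)
        for n in path:
            n.visits += 1
            n.value += reward
    # Extract the discovered strategy greedily by visit count.
    order, node = [], root
    while node.children:
        a, node = max(node.children.items(), key=lambda kv: kv[1].visits)
        order.append(a)
    return tuple(order)
```

Replacing `stable` with calls into a physics simulator, and each discrete action with a task-and-motion-planning query, turns this skeleton into the kind of hybrid search the chapter describes.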
 
Description
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
 
Cataloged from PDF version of thesis.
 
Includes bibliographical references (pages 131-140).
 
Date issued
2019
URI
https://hdl.handle.net/1721.1/122740
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Doctoral Theses

Content created by the MIT Libraries, CC BY-NC unless otherwise noted. Notify us about copyright concerns.