Show simple item record

dc.contributor.advisor: Julie A. Shah. [en_US]
dc.contributor.author: Gutierrez, Reymundo A. (Reymundo Alejandro) [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. [en_US]
dc.date.accessioned: 2018-01-12T21:00:13Z
dc.date.available: 2018-01-12T21:00:13Z
dc.date.copyright: 2016 [en_US]
dc.date.issued: 2016 [en_US]
dc.identifier.uri: http://hdl.handle.net/1721.1/113153
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. [en_US]
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. [en_US]
dc.description: Cataloged from student-submitted PDF version of thesis. [en_US]
dc.description: Includes bibliographical references (pages 79-82). [en_US]
dc.description.abstract: When people plan their motions for dexterous work, they implicitly consider the next likely step in the action sequence. Almost without conscious thought, we select a grasp that satisfies the implicit constraints of the task to be performed. A robot tasked with dexterous manipulation should likewise aim to grasp the intended object in a way that makes the next step straightforward. In some cases, neglecting these implicit constraints can leave the object unable to be manipulated in the desired manner. While recent work has begun to address task-dependent constraints, existing approaches require direct specification of task constraints or rely on grasp datasets with manually defined task labels. In this thesis, we present a framework that leverages human demonstration to learn task-oriented grasp heuristics for a set of known objects in an unsupervised manner, and we define a procedure to instantiate grasps from the learned models. Equating distinct motion profiles with the execution of distinct tasks, our approach incorporates unsupervised motion clustering algorithms into a grasp learning pipeline, using the motion observed during human demonstration to partition the accompanying grasp examples into tasks. To evaluate the framework, we collected a set of human demonstrations of real-world manipulation tasks. The framework with unsupervised task clustering produced results comparable to the semi-supervised condition: it discovered the correct relationships between tasks and objects, and the distributions of the resultant grasp point models followed intuitive heuristic rules (e.g., handle grasps for tools). The grasps instantiated from these models followed the learned heuristics but exhibited some limitations due to the choice of grasp model and the instantiation method. Overall, this work demonstrates that incorporating unsupervised motion clustering techniques into a grasp learning pipeline can produce task-oriented models without the typical overhead of direct task-constraint encoding or hand labeling of datasets. [en_US]
dc.description.statementofresponsibility: by Reymundo A. Gutierrez. [en_US]
dc.format.extent: 82 pages [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Electrical Engineering and Computer Science. [en_US]
dc.title: Learning task-oriented grasp heuristics from demonstration [en_US]
dc.type: Thesis [en_US]
dc.description.degree: M. Eng. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 1018307003 [en_US]
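
The abstract above describes partitioning grasp demonstrations into tasks by clustering their motion profiles, so that task labels emerge from motion structure rather than hand annotation. The following is a minimal sketch of that partitioning step, not the thesis's actual method: the feature representation, the use of k-means, the cluster count, and all names (e.g., partition_grasps_by_motion, motion_features) are illustrative assumptions, since this record does not specify the clustering algorithm or features used.

```python
# Illustrative sketch only: cluster demonstrations by a summary of their
# motion, then group the accompanying grasp examples by cluster label.
# k-means and the feature layout are assumptions, not the thesis's method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler


def partition_grasps_by_motion(motion_features, grasp_examples, n_tasks=3):
    """Cluster demos by motion profile and partition grasps accordingly.

    motion_features: (n_demos, n_features) array summarizing each demo's
                     motion (e.g., velocity statistics) -- an assumed encoding.
    grasp_examples:  list of n_demos grasp records aligned with the demos.
    """
    # Standardize features so no single dimension dominates the distance metric.
    X = StandardScaler().fit_transform(motion_features)
    labels = KMeans(n_clusters=n_tasks, n_init=10, random_state=0).fit_predict(X)

    # Group grasp examples by inferred task label; each group could then be
    # used to fit a task-specific grasp-point model.
    partitions = {}
    for label, grasp in zip(labels, grasp_examples):
        partitions.setdefault(int(label), []).append(grasp)
    return partitions


if __name__ == "__main__":
    # Synthetic stand-in data: 12 demos with 4 motion features each.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(12, 4))
    grasps = [f"grasp_{i}" for i in range(12)]
    for task, members in partition_grasps_by_motion(feats, grasps).items():
        print(task, members)
```

Any clustering algorithm that requires no labels could stand in for k-means here; the point the abstract makes is only that the task partition is discovered from the demonstrations themselves.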

