Show simple item record

dc.contributor.advisor: Lozano-Pérez, Tomás
dc.contributor.advisor: Kaelbling, Leslie Pack
dc.contributor.author: Moses, Caris
dc.date.accessioned: 2022-08-29T16:03:59Z
dc.date.available: 2022-08-29T16:03:59Z
dc.date.issued: 2022-05
dc.date.submitted: 2022-06-21T19:15:15.418Z
dc.identifier.uri: https://hdl.handle.net/1721.1/144676
dc.description.abstract: Manipulation tasks such as construction and assembly require reasoning over complex object interactions. To successfully plan for, execute, and achieve a given task, these interactions must be modeled accurately, capturing low-level dynamics. Examples include modeling how a constrained object (such as a door) moves when grasped, the conditions under which an object will rest stably on another, or the friction constraints that allow an object to be pushed by another object. Acquiring models of object interactions for planning is a challenge. Existing engineering methods fail to accurately capture how an object’s properties, such as friction, shape, and mass distribution, affect the success of actions such as pushing and stacking. Therefore, in this work we leverage machine learning as a data-driven approach to acquiring action models, with the hope that one day a robot equipped with a learning strategy and some basic understanding of the world could learn composable action models useful for planning to achieve a myriad of tasks. We see this work as a small step in this direction. Acquiring accurate models through a data-driven approach requires the robot to conduct a vast number of information-rich interactions in the world. Collecting data on both real and simulated platforms can be time- and cost-prohibitive. In this work we take an active learning approach to aid the robot in finding the small subspace of informative actions within the large action space it has available to explore (all motions, grasps, and object interactions). Additionally, we supply the robot with optimistic action models, which are a relaxation of the true dynamics models. These models provide structure by constraining the exploration space in order to improve learning efficiency. Optimistic action models have the additional benefit of being easier to specify than fully accurate action models.
We are generally interested in the scenario in which a robot is given an initial (optimistic) action model, an active learning strategy, and a space of domain-specific problems to generalize over. First, we give a method for learning task models in a bandit problem setting for constrained mechanisms. Our method, Contextual Prior Prediction, enables quick task success at evaluation time through the use of a learned vision-based prior. Then, we give a novel active learning strategy, Sequential Actions, for learning action models for long-horizon manipulation tasks in a block stacking domain. Finally, we give results in a tool use domain for our Sequential Goals method, which improves upon Sequential Actions by exploring goal-directed plans at training time.
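To make the abstract's central idea concrete, here is a minimal sketch of an active-learning step in which an optimistic action model (a relaxation of the true dynamics) prunes the exploration space before an informativeness criterion selects the next action to try. Everything below — the 2-D action space, the feasibility predicates, and the ensemble-disagreement criterion — is an illustrative assumption, not the thesis's actual method or code.

```python
import random

random.seed(0)

# Hypothetical setup: actions are (x, y) pairs; the optimistic model admits
# every truly feasible action plus some infeasible ones.
def optimistic_feasible(action):
    x, _ = action
    return x >= 0.0

def true_success(action):
    # Ground truth the robot can only observe by executing the action.
    x, y = action
    return x >= 0.0 and y >= 0.0

# A tiny "ensemble" of learned hypotheses that disagree where data is scarce.
hypotheses = [
    lambda a: a[0] >= 0.0,                    # ignores the y-constraint
    lambda a: a[0] >= 0.0 and a[1] >= -0.5,   # partially learned y-constraint
]

def disagreement(action):
    # 0 = hypotheses agree (uninformative), 1 = they disagree (informative).
    return len({h(action) for h in hypotheses}) - 1

# Sample candidates from the (huge, in reality) action space.
candidates = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]

# 1) The optimistic model prunes actions it already knows are infeasible.
pruned = [a for a in candidates if optimistic_feasible(a)]

# 2) Execute the most informative remaining action and observe the outcome.
query = max(pruned, key=disagreement)
label = true_success(query)
print(len(candidates), len(pruned), label)
```

The point of the sketch is the division of labor: the optimistic model supplies cheap structure (never spend a trial on a provably infeasible action), while the data-driven criterion spends the remaining trials where the learned model is most uncertain.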
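The bandit setting mentioned above can likewise be illustrated with a small sketch: Thompson sampling over Bernoulli arms, comparing a flat prior against an informative one (standing in for the kind of learned, vision-based prior the abstract describes). The arm probabilities, prior parameters, and trial count are all made-up illustration values, not results or parameters from the thesis.

```python
import random

random.seed(1)

true_p = [0.2, 0.5, 0.9]  # unknown success probability of each arm

def thompson(priors, steps=200):
    """Run Thompson sampling from the given Beta(alpha, beta) priors."""
    ab = [list(p) for p in priors]
    successes = 0
    for _ in range(steps):
        # Sample a plausible success rate per arm and pull the best-looking one.
        samples = [random.betavariate(a, b) for a, b in ab]
        arm = samples.index(max(samples))
        reward = random.random() < true_p[arm]
        successes += reward
        # Posterior update: Beta is conjugate to the Bernoulli likelihood.
        ab[arm][0] += reward
        ab[arm][1] += 1 - reward
    return successes

flat = thompson([(1, 1)] * 3)                  # uninformative priors
informed = thompson([(1, 4), (1, 1), (4, 1)])  # prior already favors the best arm
print(flat, informed)
```

An informative prior shifts early exploration toward the arm it favors, which is the mechanism by which a learned prior can yield quick task success at evaluation time.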
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright MIT
dc.rights.uri: http://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Optimistic Active Learning of Task and Action Models for Robotic Manipulation
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.orcid: https://orcid.org/0000-0002-6617-616X
mit.thesis.degree: Doctoral
thesis.degree.name: Doctor of Philosophy

