Show simple item record

dc.contributor.advisor: Antonio Torralba and Russ Tedrake. (en_US)
dc.contributor.author: Li, Yunzhu (Scientist in electrical engineering and computer science), Massachusetts Institute of Technology. (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. (en_US)
dc.date.accessioned: 2020-09-15T21:53:33Z
dc.date.available: 2020-09-15T21:53:33Z
dc.date.copyright: 2020 (en_US)
dc.date.issued: 2020 (en_US)
dc.identifier.uri: https://hdl.handle.net/1721.1/127352
dc.description: Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020 (en_US)
dc.description: Cataloged from the official PDF of thesis. (en_US)
dc.description: Includes bibliographical references (pages 55-58). (en_US)
dc.description.abstract: Compared with off-the-shelf physics engines, a learnable simulator has a stronger ability to adapt to unseen objects, scenes, and tasks. However, existing models such as Interaction Networks only work for fully observable systems and only consider pairwise interactions within a single time step, both of which restrict their use in practical systems. We introduce Propagation Networks (PropNets), a differentiable, learnable dynamics model that handles partially observable scenarios and enables instantaneous propagation of signals beyond pairwise interactions. In the second half of the thesis, I discuss our attempt to extend PropNets to learn a particle-based simulator for handling materials of various substances (rigid bodies, soft bodies, liquids, and gases), each with distinct physical behaviors. Combining learning with particle-based systems brings two major benefits: first, the learned simulator, like other particle-based systems, applies broadly to objects of different materials; second, the particle-based representation imposes a strong inductive bias for learning, since particles of the same type share the same dynamics. We demonstrate that our models not only outperform current learnable physics engines in forward simulation but also achieve superior performance on various control tasks, such as manipulating a pile of boxes, a cup of water, and a deformable foam, with experiments both in simulation and in the real world. Compared with existing model-free deep reinforcement learning algorithms, model-based control with our models is also more accurate, more efficient, and more generalizable to new, partially observable scenes and tasks. (en_US)
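The abstract's central mechanism is propagating signals beyond pairwise interactions within a single forward call. The toy sketch below illustrates that idea only; it is not the thesis's implementation. The function name `propagate`, the 0.5 message weight, and the additive update are hypothetical stand-ins for the learned message and update functions a PropNet-style model would use.

```python
# Illustrative sketch (not the thesis implementation): multi-step message
# propagation on a directed graph, so an effect can travel several hops
# per forward call, unlike a single round of pairwise interactions.

def propagate(node_states, edges, num_steps=3):
    """Propagate scalar 'effects' along directed edges for num_steps rounds.

    node_states: list of floats, one per node
    edges: list of (src, dst) index pairs
    Returns the node states after num_steps rounds of propagation.
    """
    states = list(node_states)
    for _ in range(num_steps):
        # Each round, every edge passes a fraction of the source state to
        # its destination; a stand-in for a learned message function.
        messages = [0.0] * len(states)
        for src, dst in edges:
            messages[dst] += 0.5 * states[src]
        # Stand-in for a learned update function: fold messages into states.
        states = [s + m for s, m in zip(states, messages)]
    return states

# On the chain 0 -> 1 -> 2, node 0's state reaches node 2 only after two
# or more propagation steps; one round of pairwise interaction cannot.
chain = [(0, 1), (1, 2)]
out = propagate([1.0, 0.0, 0.0], chain, num_steps=2)
```

Running the example, node 2 is nonzero after two steps but stays at zero after one, which is the gap between single-step pairwise models and multi-step propagation.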
dc.description.statementofresponsibility: by Yunzhu Li. (en_US)
dc.format.extent: 58 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science. (en_US)
dc.title: Learning compositional dynamics models for model-based control (en_US)
dc.type: Thesis (en_US)
dc.description.degree: S.M. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (en_US)
dc.identifier.oclc: 1192486739 (en_US)
dc.description.collection: S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science (en_US)
dspace.imported: 2020-09-15T21:53:31Z (en_US)
mit.thesis.degree: Master (en_US)
mit.thesis.department: EECS (en_US)

