
dc.contributor.advisor: Leslie Pack Kaelbling
dc.contributor.author: LaGrassa, Alex Licari
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.date.accessioned: 2020-03-24T15:36:27Z
dc.date.available: 2020-03-24T15:36:27Z
dc.date.copyright: 2019
dc.date.issued: 2019
dc.identifier.uri: https://hdl.handle.net/1721.1/124251
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
dc.description: Cataloged from student-submitted PDF version of thesis.
dc.description: Includes bibliographical references (pages 73-78).
dc.description.abstract: Engineering reinforcement learning agents for a particular target domain requires design decisions such as the choice of learning algorithm and state representation. We empirically study the performance of three reference implementations of model-free reinforcement learning algorithms: Covariance Matrix Adaptation Evolution Strategy, Deep Deterministic Policy Gradients, and Proximal Policy Optimization. We compare their performance across target domains to quantify their dependence on features of the environment, studying the effects of actuation noise, observation noise, reward sparsity, and task horizon. We then explore automatically generated state encodings for learning, using a lower-dimensional encoding of high-dimensional sensor data. Finally, a proof-of-concept end-to-end system for scooping beads of different sizes in the real world generates force traces and follows them, together with a positional controller, to execute a scoop.
dc.description.statementofresponsibility: by Alex Licari LaGrassa
dc.format.extent: 78 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Electrical Engineering and Computer Science
dc.title: Selecting appropriate reinforcement-learning algorithms for robot manipulation domains
dc.type: Thesis
dc.description.degree: M. Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 1145122826
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
dspace.imported: 2020-03-24T15:36:26Z
mit.thesis.degree: Master
mit.thesis.department: EECS

