Show simple item record

dc.contributor.advisor: Paul F. Mende [en_US]
dc.contributor.author: Hao, Parker (Parker Ruochen) [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.date.accessioned: 2020-09-15T21:56:03Z
dc.date.available: 2020-09-15T21:56:03Z
dc.date.copyright: 2020 [en_US]
dc.date.issued: 2020 [en_US]
dc.identifier.uri: https://hdl.handle.net/1721.1/127403
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020 [en_US]
dc.description: Cataloged from the official PDF of thesis. [en_US]
dc.description: Includes bibliographical references (pages 19-20). [en_US]
dc.description.abstract: The exploration technique is key to reaching optimal results in reinforcement learning in a time-efficient manner. When reinforcement learning was first proposed, exploration was implemented as choosing randomly across the action space, leaving a potentially exponential number of state-action pairs to explore. Over the years, more efficient exploration techniques were proposed, enabling faster convergence and better results across different application domains. With the growing interest in non-stationary environments, some of those exploration techniques have been studied in settings where the optimal state-action pair changes across different periods of the learning process. Such techniques have performed well in control setups where the targets are non-stationary but move continuously. However, they have not been extensively tested in environments involving jumps or discontinuous regime changes. This paper analyzes methods for achieving comparable exploration performance in such challenging environments and proposes new techniques that let the agent capture the regime changes of non-stationary environments as more complex states or as intrinsic rewards. (A minimal illustrative sketch of these exploration ideas follows this record.) [en_US]
dc.description.statementofresponsibility: by Parker Hao. [en_US]
dc.format.extent: 20 pages [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Electrical Engineering and Computer Science [en_US]
dc.title: Efficient exploration of reinforcement learning in non-stationary environments with more complex state dynamics [en_US]
dc.type: Thesis [en_US]
dc.description.degree: M. Eng. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.identifier.oclc: 1192548055 [en_US]
dc.description.collection: M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science [en_US]
dspace.imported: 2020-09-15T21:56:03Z [en_US]
mit.thesis.degree: Master [en_US]
mit.thesis.department: EECS [en_US]
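
To make the abstract's contrast concrete, the following is a minimal sketch in Python; it is not code from the thesis. It shows epsilon-greedy exploration on a two-armed bandit whose optimal arm jumps abruptly mid-run (a discontinuous regime change), with a simple surprise-driven bonus that temporarily boosts exploration when a change is detected, standing in for the intrinsic-reward idea. The jump point, detector, and all parameters are illustrative assumptions.

    import random

    def true_means(t):
        # Regime 1: arm 0 is best; after the jump at t=500, arm 1 is best.
        return (0.9, 0.1) if t < 500 else (0.1, 0.9)

    def run(steps=1000, epsilon=0.1, alpha=0.1,
            window=30, threshold=0.3, boost_eps=0.5, boost_len=50):
        q = [0.0, 0.0]   # incremental value estimates for each arm
        recent = []      # recent absolute prediction errors, for change detection
        boost = 0        # steps of boosted exploration remaining
        total = 0.0
        for t in range(steps):
            # Epsilon-greedy: explore more aggressively right after a detected change.
            eps = boost_eps if boost > 0 else epsilon
            boost = max(0, boost - 1)
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if q[0] >= q[1] else 1
            # Bernoulli reward drawn from the current (possibly shifted) regime.
            r = 1.0 if random.random() < true_means(t)[a] else 0.0
            total += r
            err = r - q[a]
            q[a] += alpha * err  # constant step size tracks non-stationarity
            # Crude change detector: sustained large surprise triggers re-exploration,
            # acting as an intrinsic drive to revisit the action space.
            recent.append(abs(err))
            if len(recent) > window:
                recent.pop(0)
            if boost == 0 and len(recent) == window and sum(recent) / window > threshold:
                boost = boost_len
        return total

    if __name__ == "__main__":
        random.seed(0)
        print("total reward over 1000 steps:", run())

A purely random or fixed-epsilon explorer keeps acting on stale value estimates after the jump; the detection-triggered boost is one simple way to re-explore only when the regime appears to have changed, rather than paying a constant exploration cost throughout.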

