| dc.contributor.advisor | Paul F. Mende. | en_US |
| dc.contributor.author | Hao, Parker (Parker Ruochen) | en_US |
| dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
| dc.date.accessioned | 2020-09-15T21:56:03Z | |
| dc.date.available | 2020-09-15T21:56:03Z | |
| dc.date.copyright | 2020 | en_US |
| dc.date.issued | 2020 | en_US |
| dc.identifier.uri | https://hdl.handle.net/1721.1/127403 | |
| dc.description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020 | en_US |
| dc.description | Cataloged from the official PDF of thesis. | en_US |
| dc.description | Includes bibliographical references (pages 19-20). | en_US |
| dc.description.abstract | The exploration technique is key to reaching optimal results via reinforcement learning in a time-efficient manner. When reinforcement learning was first proposed, exploration was implemented as randomly choosing across the action space, resulting in a potentially exponential number of state-action pairs to explore. Over the years, more efficient exploration techniques have been proposed, allowing faster convergence and delivering better results across different application domains. With the growing interest in non-stationary environments, some of these exploration techniques have been studied in settings where the optimal state-action pair changes across different periods of the learning process. In the past, these techniques have performed well in control setups where the targets are non-stationary and continuously moving. However, they have not been extensively tested in environments involving jumps or non-continuous regime changes. This paper analyzes methods for achieving comparable exploration performance in such challenging environments and proposes new techniques that allow the agent to capture the regime changes of non-stationary environments as more complex states or intrinsic rewards. | en_US |
| dc.description.statementofresponsibility | by Parker Hao. | en_US |
| dc.format.extent | 20 pages | en_US |
| dc.language.iso | eng | en_US |
| dc.publisher | Massachusetts Institute of Technology | en_US |
| dc.rights | MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. | en_US |
| dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
| dc.subject | Electrical Engineering and Computer Science. | en_US |
| dc.title | Efficient exploration of reinforcement learning in non-stationary environments with more complex state dynamics | en_US |
| dc.type | Thesis | en_US |
| dc.description.degree | M. Eng. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
| dc.identifier.oclc | 1192548055 | en_US |
| dc.description.collection | M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science | en_US |
| dspace.imported | 2020-09-15T21:56:03Z | en_US |
| mit.thesis.degree | Master | en_US |
| mit.thesis.department | EECS | en_US |
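
The abstract above describes exploration implemented as random choice over the action space, and motivates environments with abrupt, non-continuous regime changes. As a minimal illustrative sketch (not the thesis's actual method; all names, parameters, and the two-regime bandit setup here are assumptions), the following Python snippet shows epsilon-greedy exploration on a non-stationary bandit whose reward means jump at a regime change:

```python
import random

# Minimal sketch (assumed, not from the thesis): epsilon-greedy
# exploration on a two-regime, non-stationary multi-armed bandit.
# Reward means jump abruptly at REGIME_CHANGE_STEP, mimicking the
# "non-continuous regime changes" the abstract refers to.

N_ARMS = 3
N_STEPS = 2000
REGIME_CHANGE_STEP = 1000   # abrupt, non-continuous jump
EPSILON = 0.1               # probability of exploring uniformly at random
STEP_SIZE = 0.1             # constant step size keeps tracking after the jump

# Hypothetical per-regime true mean rewards; the optimal arm switches at the jump.
MEANS = {0: [0.2, 0.5, 0.8], 1: [0.9, 0.1, 0.3]}

def pull(arm: int, step: int) -> float:
    """Sample a noisy reward from whichever regime is currently active."""
    regime = 0 if step < REGIME_CHANGE_STEP else 1
    return random.gauss(MEANS[regime][arm], 0.1)

q = [0.0] * N_ARMS          # incremental action-value estimates
total = 0.0
for t in range(N_STEPS):
    if random.random() < EPSILON:
        arm = random.randrange(N_ARMS)          # explore: uniform over actions
    else:
        arm = max(range(N_ARMS), key=lambda a: q[a])  # exploit: greedy choice
    r = pull(arm, t)
    q[arm] += STEP_SIZE * (r - q[arm])          # constant-step-size update
    total += r

print(f"average reward: {total / N_STEPS:.3f}, "
      f"final estimates: {[round(v, 2) for v in q]}")
```

The constant step size lets the value estimates re-adapt after the jump, whereas sample-average updates would effectively freeze on the pre-jump regime; the richer-state and intrinsic-reward techniques the abstract proposes aim to detect such regime changes faster than this plain epsilon-greedy baseline.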