Efficient exploration of reinforcement learning in non-stationary environments with more complex state dynamics
Author(s)
Hao, Parker (Parker Ruochen)
Download: 1192548055-MIT.pdf (756.2 KB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Paul F. Mende.
Abstract
Exploration techniques are key to reaching optimal results via reinforcement learning in a time-efficient manner. When reinforcement learning was first proposed, exploration was implemented as random selection across the action space, resulting in a potentially exponential number of state-action pairs to explore. Over the years, more efficient exploration techniques have been proposed, allowing faster convergence and delivering better results across different application domains. With growing interest in non-stationary environments, some of these exploration techniques have been studied in settings where the optimal state-action pair changes across different periods of the learning process. In the past, these techniques have performed well in control setups where the targets are non-stationary and continuously moving. However, they have not been extensively tested in environments involving jumps or non-continuous regime changes. This thesis analyzes methods for achieving comparable exploration performance in such challenging environments and proposes new techniques that let the agent capture the regime changes of non-stationary environments as more complex states or intrinsic rewards.
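The abstract mentions two ingredients that a rough sketch can make concrete: random (epsilon-greedy) exploration over the action space, and an intrinsic reward that keeps the agent exploring after a regime change. The toy Python snippet below is a minimal illustration of that idea, not the author's actual method; the count-based bonus, the tabular Q-learning update, and the placeholder environment interface (reset()/step()) are all assumptions made for the example.

```python
import random
from collections import defaultdict

# Hypothetical sketch: tabular Q-learning with epsilon-greedy exploration,
# where a count-based intrinsic bonus is added to the extrinsic reward so the
# agent keeps exploring after an (unobserved) regime change alters the dynamics.

ALPHA, GAMMA, EPSILON, BONUS_SCALE = 0.1, 0.95, 0.1, 0.5

def choose_action(q, state, actions, epsilon=EPSILON):
    """Epsilon-greedy: random action with probability epsilon, else greedy."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def update(q, visit_counts, state, action, extrinsic_reward, next_state, actions):
    """One Q-learning step with an intrinsic novelty bonus of 1/sqrt(count)."""
    visit_counts[(state, action)] += 1
    bonus = BONUS_SCALE / visit_counts[(state, action)] ** 0.5
    reward = extrinsic_reward + bonus
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

# Usage with a placeholder environment exposing reset() and step(action):
# q, counts = defaultdict(float), defaultdict(int)
# state = env.reset()
# for _ in range(10_000):
#     action = choose_action(q, state, env.actions)
#     next_state, extrinsic_reward, done = env.step(action)
#     update(q, counts, state, action, extrinsic_reward, next_state, env.actions)
#     state = env.reset() if done else next_state
```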
Description
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020. Cataloged from the official PDF of thesis. Includes bibliographical references (pages 19-20).
Date issued
2020
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.