Understanding Bonus-Based Exploration in Reinforcement Learning
Author(s)
Chen, Eric
Advisor
Agrawal, Pulkit
Abstract
Intrinsic reward-based exploration methods have successfully solved challenging sparse-reward tasks such as Montezuma's Revenge. However, these methods have not been widely adopted in reinforcement learning because their performance gains are inconsistent across tasks. To better understand the underlying cause of this variability, we evaluate three major families of exploration methods (prediction error, state visitation, and model uncertainty) on a suite of custom environments and video games. Our custom environments allow us to study the effect of different environmental features in isolation. Our results reveal that exploration methods can be biased by spurious features such as color and that they prioritize different dynamics in specific environments. In particular, we find that prediction-based methods are superior at solving tasks involving controllable dynamics. Furthermore, we find that partial observability can hinder exploration by setting up "curiosity traps" that agents can fall into. Finally, we investigate how various implementation details such as reward design and generation affect an agent's overall performance.
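The abstract refers to intrinsic reward bonuses such as prediction error, state visitation counts, and model uncertainty. As a rough illustration of how such a bonus enters the reward signal, the sketch below shows a prediction-error bonus in the spirit of curiosity-based methods: a learned forward model predicts the next observation, and its prediction error is added to the extrinsic reward. The names ForwardModel, intrinsic_bonus, and the beta coefficient are hypothetical and for illustration only; this is not the thesis's implementation.

```python
# Minimal sketch of a prediction-error exploration bonus (illustrative only).
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts the next observation from the current observation and action."""
    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def intrinsic_bonus(model, obs, act, next_obs):
    """Per-transition prediction error of the forward model, used as an exploration bonus."""
    with torch.no_grad():
        pred = model(obs, act)
    return ((pred - next_obs) ** 2).mean(dim=-1)

def total_reward(extrinsic, bonus, beta=0.01):
    """Reward seen by the agent: extrinsic reward plus a scaled intrinsic bonus."""
    return extrinsic + beta * bonus
```

In practice, the forward model is trained online on the agent's own transitions, so familiar transitions yield low error (small bonus) while novel or hard-to-predict transitions yield high error (large bonus); the scaling coefficient beta trades off exploration against the extrinsic task reward.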
Date issued
2021-09
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology