Regulating exploration in multi-armed bandit problems with time patterns and dying arms
Massachusetts Institute of Technology. Operations Research Center.
Roy E. Welsch.
In retail, there are predictable yet dramatic time-dependent patterns in customer behavior, such as periodic changes in the number of visitors or increases in customers just before major holidays. The standard paradigm of multi-armed bandit analysis does not take these known patterns into account. This means that for applications in retail, where prices are fixed for periods of time, current bandit algorithms will not suffice. This work provides a framework and methods that take the time-dependent patterns into account. In the corrected methods, exploitation (greed) is regulated over time, so that more exploitation occurs during periods of high reward and more exploration occurs during periods of low reward. To understand why regret is reduced by the corrected methods, a set of bounds on the expected regret is presented, along with a discussion of why it pays to exploit during periods of high reward.

When the set of available options changes over time, mortal bandit algorithms have proven extremely useful in a number of settings, for example, providing news article recommendations or running automated online advertising campaigns. Previous work on this problem showed how to regulate exploration of arms that have recently appeared, but it does not adapt when arms are about to disappear. Since in most applications we can determine, either exactly or approximately, when arms will disappear, we can leverage this information to improve performance: we should not explore arms that are about to disappear. For this framework too, adapted algorithms and regret bounds are provided. The proposed methods perform well in experiments, and were inspired by a high-scoring entry in the Exploration and Exploitation 3 contest using data from Yahoo! Front Page. That entry made heavy use of time-series methods to regulate greed over time, which was substantially more effective than other contextual bandit methods.
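As an illustrative sketch only (not the thesis's actual algorithms), the two ideas in the abstract — regulating greed according to the current period's expected reward, and excluding soon-to-die arms from exploration — can be combined in a simple epsilon-greedy variant. All function and parameter names below are hypothetical, and the specific rule scaling exploration by the period's reward level is an assumption for illustration:

```python
import random

def choose_arm(estimates, period_reward_scale, lifetimes, t,
               base_eps=0.2, death_horizon=5):
    """Pick an arm index (illustrative sketch).

    estimates:           current mean-reward estimate per arm
    period_reward_scale: value in [0, 1] for the current period's
                         expected reward level (1 = peak period)
    lifetimes:           (known or estimated) time step at which each
                         arm disappears
    t:                   current time step
    """
    # Regulate greed over time: explore less when the period's
    # rewards are high, more when they are low.
    eps = base_eps * (1.0 - period_reward_scale)

    # Do not waste exploration on arms that are about to disappear.
    explorable = [i for i in range(len(estimates))
                  if lifetimes[i] - t > death_horizon]
    if explorable and random.random() < eps:
        return random.choice(explorable)

    # Exploit: best current estimate among arms still alive.
    alive = [i for i in range(len(estimates)) if lifetimes[i] > t]
    return max(alive, key=lambda i: estimates[i])
```

During a peak period (`period_reward_scale = 1`) the sketch never explores; during a low-reward period it explores with probability up to `base_eps`, and only among arms that will survive past the `death_horizon`.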
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 65-70).