Output-weighted sampling for multi-armed bandits with extreme payoffs
Author(s)
Yang, Yibo; Blanchard, Antoine; Sapsis, Themistoklis; Perdikaris, Paris
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
We present a new type of acquisition function for online decision-making in multi-armed and contextual bandit problems with extreme payoffs. Specifically, we model the payoff function as a Gaussian process and formulate a novel type of upper confidence bound acquisition function that guides exploration towards the bandits that are deemed most relevant according to the variability of the observed rewards. This is achieved by computing a tractable likelihood ratio that quantifies the importance of the output relative to the inputs and essentially acts as an attention mechanism that promotes exploration of extreme rewards. Our formulation is supported by asymptotic zero-regret guarantees, and its performance is demonstrated across several synthetic benchmarks, as well as two realistic examples involving noisy sensor network data. Finally, we provide a JAX library for efficient bandit optimization using Gaussian processes.
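The abstract's core idea — an upper confidence bound acquisition re-weighted by a likelihood ratio that up-weights rare (extreme) predicted payoffs — can be illustrated with a minimal sketch. This is not the authors' exact formulation or their JAX library; the kernel-density estimate of the output distribution, the uniform input density, and the `kappa` and `bandwidth` parameters are all illustrative assumptions.

```python
import numpy as np

def output_weighted_ucb(mu, sigma, kappa=2.0, bandwidth=0.5):
    """Illustrative output-weighted UCB acquisition (a sketch, not the
    paper's exact formulation).

    mu, sigma: Gaussian-process posterior mean and standard deviation
    at each candidate arm. The weight w ~ p(x) / p_y(mu(x)) up-weights
    arms whose predicted payoff is rare under the current distribution
    of outputs, steering exploration towards extreme rewards.
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    # Gaussian kernel density estimate of the predicted-output distribution.
    diffs = (mu[:, None] - mu[None, :]) / bandwidth
    p_y = np.exp(-0.5 * diffs**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))
    # Assuming a uniform input density, the likelihood ratio reduces to 1 / p_y.
    w = 1.0 / np.maximum(p_y, 1e-12)
    w = w / w.sum()  # normalize so the weights act like attention scores
    # Exploration bonus is scaled by the output weight, not just sigma.
    return mu + kappa * w * sigma

# Example: the arm whose predicted payoff is rarest (most extreme)
# receives the largest share of the exploration bonus.
mu = np.array([0.1, 0.1, 0.1, 0.1, 3.0])
sigma = np.ones_like(mu)
scores = output_weighted_ucb(mu, sigma)
best = int(np.argmax(scores))
```

With equal posterior uncertainty across arms, the density-based weight concentrates on the outlier arm, so the acquisition score separates it further from the rest than a plain UCB would.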
Date issued
2022-04
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Journal
Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences
Publisher
The Royal Society
Citation
Yang Yibo, Blanchard Antoine, Sapsis Themistoklis and Perdikaris Paris. 2022. Output-weighted sampling for multi-armed bandits with extreme payoffs. Proc. R. Soc. A 478: 20210781.
Version: Final published version
ISSN
1364-5021
1471-2946
Keywords
General Physics and Astronomy, General Engineering, General Mathematics