DSpace@MIT


Output-weighted sampling for multi-armed bandits with extreme payoffs

Author(s)
Yang, Yibo; Blanchard, Antoine; Sapsis, Themistoklis; Perdikaris, Paris
Download: yang-et-al-2022-output-weighted-sampling-for-multi-armed-bandits-with-extreme-payoffs.pdf (1.666 MB)
Publisher Policy

Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.

Abstract
We present a new type of acquisition function for online decision-making in multi-armed and contextual bandit problems with extreme payoffs. Specifically, we model the payoff function as a Gaussian process and formulate a novel type of upper confidence bound acquisition function that guides exploration towards the bandits that are deemed most relevant according to the variability of the observed rewards. This is achieved by computing a tractable likelihood ratio that quantifies the importance of the output relative to the inputs and essentially acts as an attention mechanism that promotes exploration of extreme rewards. Our formulation is supported by asymptotic zero-regret guarantees, and its performance is demonstrated across several synthetic benchmarks, as well as two realistic examples involving noisy sensor network data. Finally, we provide a JAX library for efficient bandit optimization using Gaussian processes.
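The core idea in the abstract — a GP surrogate whose UCB exploration bonus is reweighted by a likelihood ratio favoring rare, extreme predicted payoffs — can be sketched as follows. This is a minimal NumPy illustration, not the authors' JAX library: the squared-exponential kernel, the KDE estimate of the output density, the uniform-input assumption (so the ratio reduces to the reciprocal of the output density), and the multiplicative way the ratio enters the bonus are all illustrative assumptions, and the paper's exact acquisition function may differ.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.15):
    # Squared-exponential kernel with unit prior variance.
    return np.exp(-0.5 * (X1[:, None] - X2[None, :])**2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-4):
    # Standard Cholesky-based GP regression posterior (mean and std).
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    L = np.linalg.cholesky(K)
    K_s = rbf_kernel(X_train, X_test)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = 1.0 - np.sum(v**2, axis=0)  # prior variance is 1 for this kernel
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def likelihood_ratio(mu, bandwidth=0.1):
    # w(x) = p_x(x) / p_mu(mu(x)) with inputs uniform on [0, 1], so
    # p_x(x) = 1 and w is the reciprocal of a Gaussian KDE over the
    # posterior-mean values: rare (extreme) predicted payoffs get
    # large weight, acting as the "attention mechanism" of the abstract.
    z = (mu[:, None] - mu[None, :]) / bandwidth
    p_mu = np.exp(-0.5 * z**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))
    return 1.0 / p_mu

def output_weighted_ucb(mu, sigma, w, kappa=1.0):
    # UCB whose exploration bonus is reweighted by the likelihood ratio.
    return mu + kappa * w * sigma

# Toy payoff with a narrow extreme spike near x = 0.8.
f = lambda x: np.sin(4 * x) + 3.0 * np.exp(-((x - 0.8) / 0.05)**2)
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, 8)
y_train = f(X_train)
X_test = np.linspace(0, 1, 200)

mu, sigma = gp_posterior(X_train, y_train, X_test)
w = likelihood_ratio(mu)
acq = output_weighted_ucb(mu, sigma, w)
x_next = X_test[np.argmax(acq)]  # next arm/query to evaluate
```

Compared with a plain UCB (`w = 1`), the weighted bonus concentrates exploration where the predicted payoff is unusual relative to the bulk of the output distribution, which is the regime the paper targets.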
Date issued
2022-04
URI
https://hdl.handle.net/1721.1/154219
Department
Massachusetts Institute of Technology. Department of Mechanical Engineering
Journal
Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences
Publisher
The Royal Society
Citation
Yang, Yibo, Antoine Blanchard, Themistoklis Sapsis, and Paris Perdikaris. 2022. "Output-weighted sampling for multi-armed bandits with extreme payoffs." Proc. R. Soc. A 478: 20210781.
Version: Final published version
ISSN
1364-5021
1471-2946
Keywords
General Physics and Astronomy, General Engineering, General Mathematics

Collections
  • MIT Open Access Articles
