| dc.contributor.author | Yang, Yibo | |
| dc.contributor.author | Blanchard, Antoine | |
| dc.contributor.author | Sapsis, Themistoklis | |
| dc.contributor.author | Perdikaris, Paris | |
| dc.date.accessioned | 2024-04-18T20:50:22Z | |
| dc.date.available | 2024-04-18T20:50:22Z | |
| dc.date.issued | 2022-04 | |
| dc.identifier.issn | 1364-5021 | |
| dc.identifier.issn | 1471-2946 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/154219 | |
| dc.description.abstract | We present a new type of acquisition function for online decision-making in multi-armed and contextual bandit problems with extreme payoffs. Specifically, we model the payoff function as a Gaussian process and formulate a novel type of upper confidence bound acquisition function that guides exploration towards the bandits that are deemed most relevant according to the variability of the observed rewards. This is achieved by computing a tractable likelihood ratio that quantifies the importance of the output relative to the inputs and essentially acts as an <jats:italic>attention mechanism</jats:italic> that promotes exploration of extreme rewards. Our formulation is supported by asymptotic zero-regret guarantees, and its performance is demonstrated across several synthetic benchmarks, as well as two realistic examples involving noisy sensor network data. Finally, we provide a JAX library for efficient bandit optimization using Gaussian processes. | en_US |
| dc.language.iso | en | |
| dc.publisher | The Royal Society | en_US |
| dc.relation.isversionof | 10.1098/rspa.2021.0781 | en_US |
| dc.rights | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. | en_US |
| dc.source | The Royal Society | en_US |
| dc.subject | General Physics and Astronomy | en_US |
| dc.subject | General Engineering | en_US |
| dc.subject | General Mathematics | en_US |
| dc.title | Output-weighted sampling for multi-armed bandits with extreme payoffs | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Yang Yibo, Blanchard Antoine, Sapsis Themistoklis and Perdikaris Paris. 2022. Output-weighted sampling for multi-armed bandits with extreme payoffs. Proc. R. Soc. A 478: 20210781. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Mechanical Engineering | |
| dc.relation.journal | Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences | en_US |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
| eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
| dc.date.updated | 2024-04-18T20:37:28Z | |
| dspace.orderedauthors | Yang, Y; Blanchard, A; Sapsis, T; Perdikaris, P | en_US |
| dspace.date.submission | 2024-04-18T20:37:30Z | |
| mit.journal.volume | 478 | en_US |
| mit.journal.issue | 2260 | en_US |
| mit.license | PUBLISHER_POLICY | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |