Linearly parameterized bandits
Author(s)
Tsitsiklis, John N.; Rusmevichientong, Paat
Terms of use
Open Access Policy: Creative Commons Attribution-Noncommercial-Share Alike
Abstract
We consider bandit problems involving a large (possibly infinite) collection of arms, in which the expected reward of each arm is a linear function of an r-dimensional random vector Z ∈ ℝ^r, where r ≥ 2. The objective is to minimize the cumulative regret and Bayes risk. When the set of arms corresponds to the unit sphere, we prove that the regret and Bayes risk are of order Θ(r√T), by establishing a lower bound for an arbitrary policy and by showing that a matching upper bound is achieved by a policy that alternates between exploration and exploitation phases. The phase-based policy is also shown to be effective when the set of arms satisfies a strong convexity condition. For a general set of arms, we describe a near-optimal policy whose regret and Bayes risk admit upper bounds of the form O(r√T log^{3/2} T).
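To make the phase-based idea concrete, the following is a minimal Python sketch, not the authors' exact algorithm: it alternates between an exploration phase (pulling an orthonormal basis of arms to estimate Z) and a growing exploitation phase (pulling the unit-sphere arm aligned with the current estimate). The phase-length schedule, the Gaussian noise model, and the names pull and noise_std are illustrative assumptions, not taken from the paper.

import numpy as np

# Sketch of a phase-based policy for a linear bandit on the unit sphere:
# the expected reward of arm x is x . Z for an unknown Z in R^r.
# Phase lengths and the noise model below are illustrative assumptions.

rng = np.random.default_rng(0)

r = 5                      # dimension of the parameter vector Z
T = 10_000                 # horizon
noise_std = 0.1            # assumed Gaussian reward noise (illustrative)

Z = rng.normal(size=r)     # unknown parameter, hidden from the policy

def pull(x):
    """Noisy reward of arm x (a unit vector)."""
    return x @ Z + noise_std * rng.normal()

basis = np.eye(r)          # exploration arms: standard basis vectors
sums = np.zeros(r)         # running reward sums per basis direction
counts = np.zeros(r)       # pull counts per basis direction

total_reward = 0.0
t = 0
phase = 0
while t < T:
    phase += 1
    # Exploration: one pull of each basis vector per phase.
    for i in range(r):
        if t >= T:
            break
        reward = pull(basis[i])
        sums[i] += reward
        counts[i] += 1
        total_reward += reward
        t += 1
    # Exploitation: play the greedy unit-sphere arm; the phase length
    # grows so exploration occupies a shrinking fraction of the horizon.
    z_hat = sums / np.maximum(counts, 1)
    greedy = z_hat / max(np.linalg.norm(z_hat), 1e-12)
    for _ in range(r * phase):
        if t >= T:
            break
        total_reward += pull(greedy)
        t += 1

best = np.linalg.norm(Z)   # optimal per-step reward on the unit sphere is ||Z||
print(f"average regret per step: {best - total_reward / T:.4f}")

Because the exploitation phases grow, exploration takes up a vanishing fraction of the horizon, which is the mechanism behind the √T scaling of the regret stated in the abstract; the paper's own phase schedule and estimator are what yield the sharper Θ(r√T) rate.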
Date issued
2010-01
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Mathematics of Operations Research
Publisher
INFORMS
Citation
Rusmevichientong, P., and J. N. Tsitsiklis. “Linearly Parameterized Bandits.” Mathematics of Operations Research 35.2 (2010): 395-411.
Version: Author's final manuscript
ISSN
0364-765X (print)
1526-5471 (online)