
dc.contributor.author: Rusmevichientong, Paat
dc.contributor.author: Mersereau, Adam J.
dc.contributor.author: Tsitsiklis, John N.
dc.date.accessioned: 2010-05-19T19:34:52Z
dc.date.available: 2010-05-19T19:34:52Z
dc.date.issued: 2009-12
dc.date.submitted: 2009-03
dc.identifier.issn: 0018-9286
dc.identifier.uri: http://hdl.handle.net/1721.1/54813
dc.description.abstract: We consider a multiarmed bandit problem where the expected reward of each arm is a linear function of an unknown scalar with a prior distribution. The objective is to choose a sequence of arms that maximizes the expected total (or discounted total) reward. We demonstrate the effectiveness of a greedy policy that takes advantage of the known statistical correlation structure among the arms. In the infinite horizon discounted reward setting, we show that the greedy and optimal policies eventually coincide, and both settle on the best arm. This is in contrast with the Incomplete Learning Theorem for the case of independent arms. In the total reward setting, we show that the cumulative Bayes risk after T periods under the greedy policy is at most O(log T), which is smaller than the lower bound of Omega(log² T) established by Lai for a general, but different, class of bandit problems. We also establish the tightness of our bounds. Theoretical and numerical results show that the performance of our policy scales independently of the number of arms. [en]
dc.description.sponsorship: National Science Foundation (Grants DMS-0732196, CMMI-0746844, and ECCS-0701623) [en]
dc.description.sponsorship: Kenan-Flagler Business School [en]
dc.description.sponsorship: University of Chicago. Graduate School of Business [en]
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers [en]
dc.relation.isversionof: http://dx.doi.org/10.1109/tac.2009.2031725 [en]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en]
dc.source: IEEE [en]
dc.subject: Markov decision process (MDP) [en]
dc.title: A Structured Multiarmed Bandit Problem and the Greedy Policy [en]
dc.type: Article [en]
dc.identifier.citation: Mersereau, A.J., P. Rusmevichientong, and J.N. Tsitsiklis. “A Structured Multiarmed Bandit Problem and the Greedy Policy.” IEEE Transactions on Automatic Control 54.12 (2009): 2787-2802. © 2009 Institute of Electrical and Electronics Engineers. [en]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems [en_US]
dc.contributor.approver: Tsitsiklis, John N.
dc.contributor.mitauthor: Tsitsiklis, John N.
dc.relation.journal: IEEE Transactions on Automatic Control [en]
dc.eprint.version: Final published version [en]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en]
dspace.orderedauthors: Mersereau, A.J.; Rusmevichientong, P.; Tsitsiklis, J.N. [en]
dc.identifier.orcid: https://orcid.org/0000-0003-2658-8239
mit.license: PUBLISHER_POLICY [en]
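
The abstract above describes an algorithm that is easy to prototype: maintain a posterior over the unknown scalar z and, at each period, pull the arm whose posterior-mean expected reward is highest. Below is a minimal Python sketch of that greedy policy under assumptions not fixed by the record itself: a Gaussian prior on z, Gaussian reward noise with known variance, and made-up arm coefficients (expected reward of arm i is a[i] + b[i] * z). The paper allows more general priors, so treat this as an illustration of the idea, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical problem instance: expected reward of arm i is a[i] + b[i] * z,
    # where z is an unknown scalar. Coefficients are made up for illustration.
    a = np.array([0.1, 0.0, -0.2, 0.3])
    b = np.array([1.0, -0.5, 2.0, 0.1])
    z_true = 0.7        # unknown to the policy
    sigma2 = 1.0        # reward noise variance (assumed known, Gaussian)

    # Gaussian prior z ~ N(mu, s2); conjugate updates keep the posterior Gaussian.
    mu, s2 = 0.0, 1.0

    T = 1000
    total_reward = 0.0
    for t in range(T):
        # Greedy step: pull the arm with the highest posterior-mean expected reward.
        i = int(np.argmax(a + b * mu))
        r = a[i] + b[i] * z_true + rng.normal(0.0, np.sqrt(sigma2))
        total_reward += r
        # Posterior update: r - a[i] = b[i] * z + noise is a linear-Gaussian
        # observation of z, so the normal-normal conjugate update applies.
        prec = 1.0 / s2 + b[i] ** 2 / sigma2
        mu = (mu / s2 + b[i] * (r - a[i]) / sigma2) / prec
        s2 = 1.0 / prec

    best = np.max(a + b * z_true)
    print(f"posterior mean of z: {mu:.3f} (true {z_true})")
    print(f"average reward: {total_reward / T:.3f} vs best arm {best:.3f}")

Because every pull updates the single shared posterior on z, information gathered from one arm sharpens the reward estimates of all arms at once; this shared structure is what allows the policy's performance to scale independently of the number of arms, as the abstract notes.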

