Learning in Auctions: Regret is Hard, Envy is Easy
Author(s)
Syrgkanis, Vasilis; Daskalakis, Constantinos
Download: Learning in auctions.pdf (476.3 KB)
Terms of use
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
An extensive body of recent work studies the welfare guarantees of simple and prevalent combinatorial auction formats, such as selling m items via simultaneous second price auctions (SiSPAs) [1], [2], [3]. These guarantees hold even when the auctions are repeatedly executed and the players use no-regret learning algorithms to choose their actions. Unfortunately, off-the-shelf no-regret learning algorithms for these auctions are computationally inefficient, as the number of actions available to the players is exponential in the number of items. We show that this obstacle is inevitable: there are no polynomial-time no-regret learning algorithms for SiSPAs, unless RP ⊇ NP, even when the bidders are unit-demand. Our lower bound raises the question of how good an outcome polynomially-bounded bidders may discover in such auctions. To answer this question, we propose a novel concept of learning in auctions, termed "no-envy learning." This notion is founded upon Walrasian equilibrium, and we show that it is both efficiently implementable and results in approximately optimal welfare, even when the bidders have valuations from the broad class of fractionally subadditive (XOS) valuations (assuming demand oracle access to the valuations) or coverage valuations (even without demand oracles). No-envy learning outcomes are a relaxation of no-regret learning outcomes, which maintain their approximate welfare optimality while endowing them with computational tractability. Our positive and negative results extend to several auction formats that have been studied in the literature via the smoothness paradigm. Our positive results for XOS valuations are enabled by a novel Follow-The-Perturbed-Leader algorithm for settings where the number of experts and states of nature are both infinite, and the payoff function of the learner is non-linear. We show that this algorithm has applications outside of auction settings, establishing significant gains in a recent application of no-regret learning in security games. Our efficient learning result for coverage valuations is based on a novel use of convex rounding schemes and a reduction to online convex optimization.
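As a rough illustration of the contrast drawn in the abstract, the two learning benchmarks can be sketched as follows. This is a schematic rendering under assumed notation (u_i for bidder i's utility, b_i^t for the bid at round t, p_j^t for the observed price of item j, v_i for the valuation, [m] for the item set), not the paper's exact formalism.

% Schematic benchmarks; notation assumed for illustration only.
% epsilon(T) denotes a term vanishing as the number of rounds T grows.
\begin{align*}
\text{no-regret:} \quad
  \frac{1}{T}\sum_{t=1}^{T} u_i\big(b_i^t, b_{-i}^t\big)
  \;\ge\; \max_{b}\ \frac{1}{T}\sum_{t=1}^{T} u_i\big(b, b_{-i}^t\big) \;-\; \epsilon(T),\\
\text{no-envy:} \quad
  \frac{1}{T}\sum_{t=1}^{T} u_i\big(b_i^t, b_{-i}^t\big)
  \;\ge\; \max_{S \subseteq [m]}\ \frac{1}{T}\sum_{t=1}^{T}
     \Big( v_i(S) - \sum_{j \in S} p_j^t \Big) \;-\; \epsilon(T).
\end{align*}

Roughly speaking, in a SiSPA one would take p_j^t to be the highest competing bid on item j at round t; a fixed bid high enough to always win a bundle S pays exactly those prices under the second-price rule, so the no-envy benchmark is dominated by the no-regret one. This is the sense in which no-envy relaxes no-regret while, per the abstract, still supporting approximate welfare guarantees.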
Date issued
2016-10
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Daskalakis, Constantinos, and Vasilis Syrgkanis. “Learning in Auctions: Regret Is Hard, Envy Is Easy.” 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS) (October 2016).
Version: Original manuscript
ISBN
978-1-5090-3933-3