Notice

This is not the latest version of this item. The latest version can be found at: https://dspace.mit.edu/handle/1721.1/143464.2

Simple item record

dc.contributor.author: Brustle, Johannes
dc.contributor.author: Cai, Yang
dc.contributor.author: Daskalakis, Constantinos
dc.date.accessioned: 2022-06-17T16:11:31Z
dc.date.available: 2022-06-17T16:11:31Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/1721.1/143464
dc.description.abstract: © 2020 ACM. We study the sample complexity of learning revenue-optimal multi-item auctions. We obtain the first set of positive results that go beyond the standard but unrealistic setting of item-independence. In particular, we consider settings where bidders' valuations are drawn from correlated distributions that can be captured by Markov Random Fields or Bayesian Networks, two of the most prominent graphical models. We establish parametrized sample complexity bounds for learning an up-to-ϵ optimal mechanism in both models, which scale polynomially in the size of the model, i.e., the number of items and bidders, and exponentially only in the natural complexity measure of the model, namely either the largest in-degree (for Bayesian Networks) or the size of the largest hyper-edge (for Markov Random Fields). We obtain our learnability results through a novel and modular framework that involves first proving a robustness theorem. We show that, given only "approximate distributions" for bidder valuations, we can learn a mechanism whose revenue is nearly optimal simultaneously for all "true distributions" that are close to the ones we were given in Prokhorov distance. Thus, to learn a good mechanism, it suffices to learn approximate distributions. When item values are independent, learning in Prokhorov distance is immediate, hence our framework directly implies the main result of Gonczarowski and Weinberg [36]. When item values are sampled from more general graphical models, we combine our robustness theorem with novel sample complexity results for learning Markov Random Fields or Bayesian Networks in Prokhorov distance, which may be of independent interest. Finally, in the single-item case, our robustness result can be strengthened to hold under an even weaker distribution distance, the Lévy distance.
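
For reference, the two distribution distances named in the abstract are standard; the following is a sketch of their textbook definitions, not taken from the record itself. The Prokhorov distance between probability measures P and Q on a metric space, writing A^ε for the ε-neighborhood of a Borel set A, is

\[ d_{\mathrm{P}}(P, Q) = \inf\bigl\{\, \epsilon > 0 : P(A) \le Q(A^{\epsilon}) + \epsilon \ \text{for all Borel sets } A \,\bigr\}, \]

and the Lévy distance between one-dimensional CDFs F and G (the weaker distance under which the single-item robustness result holds) is

\[ d_{\mathrm{L}}(F, G) = \inf\bigl\{\, \epsilon > 0 : F(x - \epsilon) - \epsilon \le G(x) \le F(x + \epsilon) + \epsilon \ \text{for all } x \in \mathbb{R} \,\bigr\}. \]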
dc.language.iso: en
dc.publisher: ACM
dc.relation.isversionof: 10.1145/3391403.3399541
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: arXiv
dc.title: Multi-Item Mechanisms without Item-Independence: Learnability via Robustness
dc.type: Article
dc.identifier.citation: Brustle, Johannes, Cai, Yang and Daskalakis, Constantinos. 2020. "Multi-Item Mechanisms without Item-Independence: Learnability via Robustness." EC 2020 - Proceedings of the 21st ACM Conference on Economics and Computation.
dc.relation.journal: EC 2020 - Proceedings of the 21st ACM Conference on Economics and Computation
dc.eprint.version: Original manuscript
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2022-06-17T16:01:06Z
dspace.orderedauthors: Brustle, J; Cai, Y; Daskalakis, C
dspace.date.submission: 2022-06-17T16:01:07Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work and Publication Information Needed

