Learning efficient random maximum a-posteriori predictors with non-decomposable loss functions
Author(s)
Hazan, Tamir; Maji, Subhransu; Keshet, Joseph; Jaakkola, Tommi S.
Publisher Policy
Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
In this work we develop efficient methods for learning random MAP predictors for structured label problems. In particular, we construct posterior distributions over perturbations that can be adjusted via stochastic gradient methods. We show that any smooth posterior distribution suffices to define a smooth PAC-Bayesian risk bound suitable for gradient methods. In addition, we relate the posterior distributions to computational properties of the MAP predictors. We suggest multiplicative posteriors to learn super-modular potential functions that accompany specialized MAP predictors such as graph-cuts. We also describe label-augmented posterior models that can use efficient MAP approximations, such as those arising from linear program relaxations.
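The core idea behind a random MAP predictor is perturb-and-MAP: add random noise to the potential function and return the maximizing label. The sketch below is a minimal illustration of this idea for an unstructured label set, not the paper's learned posterior model; the function name and the use of i.i.d. Gumbel noise (under which the perturbed argmax samples exactly from the Gibbs distribution) are assumptions for the example.

```python
import numpy as np

def random_map_predict(potentials, rng):
    """Sample a label by perturbing each potential with i.i.d. Gumbel noise
    and returning the argmax of the perturbed score (illustrative sketch)."""
    gumbel = rng.gumbel(size=potentials.shape)
    return int(np.argmax(potentials + gumbel))

rng = np.random.default_rng(0)
theta = np.array([2.0, 1.0, 0.0])  # unnormalized log-potentials over 3 labels

# Repeated perturb-and-MAP draws; their empirical frequencies should match
# the Gibbs distribution exp(theta) / sum(exp(theta)).
samples = [random_map_predict(theta, rng) for _ in range(10000)]
freq = np.bincount(samples, minlength=3) / len(samples)
gibbs = np.exp(theta) / np.exp(theta).sum()
print(freq, gibbs)
```

In the structured settings the paper targets, the argmax over all labelings is intractable to enumerate, so the same perturbation idea is paired with efficient MAP solvers such as graph-cuts or LP relaxations.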
Date issued
2013
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Advances in Neural Information Processing Systems (NIPS)
Publisher
Neural Information Processing Systems
Citation
Hazan, Tamir, Subhransu Maji, Joseph Keshet, and Tommi Jaakkola. "Learning efficient random maximum a-posteriori predictors with non-decomposable loss functions." Advances in Neural Information Processing Systems (NIPS 2013).
Version: Final published version
ISSN
1049-5258