On measure concentration of random maximum a-posteriori perturbations
Author(s)
Orabona, Francesco; Hazan, Tamir; Sarwate, Anand D.; Jaakkola, Tommi S.
Terms of use
Publisher Policy: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
Abstract
The maximum a-posteriori (MAP) perturbation framework has emerged as a useful approach for inference and learning in high dimensional complex models. By maximizing a randomly perturbed potential function, MAP perturbations generate unbiased samples from the Gibbs distribution. Unfortunately, the computational cost of generating so many high-dimensional random variables can be prohibitive. More efficient algorithms use sequential sampling strategies based on the expected value of low dimensional MAP perturbations. This paper develops new measure concentration inequalities that bound the number of samples needed to estimate such expected values. Applying the general result to MAP perturbations can yield a more efficient algorithm to approximate sampling from the Gibbs distribution. The measure concentration result is of general interest and may be applicable to other areas involving Monte Carlo estimation of expectations.
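The abstract's statement that maximizing a randomly perturbed potential yields unbiased samples from the Gibbs distribution corresponds, in the full-dimensional case, to the Gumbel-max construction: add i.i.d. Gumbel noise to every configuration's potential and take the argmax. The sketch below is not taken from the paper; the toy potential, variable names, and sample count are illustrative assumptions. It enumerates a small model, draws repeated MAP perturbations, and compares the empirical distribution of the maximizers against the exact Gibbs distribution, in the spirit of the Monte Carlo estimates whose sample complexity the paper bounds.

import numpy as np

rng = np.random.default_rng(0)

# Toy potential over all 2**n configurations of n binary variables.
# (Illustrative only: the paper targets high-dimensional models where
# only a MAP solver is assumed, not full enumeration.)
n = 3
theta = rng.normal(size=2 ** n)  # hypothetical potential values theta(x)

# Exact Gibbs distribution p(x) proportional to exp(theta(x)), for reference.
gibbs = np.exp(theta - theta.max())
gibbs /= gibbs.sum()

def map_perturbation_sample(theta, rng):
    # Add i.i.d. Gumbel noise to every configuration's potential and
    # return the maximizing configuration (the Gumbel-max trick); its
    # law is exactly the Gibbs distribution.
    gumbel = rng.gumbel(size=theta.shape)
    return int(np.argmax(theta + gumbel))

# Monte Carlo estimate of the Gibbs distribution from repeated MAP perturbations.
num_samples = 20000
counts = np.bincount(
    [map_perturbation_sample(theta, rng) for _ in range(num_samples)],
    minlength=theta.size,
)
print("empirical:", np.round(counts / num_samples, 3))
print("exact    :", np.round(gibbs, 3))

The more efficient algorithms mentioned in the abstract replace this full joint perturbation with low-dimensional perturbations sampled sequentially; the paper's concentration inequalities bound how many such draws are needed to estimate the resulting expected values.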
Date issued
2014
Department
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Journal of Machine Learning Research
Publisher
Association for Computing Machinery (ACM)
Citation
Orabona, Francesco, Tamir Hazan, Anand D. Sarwate, and Tommi S. Jaakkola. "On measure concentration of random maximum a-posteriori perturbations." Journal of Machine Learning Research, Volume 32: Proceedings of The 31st International Conference on Machine Learning (2014).
Version: Final published version
ISSN
1532-4435
1533-7928