Show simple item record

dc.contributor.author: Huggins, Jonathan H.
dc.contributor.author: Tenenbaum, Joshua B.
dc.date.accessioned: 2017-12-14T15:44:11Z
dc.date.available: 2017-12-14T15:44:11Z
dc.date.issued: 2015-07
dc.identifier.issn: 1532-4435
dc.identifier.issn: 1533-7928
dc.identifier.uri: http://hdl.handle.net/1721.1/112754
dc.description.abstract: Common statistical practice has shown that the full power of Bayesian methods is not realized until hierarchical priors are used, as these allow for greater "robustness" and the ability to "share statistical strength." Yet it is an ongoing challenge to provide a learning-theoretically sound formalism of such notions that: offers practical guidance concerning when and how best to utilize hierarchical models; provides insights into what makes for a good hierarchical prior; and, when the form of the prior has been chosen, can guide the choice of hyperparameter settings. We present a set of analytical tools for understanding hierarchical priors in both the online and batch learning settings. We provide regret bounds under log-loss, which show how certain hierarchical models compare, in retrospect, to the best single model in the model class. We also show how to convert a Bayesian log-loss regret bound into a Bayesian risk bound for any bounded loss, a result which may be of independent interest. Risk and regret bounds for Student's t and hierarchical Gaussian priors allow us to formalize the concepts of "robustness" and "sharing statistical strength." Priors for feature selection are investigated as well. Our results suggest that the learning-theoretic benefits of using hierarchical priors can often come at little cost on practical problems.
dc.publisher: Journal of Machine Learning Research/Microtome Publishing
dc.relation.isversionof: https://dl.acm.org/citation.cfm?id=3045272
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: arXiv
dc.title: Risk and regret of hierarchical Bayesian learners
dc.type: Article
dc.identifier.citation: Huggins, Jonathan H. and Tenenbaum, Joshua B. "Risk and regret of hierarchical Bayesian learners." Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), July 6-11, 2015, Lille, France, Journal of Machine Learning Research/Microtome Publishing, 2015 © 2015 The Author(s)
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.mitauthor: Huggins, Jonathan H.
dc.contributor.mitauthor: Tenenbaum, Joshua B.
dc.relation.journal: Proceedings of the 32nd International Conference on Machine Learning (ICML 2015)
dc.eprint.version: Original manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2017-12-08T17:31:21Z
dspace.orderedauthors: Huggins, Jonathan H.; Tenenbaum, Joshua B.
dspace.embargo.terms: N
dc.identifier.orcid: https://orcid.org/0000-0002-9256-6727
dc.identifier.orcid: https://orcid.org/0000-0002-1925-2035
mit.license: OPEN_ACCESS_POLICY

