Show simple item record

dc.contributor.author: Campbell, Trevor David
dc.contributor.author: Broderick, Tamara A
dc.date.accessioned: 2021-01-12T20:13:52Z
dc.date.available: 2021-01-12T20:13:52Z
dc.date.issued: 2019-02
dc.date.submitted: 2019-02
dc.identifier.issn: 1533-7928
dc.identifier.issn: 1532-4435
dc.identifier.uri: https://hdl.handle.net/1721.1/129387
dc.description.abstract: The automation of posterior inference in Bayesian data analysis has enabled experts and nonexperts alike to use more sophisticated models, engage in faster exploratory modeling and analysis, and ensure experimental reproducibility. However, standard automated posterior inference algorithms are not tractable at the scale of massive modern data sets, and modifications to make them so are typically model-specific, require expert tuning, and can break theoretical guarantees on inferential quality. Building on the Bayesian coresets framework, this work instead takes advantage of data redundancy to shrink the data set itself as a preprocessing step, providing fully-automated, scalable Bayesian inference with theoretical guarantees. We begin with an intuitive reformulation of Bayesian coreset construction as sparse vector sum approximation, and demonstrate that its automation and performance-based shortcomings arise from the use of the supremum norm. To address these shortcomings we develop Hilbert coresets, i.e., Bayesian coresets constructed under a norm induced by an inner-product on the log-likelihood function space. We propose two Hilbert coreset construction algorithms, one based on importance sampling and one based on the Frank-Wolfe algorithm, along with theoretical guarantees on approximation quality as a function of coreset size. Since the exact computation of the proposed inner-products is model-specific, we automate the construction with a random finite-dimensional projection of the log-likelihood functions. The resulting automated coreset construction algorithm is simple to implement, and experiments on a variety of models with real and synthetic data sets show that it provides high-quality posterior approximations and a significant reduction in the computational cost of inference.
dc.language.iso: en
dc.publisher: MIT Press
dc.relation.isversionof: https://jmlr.org/papers/v20/17-613.html
dc.rights: Creative Commons Attribution 4.0 International license
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Journal of Machine Learning Research
dc.title: Automated Scalable Bayesian Inference via Hilbert Coresets
dc.type: Article
dc.identifier.citation: Campbell, Trevor and Tamara Broderick. “Automated Scalable Bayesian Inference via Hilbert Coresets.” Journal of Machine Learning Research 20 (February 2019): 1-38. © 2019 The Author(s)
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.relation.journal: Journal of Machine Learning Research
dc.eprint.version: Final published version
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2020-12-03T18:28:28Z
dspace.orderedauthors: Campbell, T; Broderick, T
dspace.date.submission: 2020-12-03T18:28:34Z
mit.journal.volume: 20
mit.license: PUBLISHER_CC
mit.metadata.status: Complete
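The construction summarized in the abstract (a random finite-dimensional projection of the log-likelihood functions followed by importance-sampling coreset construction) can be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: the function names, the standard-normal draws used for the projection, and the Gaussian-location example model are all assumptions made here for demonstration.

```python
import numpy as np

def hilbert_coreset_importance(loglik, data, M, J=100, rng=None):
    """Importance-sampling Hilbert coreset construction (illustrative sketch).

    loglik(x, thetas): returns the log-likelihood of datum x at each of the
    J parameter samples in `thetas` (shape (J, d)).
    M: target number of coreset samples; J: projection dimension.
    """
    rng = np.random.default_rng(rng)
    N, d = data.shape
    # Random finite-dimensional projection: evaluate each data point's
    # log-likelihood at J parameter samples. A crude N(0, I) proposal is
    # an assumption here; any rough posterior approximation could be used.
    thetas = rng.standard_normal((J, d))
    V = np.array([loglik(x, thetas) for x in data]) / np.sqrt(J)  # (N, J)
    # Each row of V is a finite-dimensional surrogate for one
    # log-likelihood function; its Euclidean norm stands in for the
    # Hilbert norm of that function.
    sigma = np.linalg.norm(V, axis=1)
    sigma_total = sigma.sum()
    # Sample M data points with probability proportional to their norms,
    # then reweight so the weighted sum is an unbiased estimate.
    counts = rng.multinomial(M, sigma / sigma_total)
    weights = counts * sigma_total / (M * sigma)
    idx = np.nonzero(counts)[0]
    return idx, weights[idx]

# Example: Gaussian location model, log N(x | theta, I).
def ll(x, thetas):
    return -0.5 * np.sum((thetas - x) ** 2, axis=1)

data = np.random.default_rng(0).standard_normal((1000, 2))
idx, w = hilbert_coreset_importance(ll, data, M=50)
```

The returned `(idx, w)` pair defines a weighted subset of at most M data points; downstream inference would then run on this coreset in place of the full data set.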

