Approximate Decentralized Bayesian Inference
Author(s)
Campbell, Trevor David; How, Jonathan P.
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike (Open Access Policy)
Abstract
This paper presents an approximate method for performing Bayesian inference in models with conditional independence over a decentralized network of learning agents. The method first employs variational inference on each individual learning agent to generate a local approximate posterior; the agents then transmit their local posteriors to other agents in the network; and finally each agent combines its set of received local posteriors. The key insight in this work is that, for many Bayesian models, approximate inference schemes destroy symmetry and dependencies in the model that are crucial to the correct application of Bayes' rule when combining the local posteriors. The proposed method addresses this issue by including an additional optimization step in the combination procedure that accounts for these broken dependencies. Experiments on synthetic and real data demonstrate that the decentralized method provides advantages in computational performance and predictive test likelihood over previous batch and distributed methods.
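As background for the combination step the abstract describes: under conditional independence, exact Bayes' rule combines local posteriors by multiplying them and dividing out the prior K-1 times, which for conjugate exponential-family models reduces to summing natural parameters. The sketch below (not the paper's code; variable names and the Gaussian-mean setting are illustrative assumptions) shows this exact rule for Gaussian local posteriors, the case where no symmetry is broken. The paper's contribution concerns correcting this rule when approximate (e.g. variational) local posteriors break the dependencies it relies on.

```python
# Illustrative sketch: exact decentralized combination of Gaussian local
# posteriors over a shared mean, assuming each agent's data is
# conditionally independent given the parameter.
# Natural parameters: precision lam and precision-weighted mean h = lam * mu.

def local_posterior(prior, data, noise_prec=1.0):
    """Conjugate Gaussian update on one agent's private data."""
    lam0, h0 = prior
    lam = lam0 + noise_prec * len(data)
    h = h0 + noise_prec * sum(data)
    return lam, h

def combine(prior, local_list):
    """Bayes' rule under conditional independence:
    p(theta | all data) ∝ p(theta)^(1-K) * prod_k p(theta | data_k),
    i.e. sum the local natural parameters and subtract the prior K-1 times."""
    lam0, h0 = prior
    K = len(local_list)
    lam = sum(l for l, _ in local_list) - (K - 1) * lam0
    h = sum(e for _, e in local_list) - (K - 1) * h0
    return lam, h

prior = (1.0, 0.0)                     # N(0, 1) prior on the unknown mean
agents = [[2.0], [4.0]]                # each agent's private observations
local_list = [local_posterior(prior, d) for d in agents]
lam, h = combine(prior, local_list)
print(lam, h / lam)                    # 3.0 2.0 -- matches the batch posterior
```

Running the full batch update on the pooled data [2.0, 4.0] gives the same posterior (precision 3.0, mean 2.0), confirming that the combination is exact here; with variational local posteriors on, say, mixture models, label-permutation symmetry breaks this equivalence, which is the problem the paper's extra optimization step addresses.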
Description
URL to accepted papers on conference site
Date issued
2014-07
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Journal
Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, UAI 2014
Publisher
Association for Uncertainty in Artificial Intelligence Press
Citation
Campbell, Trevor and Jonathan How. "Approximate Decentralized Bayesian Inference." 30th Conference on Uncertainty in Artificial Intelligence, UAI 2014, Quebec City, Quebec, Canada, July 23-27, 2014, pp. 1-10.
Version: Author's final manuscript
Other identifiers
ID: 182