Private and Provably Efficient Federated Decision-Making
Author(s)
Dubey, Abhimanyu
Advisor
Pentland, Alex P.
Abstract
In this thesis, we study sequential multi-armed bandit and reinforcement learning in the federated setting, where a group of agents collaborates to improve their collective reward by communicating over a network.
We first study the multi-armed bandit problem in a decentralized environment, focusing on federated bandit learning under several real-world constraints, such as differentially private communication, heavy-tailed perturbations, and the presence of adversarial corruptions. For each of these constraints, we present algorithms that achieve near-optimal regret guarantees while maintaining competitive experimental performance on real-world networks. We also characterize the asymptotic and minimax rates for these problems via network-dependent lower bounds. These algorithms provide substantial improvements over existing work across a variety of real-world and synthetic network topologies.
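The thesis's specific private federated bandit algorithms are not reproduced in this abstract. As a rough illustration of the setting, the sketch below shows a standard single-agent UCB index computed from a Laplace-noised reward sum, a common differentially private construction; the privacy parameter epsilon, the placement of the noise, and the omission of any network averaging step are assumptions made only for illustration.

```python
# Illustrative sketch only: a single agent's UCB index computed from a
# Laplace-noised reward sum. This is NOT the thesis's algorithm; epsilon,
# the noise placement, and the missing network-averaging step are
# hypothetical simplifications.
import math
import random


def private_ucb_index(reward_sum, pulls, total_rounds, epsilon):
    """UCB index for one arm, with Laplace(1/epsilon) noise on the reward sum."""
    # Difference of two Exp(epsilon) variables is Laplace with scale 1/epsilon.
    noisy_sum = reward_sum + random.expovariate(epsilon) - random.expovariate(epsilon)
    mean = noisy_sum / pulls
    bonus = math.sqrt(2.0 * math.log(max(total_rounds, 2)) / pulls)
    return mean + bonus


def choose_arm(reward_sums, pull_counts, t, epsilon=1.0):
    """Pick the arm with the largest private UCB index."""
    indices = [
        private_ucb_index(s, max(n, 1), t, epsilon)
        for s, n in zip(reward_sums, pull_counts)
    ]
    return max(range(len(indices)), key=lambda a: indices[a])
```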
Next, we study the contextual bandit problem in a federated learning setting with differential privacy. In this setting, we propose algorithms that match the optimal rate (up to poly-logarithmic terms) with only a logarithmic communication budget. We extend our approach to heterogeneous federated learning via kernel-based methods, and also provide a no-regret algorithm for private Gaussian process bandit optimization.
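As background for the contextual-bandit results, the following is a generic LinUCB-style selection rule together with a hypothetical log-spaced synchronization schedule. It is not the thesis's private algorithm: the exploration parameter alpha, the power-of-two sync rule, and the absence of any privacy noise are illustrative assumptions.

```python
# Illustrative sketch of a LinUCB-style rule in the contextual setting.
# The thesis's private, communication-efficient algorithms are more involved;
# here the Gram matrix A and vector b would only be synchronized across agents
# at log-spaced rounds (the schedule below is a hypothetical choice), and no
# privacy noise is added.
import numpy as np


def linucb_choose(A, b, contexts, alpha=1.0):
    """Pick the context (arm feature vector) with the largest UCB score."""
    A_inv = np.linalg.inv(A)
    theta = A_inv @ b  # ridge-regression estimate of the reward parameter
    scores = [
        float(x @ theta + alpha * np.sqrt(x @ A_inv @ x)) for x in contexts
    ]
    return int(np.argmax(scores))


def should_sync(t):
    """Hypothetical log-spaced communication schedule: sync at powers of two."""
    return t > 0 and (t & (t - 1)) == 0
```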
Finally, we study reinforcement learning in both the multi-agent and federated settings with linear function approximation. We propose variants of least-squares value iteration that are provably no-regret with only a constant communication budget.
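For readers unfamiliar with least-squares value iteration, the snippet below sketches the generic regularized least-squares update at the heart of LSVI with linear function approximation. The federated communication protocol is omitted, and the regularizer lam and the feature/target names are assumptions for illustration, not the thesis's construction.

```python
# Illustrative single step of least-squares value iteration (LSVI) with
# linear function approximation: a generic textbook form, not the thesis's
# federated variant. The regularizer lam and feature matrix are assumptions.
import numpy as np


def lsvi_step(features, targets, lam=1.0):
    """Solve w = argmin ||Phi w - y||^2 + lam * ||w||^2 in closed form."""
    phi = np.asarray(features)  # shape (n_samples, d): state-action features
    y = np.asarray(targets)     # regression targets, e.g. r + max_a Q(s', a)
    gram = phi.T @ phi + lam * np.eye(phi.shape[1])
    return np.linalg.solve(gram, phi.T @ y)
```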
We believe that the future of machine learning entails large-scale cooperation between various data-driven entities, and this work will be beneficial to the development of reliable, scalable, and secure decision-making systems.
Date issued
2022-02
Department
Program in Media Arts and Sciences (Massachusetts Institute of Technology)
Publisher
Massachusetts Institute of Technology