Local differential privacy in decentralized optimization
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Privacy concerns over sensitive data are receiving increasing attention. In this thesis, we study local differential privacy (LDP) in interactive decentralized optimization. Compared to central differential privacy (DP), where a trusted curator maintains the dataset, LDP is a stronger notion that has nonetheless seen industrial adoption: each individual's data is privatized before it is shared. Consequently, building efficient statistical analyzers is more challenging in the LDP setting. Toward practical decentralized optimization under LDP, we extend LDP to a more comprehensive notion that provides both worst-case and average-case privacy guarantees. Accordingly, we propose two approaches to sharpen the utility-privacy tradeoff, one for each case. First, by cryptographically incorporating merely linear secret sharing, we show that the privacy guarantee can be improved by a factor of √N' when N' of the N agents are semi-honest. Second, taking the Alternating Direction Method of Multipliers (ADMM) and decentralized (stochastic) gradient descent (D(S)GD) as two concrete examples, we propose a framework for first-order optimization with random local aggregators. We prove that such local randomization preserves the utility guarantee while amplifying average LDP by a constant factor, empirically around 30%. Thorough experiments support our theory.
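To illustrate the first idea, here is a minimal sketch (not the thesis implementation; all parameter names and values are hypothetical) of why cooperation among N' semi-honest agents can improve the privacy-utility tradeoff by a factor of √N': if each agent adds Gaussian noise of scale sigma to its local update before sharing, and the group releases only the aggregate, the effective noise on the aggregate shrinks by √N', so each agent can afford proportionally more noise for the same utility.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(grad, sigma):
    """Gaussian-mechanism-style local randomizer (illustrative only)."""
    return grad + rng.normal(0.0, sigma, size=grad.shape)

N_prime = 16           # hypothetical number of cooperating semi-honest agents
sigma = 1.0            # hypothetical per-agent noise scale
true_grad = np.zeros(10_000)  # toy "gradient" so residual noise is visible

# Each agent shares only a privatized update; the group releases the mean
# (in the thesis setting this release is protected by linear secret sharing,
# which is not modeled here).
noisy_updates = [privatize(true_grad, sigma) for _ in range(N_prime)]
aggregate = np.mean(noisy_updates, axis=0)

# Empirical noise level of the aggregate: close to sigma / sqrt(N_prime).
print(aggregate.std())
```

The printed standard deviation is close to sigma / √N' = 0.25 here, while each individual share still carries full per-agent noise of scale sigma.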
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 79-83).