DSpace@MIT

Local differential privacy in decentralized optimization

Author(s)
Xiao, Hanshen.
Download
1142812131-MIT.pdf (3.706 MB)
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Srini Devadas.
Terms of use
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Privacy concerns with sensitive data are receiving increasing attention. In this thesis, we study local differential privacy (LDP) in interactive decentralized optimization. Compared to central differential privacy (DP), where a centralized curator maintains the dataset, LDP is a stronger notion that has nonetheless seen industrial adoption: it allows an individual's data to be privatized before it is shared. As a consequence, building efficient statistical analyzers in the LDP setting is more challenging. Towards practical decentralized optimization under LDP, we extend LDP to a more comprehensive notion that provides both worst-case and average-case privacy guarantees. Accordingly, we propose two approaches to sharpen the utility-privacy tradeoff, one for each case. First, by cryptographically incorporating merely linear secret sharing, we show that the privacy guarantee can be improved by a factor of √N', where N' of the N agents are semi-honest. Second, taking the Alternating Direction Method of Multipliers (ADMM) and decentralized (stochastic) gradient descent (D(S)GD) as two concrete examples, we propose a framework of first-order optimization with random local aggregators. We prove that such local randomization leads to the same utility guarantee while amplifying average-case LDP by a constant factor, empirically around 30%. Thorough experiments support our theory.
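
As a rough illustration of the LDP pattern the abstract describes for D(S)GD, the sketch below has each agent clip and noise its local gradient before sharing, so neighbors only ever receive privatized messages. This is a minimal, hypothetical example, not the thesis's algorithm: the Gaussian mechanism, the clipping bound, the noise scale, and the uniform mixing weights are all assumptions made here for concreteness.

import numpy as np

rng = np.random.default_rng(0)

def privatize_gradient(grad, clip_norm=1.0, sigma=0.5):
    # Clip the local gradient and add Gaussian noise before it leaves the agent,
    # so only a privatized message is ever shared (the basic LDP pattern).
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)
    return grad + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def decentralized_step(params, local_grads, mixing_weights, lr=0.1):
    # Each agent privatizes its gradient once and broadcasts the noisy copy;
    # agents then aggregate the privatized gradients with their mixing weights.
    shared = [privatize_gradient(g) for g in local_grads]
    return [params[i] - lr * sum(mixing_weights[i][j] * shared[j]
                                 for j in range(len(shared)))
            for i in range(len(params))]

# Toy run: 3 agents with quadratic local objectives ||x - target_i||^2
# and uniform mixing weights, driven toward the average of the targets.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
params = [np.zeros(2) for _ in range(3)]
weights = [[1.0 / 3] * 3 for _ in range(3)]
for _ in range(200):
    grads = [2 * (params[i] - targets[i]) for i in range(3)]
    params = decentralized_step(params, grads, weights)
print([p.round(2) for p in params])
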
Description
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
 
Cataloged from PDF version of thesis.
 
Includes bibliographical references (pages 79-83).
 
Date issued
2019
URI
https://hdl.handle.net/1721.1/124107
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses
