BURLAP: Bits of Useful Randomness enable Learning with Adjustable Privacy
Author(s)
Reyes, Rene David
Advisor
Kagal, Lalana
Abstract
Training accurate models over sensitive data that is distributed among multiple users is an important problem in Machine Learning (ML). Good solutions would open the door to the use of these powerful algorithms in high-impact domains such as healthcare, finance and policy.
While cryptography-based approaches such as Fully Homomorphic Encryption (FHE) can provide privacy guarantees that have been rigorously characterized and proven, their adoption faces two main practical hurdles. First, these tools often incur a significant computational overhead that does not scale to the size of state-of-the-art models. Second, they rely on advanced mathematical concepts that are unfamiliar to most ML practitioners, imposing a steep learning curve.
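To make the underlying idea concrete, the short Python sketch below implements Paillier, a classical additively homomorphic scheme: anyone holding two ciphertexts can produce an encryption of the sum of their plaintexts without ever decrypting. This is a toy illustration with deliberately insecure parameters, not production code, and Paillier is only additively homomorphic; FHE schemes such as BGV or CKKS additionally support multiplication on ciphertexts, which is what makes computation over encrypted model parameters possible in principle.

import random
from math import gcd

# Toy Paillier keys. The tiny primes are insecure and chosen only so the
# arithmetic is easy to follow; real deployments use ~1024-bit primes.
p, q = 1009, 1013
n = p * q
n_sq = n * n
g = n + 1                                      # standard simple generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)          # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(2, n)                 # fresh randomness per ciphertext
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (L(pow(c, lam, n_sq)) * mu) % n

# The key property: multiplying ciphertexts adds the underlying plaintexts,
# so a server can aggregate values it cannot read.
c1, c2 = encrypt(17), encrypt(25)
assert decrypt((c1 * c2) % n_sq) == 42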
The first challenge is a major research question that is being addressed by many cryptographers and engineers. On the theoretical side, there is a quest for better algorithms and constructions. On the practical side, significant effort is being put into performance engineering and hardware acceleration.
In this work, we focus on the second hurdle and design BURLAP, a detailed protocol that combines cryptographic tools with various ML techniques to provide a secure training framework. We provide a proof-of-concept implementation to show that this system can be realized with existing tools, but find that it does not yet scale to the distributed ML setting we are interested in. Nonetheless, given other ongoing efforts to make FHE more practical, we believe BURLAP is a significant conceptual step towards bridging the gap between cryptography and ML.
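For intuition about the kind of workflow such a framework targets, here is a self-contained, hypothetical sketch; it is not the BURLAP protocol, whose FHE-based design is specified in the thesis body, but a simpler masking-based secure-aggregation stand-in with the same goal: clients hide their local gradient updates behind pairwise random masks that cancel in aggregate, so a server learns the sum of the updates but no individual contribution. The client names and scalar "gradients" are invented for illustration.

import random

PRIME = 2**61 - 1                     # arithmetic is done modulo a large prime
clients = {"a": 5, "b": 11, "c": 26}  # toy scalar "gradients" (hypothetical)

# Each pair (i, j) with i < j agrees on a random mask; i adds it and j
# subtracts it, so every mask cancels once all contributions are summed.
names = sorted(clients)
masks = {(i, j): random.randrange(PRIME) for i in names for j in names if i < j}

def masked_update(name):
    v = clients[name] % PRIME
    for (i, j), m in masks.items():
        if i == name:
            v = (v + m) % PRIME
        elif j == name:
            v = (v - m) % PRIME
    return v

# The server sees only masked values, yet recovers the exact aggregate.
aggregate = sum(masked_update(n) for n in names) % PRIME
assert aggregate == sum(clients.values())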
Date issued
2023-06
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology