Random Sequential Encoders for Private Data Release in NLP
Author(s)
Jaba, Andrea
Advisor
Medard, Muriel
Esfahanizadeh, Homa
Abstract
There are many scenarios that motivate data owners to outsource the training of machine learning models on their data to external model developers. In doing so, it is in the data owners' best interest to keep their data private, meaning that no third party, including the model developer, can learn anything more about the data than the labels associated with the machine learning task; this is difficult to guarantee while maintaining the model's utility on that task. In computer vision, lightweight random convolutional networks have shown potential as encoders that balance privacy and utility. This thesis presents a novel exploration of random sequential encoders, namely (1) random recurrent neural networks and (2) random long short-term memory networks, as encoding schemes for private data release in natural language processing. Experiments were conducted to evaluate the utility and privacy of these encoders against known baseline encoding schemes with less privacy: (1) no encoder and (2) a random linear encoder. For the private release of a spam classification dataset, random long short-term memory encoders maintained the most utility among all random encoders while remaining relatively robust to the privacy attacks considered in this thesis, signaling a promising direction for future experiments.
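For illustration only, and not taken from the thesis, the sketch below shows one way a randomly initialized, frozen LSTM could act as a one-way encoder applied to text before release; the class name, dimensions, seed, and usage are assumptions, written in PyTorch.

```python
# Illustrative sketch (hypothetical, not the thesis code): a randomly
# initialized, frozen LSTM used to encode token sequences before data release.
import torch
import torch.nn as nn

class RandomLSTMEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, seed=0):
        super().__init__()
        torch.manual_seed(seed)                  # fixed random weights, never trained
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        for p in self.parameters():              # freeze: the encoder stays random
            p.requires_grad = False

    @torch.no_grad()
    def forward(self, token_ids):                # token_ids: (batch, seq_len) of ints
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)
        return h_n[-1]                           # (batch, hidden_dim) encoded representation

# Hypothetical usage: the data owner releases the encodings and task labels
# instead of raw text, and the model developer trains a classifier on them.
encoder = RandomLSTMEncoder(vocab_size=10000)
sample = torch.randint(0, 10000, (4, 20))        # toy batch of token ids
encodings = encoder(sample)                      # shape: (4, 128)
```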
Date issued
2022-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology