Finding Sparse Subnetworks in Self-Supervised Speech Recognition and Speech Synthesis
Author(s)
Lai, Cheng-I Jeff
Advisor
Glass, James R.
Abstract
The modern paradigm in speech processing has demonstrated the importance of scale and compute for end-to-end speech recognition and synthesis. For instance, state-of-the-art self-supervised speech representation learning models typically consist of more than 300M parameters and are trained on 24 GPUs. While such a paradigm has proven effective in certain offline settings, the extent to which it can be extended to online and small-device scenarios remains unclear.
This thesis is a step toward making advanced speech processing models more parameter-efficient. We aim to answer the following: do sparse subnetworks exist in modern speech processing models, and if so, how can we discover them efficiently? The key contribution is a new pruning algorithm, termed Prune-Adjust-Re-Prune (PARP), that discovers sparse subnetworks efficiently. PARP is inspired by our observation that subnetworks pruned for pre-training tasks need only a slight adjustment to achieve a sizeable performance boost on downstream ASR tasks. We first demonstrate its effectiveness for self-supervised ASR in various low-resource settings. In particular, extensive experiments verify (1) that sparse subnetworks exist in monolingual and multilingual pre-trained self-supervised representations, and (2) the computational advantage and performance gain of PARP over baseline pruning methods.
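To make the prune-adjust-re-prune cycle concrete, here is a minimal PyTorch sketch of the idea as described above: magnitude-prune a pre-trained model, finetune with dense (unmasked) gradient updates so that pruned weights may re-enter the subnetwork, and periodically re-prune back to the target sparsity. The helper names (magnitude_mask, apply_masks, parp_finetune, reprune_every) and the per-tensor pruning criterion are illustrative assumptions, not the thesis's exact implementation.

```python
import torch

def magnitude_mask(model, sparsity):
    """Keep the largest-magnitude weights in each weight matrix.
    Illustrative per-tensor criterion; the actual granularity may differ."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:  # skip biases and norm parameters
            continue
        k = int(sparsity * p.numel())  # number of weights to prune
        if k < 1:
            masks[name] = torch.ones_like(p, dtype=torch.bool)
        else:
            threshold = p.detach().abs().flatten().kthvalue(k).values
            masks[name] = p.detach().abs() > threshold
    return masks

def apply_masks(model, masks):
    """Zero out the currently pruned weights."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name].to(p.dtype))

def parp_finetune(model, loader, optimizer, loss_fn, sparsity,
                  reprune_every=100):
    """Prune-Adjust-Re-Prune (sketch): after the initial prune, gradients
    update *all* weights, so pruned weights can grow back; the mask is
    recomputed and re-applied every `reprune_every` steps."""
    masks = magnitude_mask(model, sparsity)          # initial prune
    apply_masks(model, masks)
    for step, (inputs, targets) in enumerate(loader):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()                             # adjust: dense update
        if (step + 1) % reprune_every == 0:
            masks = magnitude_mask(model, sparsity)  # re-prune
            apply_masks(model, masks)
    apply_masks(model, masks)   # final subnetwork at target sparsity
    return model, masks
```

The key departure from standard pruning-aware finetuning in this sketch is that the gradient step is dense: weights zeroed at the last re-prune can recover nonzero values and displace other weights at the next re-prune, which is what allows a task-agnostic initial subnetwork to be cheaply adjusted for the downstream task.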
In the second study, we extend PARP to end-to-end TTS, covering both spectrogram-prediction networks and vocoders. We thoroughly investigate the tradeoff between sparsity and its effects on the synthesized speech. The findings suggest not only that end-to-end TTS models are highly prunable, but also, perhaps surprisingly, that pruned TTS models can produce synthetic speech of equal or higher naturalness and intelligibility, with similar prosody.
Date issued
2022-05
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology