Efficient Deep Learning with Sparsity: Algorithms, Systems, and Applications
Author: Liu, Zhijian
Advisor: Han, Song
Abstract
Deep learning has been used across a broad spectrum of applications, including computer vision, natural language processing, and scientific discovery. However, behind its remarkable performance lies an increasing gap between the demand for and supply of computation. On the demand side, the computational costs of deep neural networks have surged dramatically, driven by ever-larger input and model sizes. On the supply side, as Moore's Law slows down, hardware no longer delivers increasing performance within the same power budget.
In this dissertation, we present solutions across the algorithm, system, and application stacks that address this demand-supply gap through the lens of sparsity. In Part I, we develop two algorithms, SparseViT and SparseRefine, that identify sparsity within dense input data, and we introduce new sparse primitives, PVCNN and FlatFormer, that process sparse inputs efficiently. In Part II, we present the system library TorchSparse, which optimizes existing sparse primitives and translates the theoretical savings of sparsity into practical speedups on hardware. In Part III, we apply sparsity to accelerate a range of computation-intensive AI applications, such as autonomous driving and language modeling. We conclude the dissertation with a vision for building more efficient and accessible AI.
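To make the core idea concrete, the following is a minimal sketch, not the dissertation's actual SparseViT or TorchSparse implementation, of how restricting computation to the active subset of an input turns sparsity into proportional FLOP savings; the function name, mask, and layer here are illustrative assumptions only.

```python
import torch

def sparse_token_compute(tokens, active_mask, layer):
    """Apply `layer` only to tokens selected by `active_mask`,
    leaving inactive tokens unchanged. This is the generic
    gather-compute-scatter pattern behind input-sparsity methods."""
    out = tokens.clone()
    active = tokens[active_mask]      # gather the active rows
    out[active_mask] = layer(active)  # compute only on that subset
    return out

# Example: 1024 tokens, 25% active -> roughly 4x fewer FLOPs in `layer`.
tokens = torch.randn(1024, 256)
mask = torch.rand(1024) < 0.25
mlp = torch.nn.Linear(256, 256)
out = sparse_token_compute(tokens, mask, mlp)
```

Realizing this saving in wall-clock time is not automatic: the gather and scatter steps introduce irregular memory access, which is precisely the gap that system libraries such as TorchSparse are designed to close.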
Date issued: 2024-05
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher: Massachusetts Institute of Technology