dc.contributor.advisor | Han, Song | |
dc.contributor.author | Lin, Ji | |
dc.date.accessioned | 2024-03-21T19:09:19Z | |
dc.date.available | 2024-03-21T19:09:19Z | |
dc.date.issued | 2024-02 | |
dc.date.submitted | 2024-02-21T17:18:52.793Z | |
dc.identifier.uri | https://hdl.handle.net/1721.1/153837 | |
dc.description.abstract | Deep learning has prevailed in various fields and fundamentally changed human society. Efficiency is the key to democratizing deep learning and broadening its applications. It becomes increasingly important as Moore’s law slows down while model sizes scale up ever faster. We need efficient algorithms and systems to bridge the gap.
In this thesis, we discuss techniques to improve the efficiency of deep learning by removing redundancies. We study efficient deep learning computing at the two extremes of scaling: tiny machine learning (TinyML) and large language models (LLMs). TinyML aims to run deep learning models on low-power IoT devices with tight memory constraints. We explore a system-algorithm co-design approach to remove redundant memory usage and enable real-life applications on commercial microcontrollers, achieving a milestone ImageNet accuracy of 70% for the first time. We further extend the solution from inference to training and enable on-device learning under only 256 KB of memory. Similar to TinyML, the gigantic model sizes of LLMs exceed the capacity of even the most advanced GPUs. We develop post-training quantization schemes for different serving workloads to reduce redundant bits in weights and activations, enabling W8A8 quantization (SmoothQuant) for compute-bound inference and W4A16 quantization (AWQ) for memory-bound inference. We further develop TinyChat, an efficient and Python-native serving system, to realize the speedup from quantization. Finally, we discuss domain-specific optimization opportunities, including efficient video recognition with the Temporal Shift Module (TSM) and image generation with Anycost GANs, where we reduce application-specific redundancies with specialized model designs. | |
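A minimal sketch of the per-channel smoothing idea behind the W8A8 scheme (SmoothQuant) summarized in the abstract, assuming a single linear layer; the tensor shapes, helper names, and the symmetric per-tensor INT8 fake-quantization are illustrative assumptions, not the thesis implementation:

```python
# Illustrative sketch: migrate quantization difficulty from activations to
# weights with a per-input-channel smoothing factor, then fake-quantize both.
import torch

def smooth(x, w, alpha=0.5):
    # x: [n_tokens, in_features] activations; w: [out_features, in_features] weights
    act_scale = x.abs().amax(dim=0)                 # per-channel activation range
    w_scale = w.abs().amax(dim=0)                   # per-channel weight range
    s = (act_scale.pow(alpha) / w_scale.pow(1 - alpha)).clamp(min=1e-5)
    # Dividing x and multiplying w by the same s keeps x @ w.T unchanged.
    return x / s, w * s

def fake_quant_int8(t):
    # Symmetric per-tensor INT8 fake-quantization (round-trip through int grid).
    scale = t.abs().max() / 127
    return (t / scale).round().clamp(-127, 127) * scale

x = torch.randn(16, 64)
w = torch.randn(32, 64)
x_s, w_s = smooth(x, w)
y_ref = x @ w.T                                     # full-precision reference
y_q = fake_quant_int8(x_s) @ fake_quant_int8(w_s).T # W8A8 on the smoothed pair
print((y_ref - y_q).abs().mean())
```

The key property is that scaling activations down and weights up by the same per-channel factor leaves the layer output mathematically unchanged while shrinking activation outliers, so both operands can be quantized to 8 bits with less error.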
dc.publisher | Massachusetts Institute of Technology | |
dc.rights | In Copyright - Educational Use Permitted | |
dc.rights | Copyright retained by author(s) | |
dc.rights.uri | https://rightsstatements.org/page/InC-EDU/1.0/ | |
dc.title | Efficient Deep Learning Computing: From TinyML to LargeLM | |
dc.type | Thesis | |
dc.description.degree | Ph.D. | |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
dc.identifier.orcid | https://orcid.org/0000-0001-6053-4344 | |
mit.thesis.degree | Doctoral | |
thesis.degree.name | Doctor of Philosophy | |