Efficient Deep Learning Computing: From TinyML to LargeLM

Author(s)
Lin, Ji
Download
Thesis PDF (29.20 MB)
Advisor
Han, Song
Terms of use
In Copyright - Educational Use Permitted. Copyright retained by author(s). https://rightsstatements.org/page/InC-EDU/1.0/
Abstract
Deep learning has prevailed in various fields and fundamentally changed human society. Efficiency is the key factor in democratizing deep learning and broadening its applications, and it grows ever more important as Moore's law slows down while model-size scaling speeds up. We need efficient algorithms and systems to bridge the gap. In this thesis, we discuss techniques to improve the efficiency of deep learning by removing redundancies. We study efficient deep learning computing at the two extremes of scaling: tiny machine learning (TinyML) and large language models (LLMs). TinyML aims to run deep learning models on low-power IoT devices with tight memory constraints. We explore a system-algorithm co-design approach that removes redundant memory usage and enables real-life applications on commercial microcontrollers, achieving a milestone ImageNet accuracy of 70% for the first time. We further extend the solution from inference to training and enable on-device learning under only 256 KB of memory. Similar to TinyML, the gigantic model sizes of LLMs also exceed hardware capability, even for the most advanced GPUs. We develop post-training quantization schemes for different serving workloads to reduce redundant bits in weights and activations, enabling W8A8 quantization (SmoothQuant) for compute-bound inference and W4A16 quantization (AWQ) for memory-bound inference. We further develop TinyChat, an efficient and Python-native serving system, to realize the speedup from quantization. Finally, we discuss domain-specific optimization opportunities, including efficient video recognition with the Temporal Shift Module (TSM) and image generation with Anycost GANs, where we reduce application-specific redundancies with specialized model designs.
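As a concrete illustration of the W8A8 scheme named in the abstract: the published idea behind SmoothQuant is to migrate quantization difficulty from activations to weights with a per-input-channel factor s_j = max|X_j|^α / max|W_j|^(1−α), then rescale so that (X/s)(sW) equals XW exactly. The sketch below is a minimal NumPy rendition of that rescaling under assumed shapes and a pre-collected calibration statistic; the function names are illustrative, not the thesis's actual API.

```python
import numpy as np

def smoothing_factors(act_absmax, weight, alpha=0.5):
    """Per-input-channel factors s_j = max|X_j|**alpha / max|W_j|**(1-alpha).

    act_absmax : (in_features,) abs-max of each activation channel,
                 collected offline on a small calibration set (assumed given).
    weight     : (out_features, in_features) linear-layer weight.
    alpha      : migration strength; 0.5 balances activations and weights.
    """
    w_absmax = np.abs(weight).max(axis=0)  # abs-max per input channel
    s = act_absmax ** alpha / np.maximum(w_absmax, 1e-5) ** (1.0 - alpha)
    return np.maximum(s, 1e-5)

def smooth(x, weight, s):
    """Equivalent rescaling: (x / s) @ (weight * s).T == x @ weight.T.
    The smoothed activations have flatter channel ranges, so both sides
    quantize well to INT8 (the W8A8 setting)."""
    return x / s, weight * s

# Toy check that smoothing preserves the layer output.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8)) * np.array([1, 1, 50, 1, 1, 1, 1, 1.0])  # one outlier channel
w = rng.normal(size=(16, 8))
s = smoothing_factors(np.abs(x).max(axis=0), w)
x_s, w_s = smooth(x, w, s)
assert np.allclose(x @ w.T, x_s @ w_s.T)
```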
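Likewise, the Temporal Shift Module mentioned at the end achieves temporal modeling at zero extra FLOPs by shifting a fraction of the channels along the time axis. Here is a minimal NumPy sketch of that shift; the tensor layout and the 1/8 fold fraction follow the published TSM description, while the standalone-function form (rather than an in-place module inside a 2D CNN) is a simplification for illustration.

```python
import numpy as np

def temporal_shift(x, fold_div=8):
    """Shift a fraction of channels along time (zero-FLOP temporal mixing).

    x : (N, T, C, H, W) video features. fold_div=8 shifts 1/8 of the
        channels toward the past, 1/8 toward the future, rest unchanged.
    """
    fold = x.shape[2] // fold_div
    out = np.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                   # bring future frame features back
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]   # push past frame features forward
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]              # untouched channels
    return out

# Shape-preserving: each frame now sees neighboring-frame features for free.
feats = np.random.default_rng(1).normal(size=(2, 8, 16, 7, 7))
assert temporal_shift(feats).shape == feats.shape
```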
Date issued
2024-02
URI
https://hdl.handle.net/1721.1/153837
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology

Collections
  • Doctoral Theses
