Mixed-precision NN accelerator with neural-hardware architecture search

Author(s)
Lin, Yujun (S.M., Massachusetts Institute of Technology); Hafdi, Driss
Download: 1192486801-MIT.pdf (2.284 MB)
Alternative title
Mixed-precision neural network accelerator with neural-hardware architecture search
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Song Han.
Terms of use
MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, available at http://dspace.mit.edu/handle/1721.1/7582
Abstract
Neural architecture and hardware architecture co-design is an effective way to enable specialization and acceleration for deep neural networks (DNNs). The design space and its exploration methodology affect both efficiency and productivity, yet designing either architecture is challenging. We first propose a mixed-precision accelerator, a highly parameterized architecture that can adapt to different bit widths for different quantized layers with significantly reduced overhead. It efficiently provides a vast design space for both the neural and the hardware architecture. However, such an enormous design space is difficult to exhaust with rule-based heuristics. To tackle this problem, we propose a machine-learning-based design and optimization methodology for neural network accelerators. It combines evolution-strategy-based hardware architecture search with one-shot, HyperNet-based quantized neural architecture search. Evaluated on existing DNN benchmarks, our mixed-precision accelerator achieves 11.7x and 1.5x speedups and 10.5x and 1.9x energy savings over Eyeriss [3] and BitFusion [35], respectively, under the same area, frequency, and process technology. Our machine-learning-based co-design composes highly matched neural-hardware architectures and surpasses the best human-designed architectures by an additional 1.3x speedup and 1.5x energy savings at the same ImageNet accuracy, with better sample efficiency.
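To make the evolution-strategy hardware search mentioned in the abstract concrete, here is a minimal Python sketch of that style of loop. The design-space dimensions (PE array shape, SRAM size, supported bit widths), the mutation scheme, and the cost model are illustrative assumptions, not the thesis's actual implementation; a real flow would score candidates with a cycle-accurate simulator or analytical latency/energy model.

```python
# Hypothetical sketch of an evolution-strategy search over a parameterized
# accelerator design space. All fields and the fitness function are
# assumptions for illustration only.
import random

# Each design dimension takes values from a small discrete set, mirroring
# the "highly parameterized architecture" idea in the abstract.
DESIGN_SPACE = {
    "pe_rows":    [8, 16, 32],
    "pe_cols":    [8, 16, 32],
    "sram_kb":    [64, 128, 256, 512],
    "bit_widths": [(4,), (8,), (4, 8), (4, 8, 16)],
}

def random_config():
    """Sample one candidate accelerator configuration."""
    return {k: random.choice(v) for k, v in DESIGN_SPACE.items()}

def mutate(cfg):
    """Re-sample one randomly chosen design dimension."""
    child = dict(cfg)
    key = random.choice(list(DESIGN_SPACE))
    child[key] = random.choice(DESIGN_SPACE[key])
    return child

def fitness(cfg):
    """Placeholder cost model: rewards parallelism and bit-width
    flexibility, penalizes on-chip buffer area. A real search would
    invoke a hardware performance/energy model here."""
    throughput = cfg["pe_rows"] * cfg["pe_cols"] * len(cfg["bit_widths"])
    area_penalty = cfg["sram_kb"] / 512
    return throughput - 32 * area_penalty

def evolution_search(pop_size=32, generations=50, parents=8):
    """Keep the best `parents` configs each generation and refill the
    population with their mutated offspring."""
    population = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[:parents]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - parents)]
    return max(population, key=fitness)

if __name__ == "__main__":
    print("best configuration found:", evolution_search())
```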
Description
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020
 
Cataloged from the official PDF of thesis. "Part of the work in this thesis was done in collaboration with another student, Driss Hafdi. The credit for the design and implementation of accelerator architecture in this thesis was shared by both of us"--Page 5 Disclaimer.
 
Includes bibliographical references (pages 61-65).
 
Date issued
2020
URI
https://hdl.handle.net/1721.1/127353
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses
