DSpace@MIT
Efficient algorithms and hardware for Natural Language Processing

Author(s)
Wang, Hanrui; S.M., Massachusetts Institute of Technology.
Download
1192966271-MIT.pdf (5.064 MB)
Alternative title
Efficient algorithms and hardware for NLP
Other Contributors
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Advisor
Song Han.
Terms of use
MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. http://dspace.mit.edu/handle/1721.1/7582
Abstract
Natural Language Processing (NLP) is essential for many real-world applications, such as machine translation and chatbots. Recently, NLP has witnessed rapid progress driven by Transformer models with the attention mechanism. Though they achieve high performance, Transformers are challenging to deploy because of their intensive computation. In this thesis, we present an algorithm-hardware co-design approach to enable efficient Transformer inference. On the algorithm side, we propose the Hardware-Aware Transformer (HAT) framework, which leverages Neural Architecture Search (NAS) to find a specialized low-latency Transformer model for each hardware platform. We construct a large design space with novel arbitrary encoder-decoder attention and heterogeneous layers. We then train a SuperTransformer that covers all candidates in the design space and efficiently produces many SubTransformers through weight sharing.
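As a rough illustration of the weight-sharing idea, the sketch below slices a sampled SubTransformer's smaller weight matrices out of a single shared SuperTransformer layer. This is a minimal sketch under assumed dimension names; the class SuperLinear and its arguments are hypothetical and not taken from the HAT codebase.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperLinear(nn.Module):
    # Holds the largest weight in the design space; every sampled
    # SubTransformer reuses (and trains) front slices of it.
    def __init__(self, max_in: int, max_out: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out, max_in) * 0.02)
        self.bias = nn.Parameter(torch.zeros(max_out))

    def forward(self, x: torch.Tensor, sub_in: int, sub_out: int) -> torch.Tensor:
        # Slice the shared parameters down to the sampled dimensions,
        # so all candidate SubTransformers share one set of weights.
        return F.linear(x, self.weight[:sub_out, :sub_in], self.bias[:sub_out])

# Sample one SubTransformer configuration and run the shared layer.
layer = SuperLinear(max_in=1024, max_out=4096)
x = torch.randn(8, 512)                  # sampled embedding dim: 512
y = layer(x, sub_in=512, sub_out=2048)   # sampled hidden dim: 2048
print(y.shape)                           # torch.Size([8, 2048])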
 
We perform an evolutionary search under a hardware latency constraint to find a SubTransformer model for the target hardware. On the hardware side, since general-purpose platforms are inefficient at executing attention layers, we further design an accelerator named SpAtten for efficient attention inference. SpAtten introduces a novel token pruning technique to reduce total memory access and computation. The pruned tokens are selected on the fly based on their importance to the sentence, which makes the technique fundamentally different from weight pruning. We therefore design a high-parallelism top-k engine to perform the token selection efficiently. SpAtten also supports dynamic low precision, allowing different bitwidths across layers according to the attention probability distribution. Measured on a Raspberry Pi, HAT achieves a 3X speedup and a 3.7X smaller model size with 12,041X lower search cost over baselines.
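The sketch below illustrates attention-based token pruning in the spirit of SpAtten, assuming a token's importance is the cumulative attention probability it receives; the function name and keep_ratio parameter are illustrative, not SpAtten's actual interface.

import torch

def prune_tokens(attn_probs: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    # attn_probs: (heads, query_len, key_len) softmax attention matrix.
    # Importance score per token: total attention it receives, summed
    # over heads and queries (SpAtten accumulates such scores on the fly).
    scores = attn_probs.sum(dim=(0, 1))        # (key_len,)
    k = max(1, int(keep_ratio * scores.numel()))
    keep = torch.topk(scores, k).indices       # the top-k step SpAtten accelerates
    return torch.sort(keep).values             # preserve original token order

attn = torch.softmax(torch.randn(8, 16, 16), dim=-1)  # 8 heads, 16 tokens
kept = prune_tokens(attn, keep_ratio=0.5)
print(kept)  # indices of the 8 most-attended tokens to keep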
 
For attention layer inference, SpAtten reduces DRAM access by 10.4X and achieves 193X and 6218X speedups, and 702X and 1244X energy savings, over a TITAN Xp GPU and a Raspberry Pi ARM CPU, respectively.
 
Description
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May 2020
 
Cataloged from the official PDF of the thesis.
 
Includes bibliographical references (pages 71-81).
 
Date issued
2020
URI
https://hdl.handle.net/1721.1/127440
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology
Keywords
Electrical Engineering and Computer Science.

Collections
  • Graduate Theses
