Show simple item record

dc.contributor.advisor    Song Han    en_US
dc.contributor.author    Wang, Hanrui, S.M. Massachusetts Institute of Technology    en_US
dc.contributor.other    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science    en_US
dc.date.accessioned    2020-09-15T21:58:08Z
dc.date.available    2020-09-15T21:58:08Z
dc.date.copyright    2020    en_US
dc.date.issued    2020    en_US
dc.identifier.uri    https://hdl.handle.net/1721.1/127440
dc.description    Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020    en_US
dc.description    Cataloged from the official PDF of thesis.    en_US
dc.description    Includes bibliographical references (pages 71-81).    en_US
dc.description.abstract    Natural Language Processing (NLP) is essential for many real-world applications, such as machine translation and chatbots. Recently, NLP has witnessed rapid progress driven by Transformer models with the attention mechanism. Despite their high performance, Transformers are challenging to deploy because of their intensive computation. In this thesis, we present an algorithm-hardware co-design approach to enable efficient Transformer inference. On the algorithm side, we propose the Hardware-Aware Transformer (HAT) framework, which leverages Neural Architecture Search (NAS) to search for a specialized low-latency Transformer model for each hardware platform. We construct a large design space with novel arbitrary encoder-decoder attention and heterogeneous layers. We then train a SuperTransformer that covers all candidates in the design space and efficiently produces many SubTransformers through weight sharing.    en_US
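The weight-sharing idea in the abstract can be illustrated with a minimal PyTorch-style sketch. This is not the thesis implementation: the layer name, embedding width, and candidate dimensions below are illustrative assumptions. Each SubTransformer reuses a front slice of the SuperTransformer's weights, so switching between candidates requires no retraining.

    # Minimal sketch (illustrative, not the HAT code): a weight-shared linear
    # layer from which SubTransformers of different widths are sliced.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    SUPER_DIM = 640                   # assumed SuperTransformer embedding width
    CANDIDATE_DIMS = [320, 512, 640]  # assumed width choices in the design space

    class SuperLinear(nn.Linear):
        """Linear layer whose active output slice is chosen per SubTransformer."""
        def forward(self, x, out_dim):
            # Weight sharing: a SubTransformer reuses the first `out_dim` rows
            # of the SuperTransformer weight, so no separate training is needed.
            w = self.weight[:out_dim, :x.size(-1)]
            b = self.bias[:out_dim]
            return F.linear(x, w, b)

    layer = SuperLinear(SUPER_DIM, SUPER_DIM)
    x = torch.randn(2, 10, SUPER_DIM)
    for d in CANDIDATE_DIMS:
        y = layer(x, out_dim=d)       # same shared weights, different sub-model
        print(d, y.shape)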
dc.description.abstract    We perform an evolutionary search under a hardware latency constraint to find a SubTransformer model for the target hardware. On the hardware side, since general-purpose platforms are inefficient at executing attention layers, we further design an accelerator named SpAtten for efficient attention inference. SpAtten introduces a novel token pruning technique to reduce total memory access and computation. The pruned tokens are selected on the fly based on their importance to the sentence, which makes the technique fundamentally different from weight pruning. We therefore design a high-parallelism top-k engine to perform the token selection efficiently. SpAtten also supports dynamic low precision, allowing different bitwidths across layers according to the attention probability distribution. Measured on a Raspberry Pi, HAT achieves a 3X speedup and a 3.7X smaller model size with 12,041X lower search cost than the baselines.    en_US
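The token pruning criterion described above can be sketched in a few lines of Python, assuming a cumulative-attention importance score and a software top-k in place of SpAtten's high-parallelism hardware engine; the function name and keep ratio are illustrative assumptions, not from the thesis.

    # Minimal sketch (illustrative, not SpAtten's hardware implementation):
    # prune tokens on the fly by keeping the top-k tokens ranked by the
    # attention probability they accumulate.
    import torch

    def prune_tokens(attn_probs, hidden, keep_ratio=0.5):
        """attn_probs: (heads, seq, seq) attention probabilities after softmax.
        hidden: (seq, dim) token representations.
        Returns kept hidden states and the indices of kept tokens."""
        # Importance of a token = total attention it receives, summed over
        # heads and query positions.
        importance = attn_probs.sum(dim=(0, 1))            # (seq,)
        k = max(1, int(importance.numel() * keep_ratio))
        keep = torch.topk(importance, k).indices.sort().values
        return hidden[keep], keep

    attn = torch.softmax(torch.randn(8, 16, 16), dim=-1)
    hidden = torch.randn(16, 512)
    pruned, idx = prune_tokens(attn, hidden, keep_ratio=0.5)
    print(pruned.shape, idx)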
dc.description.abstract    For attention layer inference, SpAtten reduces DRAM access by 10.4X and achieves 193X and 6218X speedups and 702X and 1244X energy savings over a TITAN Xp GPU and a Raspberry Pi ARM CPU, respectively.    en_US
dc.description.statementofresponsibility    by Hanrui Wang.    en_US
dc.format.extent    81 pages    en_US
dc.language.iso    eng    en_US
dc.publisher    Massachusetts Institute of Technology    en_US
dc.rights    MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided.    en_US
dc.rights.uri    http://dspace.mit.edu/handle/1721.1/7582    en_US
dc.subject    Electrical Engineering and Computer Science.    en_US
dc.title    Efficient algorithms and hardware for Natural Language Processing    en_US
dc.title.alternative    Efficient algorithms and hardware for NLP    en_US
dc.type    Thesis    en_US
dc.description.degree    S.M.    en_US
dc.contributor.department    Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science    en_US
dc.identifier.oclc    1192966271    en_US
dc.description.collection    S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science    en_US
dspace.imported    2020-09-15T21:58:07Z    en_US
mit.thesis.degree    Master    en_US
mit.thesis.department    EECS    en_US

