
dc.contributor.author: Wang, Kuan
dc.contributor.author: Liu, Zhijian
dc.contributor.author: Lin, Yujun
dc.contributor.author: Lin, Ji
dc.contributor.author: Han, Song
dc.date.accessioned: 2021-09-20T17:17:24Z
dc.date.available: 2021-09-20T17:17:24Z
dc.date.issued: 2020-06-11
dc.identifier.uri: https://hdl.handle.net/1721.1/131513
dc.description.abstract: Model quantization is a widely used technique to compress and accelerate deep neural network (DNN) inference. Emergent DNN hardware accelerators begin to support flexible bitwidths (1–8 bits) to further improve computation efficiency, which raises a great challenge: finding the optimal bitwidth for each layer requires domain experts to explore a vast design space, trading off accuracy, latency, energy, and model size, which is both time-consuming and usually sub-optimal. There are plenty of specialized hardware accelerators for neural networks, but little research has been done to design specialized neural networks optimized for a particular hardware accelerator. The latter is demanding given the much longer design cycle of silicon than of neural nets. Conventional quantization algorithms ignore the different hardware architectures and quantize all the layers in a uniform way. In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ) framework, which automatically determines the quantization policy, and we take the hardware accelerator's feedback into the design loop. Rather than relying on proxy signals such as FLOPs and model size, we employ a hardware simulator to generate direct feedback signals to the RL agent. Compared with conventional methods, our framework is fully automated and can specialize the quantization policy for different neural network architectures and hardware architectures. Our framework effectively reduced the latency by 1.4–1.95× and the energy consumption by 1.9× with negligible loss of accuracy compared with fixed-bitwidth (8-bit) quantization. Our framework reveals that the optimal policies on different hardware architectures (i.e., edge and cloud architectures) under different resource constraints (i.e., latency, energy, and model size) are drastically different. We interpreted the implication of different quantization policies, which offer insights for both neural network architecture design and hardware architecture design. [en_US]
dc.publisher: Springer US [en_US]
dc.relation.isversionof: https://doi.org/10.1007/s11263-020-01339-6 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: Springer US [en_US]
dc.title: Hardware-Centric AutoML for Mixed-Precision Quantization [en_US]
dc.type: Article [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2020-09-24T21:24:28Z
dc.language.rfc3066: en
dc.rights.holder: Springer Science+Business Media, LLC, part of Springer Nature
dspace.embargo.terms: Y
dspace.date.submission: 2020-09-24T21:24:28Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work and Publication Information Needed
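
The abstract above describes a hardware-in-the-loop search in which an agent proposes per-layer bitwidths and receives direct accuracy/latency feedback from a hardware simulator rather than proxy signals such as FLOPs. Below is a minimal illustrative sketch of such a loop, not the authors' implementation: the names HardwareSimulator, evaluate_accuracy, and search, the toy latency model, the latency budget, and the random policy proposals (standing in for the RL agent) are all hypothetical placeholders.

import random

NUM_LAYERS = 10
BITWIDTHS = list(range(1, 9))        # accelerator supports flexible 1-8 bit precision
LATENCY_BUDGET_MS = 18.0             # assumed per-inference resource constraint


class HardwareSimulator:
    """Stand-in for a simulator that reports the latency of a quantization policy."""

    def latency_ms(self, policy):
        # Toy cost model: latency grows with each layer's bitwidth.
        return sum(0.3 * b for b in policy)


def evaluate_accuracy(policy):
    # Placeholder: in practice one would quantize the network with `policy`
    # (one bitwidth per layer), fine-tune briefly, and measure validation accuracy.
    return 0.70 + 0.03 * (sum(policy) / (8 * NUM_LAYERS)) + random.gauss(0, 0.002)


def search(num_episodes=100):
    sim = HardwareSimulator()
    best_policy, best_acc = None, -1.0
    for _ in range(num_episodes):
        # A real agent (e.g. an RL policy) would propose bitwidths from observed
        # layer state; random sampling keeps this sketch short.
        policy = [random.choice(BITWIDTHS) for _ in range(NUM_LAYERS)]
        if sim.latency_ms(policy) > LATENCY_BUDGET_MS:
            continue                 # reject policies that violate the hardware budget
        acc = evaluate_accuracy(policy)
        if acc > best_acc:
            best_policy, best_acc = policy, acc
    return best_policy, best_acc


if __name__ == "__main__":
    policy, acc = search()
    print("best per-layer bitwidths:", policy, "accuracy:", round(acc, 4))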

