Simple item record

dc.contributor.author: Wang, Kuan
dc.contributor.author: Liu, Zhijian
dc.contributor.author: Lin, Yujun
dc.contributor.author: Lin, Ji
dc.contributor.author: Han, Song
dc.date.accessioned: 2021-01-22T13:26:59Z
dc.date.available: 2021-01-22T13:26:59Z
dc.date.issued: 2019-06
dc.identifier.isbn: 9781728132938
dc.identifier.isbn: 9781728132945
dc.identifier.uri: https://hdl.handle.net/1721.1/129522
dc.description.abstract: Model quantization is a widely used technique to compress and accelerate deep neural network (DNN) inference. Emerging DNN hardware accelerators are beginning to support mixed precision (1-8 bits) to further improve computation efficiency, which raises a great challenge: finding the optimal bitwidth for each layer requires domain experts to explore a vast design space, trading off accuracy, latency, energy, and model size, a process that is both time-consuming and sub-optimal. There is plenty of specialized hardware for neural networks, but little research has been done on specializing neural network optimization for a particular hardware architecture. Conventional quantization algorithms ignore differences between hardware architectures and quantize all layers in a uniform way. In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ) framework, which leverages reinforcement learning to automatically determine the quantization policy, and we take the hardware accelerator's feedback into the design loop. Rather than relying on proxy signals such as FLOPs and model size, we employ a hardware simulator to generate direct feedback signals (latency and energy) for the RL agent. Compared with conventional methods, our framework is fully automated and can specialize the quantization policy for different neural network architectures and hardware architectures. Our framework reduced latency by 1.4-1.95x and energy consumption by 1.9x, with negligible loss of accuracy, compared with fixed-bitwidth (8-bit) quantization. Our framework reveals that the optimal policies on different hardware architectures (i.e., edge and cloud architectures) under different resource constraints (i.e., latency, energy, and model size) are drastically different. We interpret the implications of the different quantization policies, which offer insights for both neural network architecture design and hardware architecture design. [en_US]
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.relation.isversionof: 10.1109/CVPR.2019.00881 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: arXiv [en_US]
dc.title: HAQ: Hardware-Aware Automated Quantization With Mixed Precision [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Wang, Kuan et al. “HAQ: Hardware-Aware Automated Quantization With Mixed Precision.” Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, 16-20 June 2019. IEEE © 2019 The Author(s). [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.relation.journal: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition [en_US]
dc.eprint.version: Original manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2020-12-17T16:02:59Z
dspace.orderedauthors: Wang, K; Liu, Z; Lin, Y; Lin, J; Han, S [en_US]
dspace.date.submission: 2020-12-17T16:03:03Z
mit.journal.volume: 2019-June [en_US]
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Complete
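
As a rough illustration of the per-layer mixed-precision idea described in the abstract above, the Python sketch below applies generic k-bit linear quantization to each layer of a toy model under a hand-written bitwidth policy. This is not the authors' HAQ implementation (which couples an RL agent to a hardware simulator for latency and energy feedback); the layer names, bitwidths, and size estimate are hypothetical and only show what a per-layer bitwidth assignment means in practice.

import numpy as np

def linear_quantize(w, bits):
    # Symmetric k-bit linear quantization: round onto a uniform grid, then dequantize.
    levels = 2 ** (bits - 1) - 1                      # e.g. 127 usable levels for 8 bits
    scale = max(float(np.abs(w).max()) / levels, 1e-8)  # guard against all-zero weights
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale

# Hypothetical per-layer bitwidth policy (the kind of assignment a search would output).
policy = {"conv1": 8, "conv2": 6, "conv3": 4, "fc": 5}
layers = {name: np.random.randn(256, 256).astype(np.float32) for name in policy}

total_bits = 0
for name, w in layers.items():
    w_q = linear_quantize(w, policy[name])
    total_bits += w.size * policy[name]
    print(f"{name}: {policy[name]}-bit, mean abs error {np.abs(w - w_q).mean():.4f}")

fp32_mb = sum(w.size for w in layers.values()) * 4 / 1e6
print(f"approx. weight storage: {total_bits / 8 / 1e6:.2f} MB vs {fp32_mb:.2f} MB in fp32")

Running the sketch prints the quantization error per layer and a crude storage estimate, showing why lower bitwidths shrink the model at the cost of precision; choosing these bitwidths per layer and per hardware target is the design space the paper automates.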

