Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks
Author(s): Chen, Yu-Hsin; Krishna, Tushar; Emer, Joel S.; Sze, Vivienne
Deep learning using convolutional neural networks (CNNs) gives state-of-the-art accuracy on many computer vision tasks (e.g., object detection, recognition, segmentation). Convolutions account for over 90% of the processing in CNNs for both inference/testing and training, and fully convolutional networks are increasingly being used. Achieving state-of-the-art accuracy requires CNNs with not only a large number of layers, but also millions of filter weights and varying shapes (i.e., filter sizes, number of filters, number of channels), as shown in Fig. 14.5.1. For instance, AlexNet uses 2.3 million weights (4.6MB of storage) and requires 666 million MACs per 227×227 image (13kMACs/pixel). VGG16 uses 14.7 million weights (29.4MB of storage) and requires 15.3 billion MACs per 224×224 image (306kMACs/pixel). The large number of filter weights and channels results in substantial data movement, which consumes significant energy.
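The per-pixel MAC figures quoted in the abstract follow directly from the headline totals divided by the input resolution. A minimal sketch of that arithmetic (the function name is mine; weight counts and MAC totals are taken from the abstract):

```python
# Sketch: recovering the kMACs/pixel figures from the abstract's totals.
# Only the MAC totals and image sizes come from the text; the helper is
# illustrative, not part of the paper.

def macs_per_pixel(total_macs, height, width):
    """Total multiply-accumulates divided by the number of input pixels."""
    return total_macs / (height * width)

# AlexNet: 666 million MACs over a 227x227 input image (~13 kMACs/pixel)
alexnet = macs_per_pixel(666e6, 227, 227)

# VGG16: 15.3 billion MACs over a 224x224 input image (~306 kMACs/pixel)
vgg16 = macs_per_pixel(15.3e9, 224, 224)

print(f"AlexNet: {alexnet / 1e3:.1f} kMACs/pixel")
print(f"VGG16:   {vgg16 / 1e3:.1f} kMACs/pixel")
```

Roughly a 24× gap in per-pixel compute between the two networks, which is why the authors emphasize that filter shapes vary widely and an accelerator must be reconfigurable across them.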
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
IEEE International Solid-State Circuits Conference (ISSCC 2016)
Institute of Electrical and Electronics Engineers (IEEE)
Chen, Yu-Hsin, Tushar Krishna, Joel Emer, and Vivienne Sze. "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks." in ISSCC 2016, IEEE International Solid-State Circuits Conference, Jan. 31-Feb. 4, 2016. San Francisco, CA.
Author's final manuscript