
dc.contributor.author: Chen, Yu-Hsin
dc.contributor.author: Krishna, Tushar
dc.contributor.author: Emer, Joel S.
dc.contributor.author: Sze, Vivienne
dc.date.accessioned: 2016-02-10T20:59:22Z
dc.date.available: 2016-02-10T20:59:22Z
dc.date.issued: 2016-02
dc.identifier.isbn: 978-1-4673-9467-3
dc.identifier.uri: http://hdl.handle.net/1721.1/101151
dc.description.abstract: Deep learning using convolutional neural networks (CNNs) gives state-of-the-art accuracy on many computer vision tasks (e.g., object detection, recognition, segmentation). Convolutions account for over 90% of the processing in CNNs for both inference/testing and training, and fully convolutional networks are increasingly being used. Achieving state-of-the-art accuracy requires CNNs with not only a larger number of layers, but also millions of filter weights and varying shapes (i.e., filter sizes, number of filters, number of channels), as shown in Fig. 14.5.1. For instance, AlexNet [1] uses 2.3 million weights (4.6MB of storage) and requires 666 million MACs per 227×227 image (13kMACs/pixel). VGG16 [2] uses 14.7 million weights (29.4MB of storage) and requires 15.3 billion MACs per 224×224 image (306kMACs/pixel). The large number of filter weights and channels results in substantial data movement, which consumes significant energy.
dc.description.sponsorship: United States. Defense Advanced Research Projects Agency (DARPA YFA grant N66001-14-1-4039)
dc.description.sponsorship: Intel Corporation
dc.description.sponsorship: Massachusetts Institute of Technology. Center for Integrated Circuits and Systems
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.isversionof: https://submissions.mirasmart.com/isscc2016/PDF/ISSCC2016AdvanceProgram.pdf
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: Sze
dc.title: Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks
dc.type: Article
dc.identifier.citation: Chen, Yu-Hsin, Tushar Krishna, Joel Emer, and Vivienne Sze. "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks." In ISSCC 2016, IEEE International Solid-State Circuits Conference, Jan. 31-Feb. 4, 2016, San Francisco, CA.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.contributor.approver: Sze, Vivienne
dc.contributor.mitauthor: Chen, Yu-Hsin
dc.contributor.mitauthor: Krishna, Tushar
dc.contributor.mitauthor: Emer, Joel S.
dc.contributor.mitauthor: Sze, Vivienne
dc.relation.journal: IEEE International Solid-State Circuits Conference (ISSCC 2016)
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/ConferencePaper
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dspace.orderedauthors: Chen, Yu-Hsin; Krishna, Tushar; Emer, Joel; Sze, Vivienne
dc.identifier.orcid: https://orcid.org/0000-0002-3459-5466
dc.identifier.orcid: https://orcid.org/0000-0002-4403-956X
dc.identifier.orcid: https://orcid.org/0000-0003-4841-3990
mit.license: OPEN_ACCESS_POLICY
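The throughput and storage figures quoted in the abstract can be sanity-checked with simple arithmetic. The sketch below is not from the paper; the 2-bytes-per-weight (16-bit) inference is derived here from the stated weight counts and storage sizes, not stated in the abstract itself.

```python
# Sanity-check the per-pixel MAC counts quoted in the abstract.
alexnet_macs = 666e6          # MACs per 227x227 image (AlexNet, from the abstract)
alexnet_pixels = 227 * 227
print(f"AlexNet: {alexnet_macs / alexnet_pixels / 1e3:.1f} kMACs/pixel")
# roughly 12.9k, quoted as 13kMACs/pixel

vgg_macs = 15.3e9             # MACs per 224x224 image (VGG16, from the abstract)
vgg_pixels = 224 * 224
print(f"VGG16: {vgg_macs / vgg_pixels / 1e3:.1f} kMACs/pixel")
# roughly 305k, quoted as 306kMACs/pixel (rounding in the quoted MAC total)

# Storage: 2.3M weights -> 4.6MB and 14.7M weights -> 29.4MB
# both imply 2 bytes (16 bits) per weight.
print(4.6e6 / 2.3e6, 29.4e6 / 14.7e6)
```

Both storage ratios coming out to exactly 2 bytes is consistent with 16-bit fixed-point weights, a common choice for inference accelerators of this era.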

