Show simple item record

dc.contributor.author	Han, Song
dc.date.accessioned	2021-01-25T18:12:06Z
dc.date.available	2021-01-25T18:12:06Z
dc.date.issued	2018-06
dc.identifier.isbn	9781450357005
dc.identifier.uri	https://hdl.handle.net/1721.1/129549
dc.description.abstract	Deeper and larger Neural Networks (NNs) have made breakthroughs in many fields, but conventional CMOS-based computing platforms struggle to achieve higher energy efficiency. RRAM-based systems provide a promising solution to build efficient Training-In-Memory Engines (TIME). However, the endurance of RRAM cells is limited, which is a severe issue because the weights of an NN need to be updated thousands to millions of times during training. Gradient sparsification can address this problem by dropping most of the smaller gradients, but it introduces unacceptable computation cost. We propose an effective framework, SGS-ARS, including a Structured Gradient Sparsification (SGS) and an Aging-aware Row Swapping (ARS) scheme, to guarantee write balance across whole RRAM crossbars and prolong the lifetime of TIME. Our experiments demonstrate that a 356× lifetime extension is achieved when TIME is programmed to train ResNet-50 on the ImageNet dataset with our SGS-ARS framework.	en_US
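The abstract's core idea of dropping most of the smaller gradients in a structured way can be sketched as a row-wise top-k filter: by zeroing out all but the few gradient rows with the largest magnitude, weight updates (and hence RRAM writes) are confined to a small, contiguous set of crossbar rows per step. This is a minimal illustrative sketch only; the function name and the L1 row-norm selection criterion are assumptions, not the paper's exact SGS scheme.

```python
import numpy as np

def structured_gradient_sparsification(grad, k):
    """Keep only the k rows of the gradient matrix with the largest
    L1 norm and zero out the rest. Hypothetical sketch: the paper's
    SGS selects structured gradient groups; details may differ."""
    row_norms = np.abs(grad).sum(axis=1)       # one score per crossbar row
    keep = np.argsort(row_norms)[-k:]          # indices of the top-k rows
    sparse = np.zeros_like(grad)
    sparse[keep] = grad[keep]                  # writes touch only these rows
    return sparse, keep

# Example: a 6x4 gradient matrix, keeping the 2 largest-norm rows
rng = np.random.default_rng(0)
g = rng.normal(size=(6, 4))
sg, rows = structured_gradient_sparsification(g, k=2)
```

Because only `k` rows per step receive non-zero updates, a row-swapping policy (such as the ARS scheme named in the abstract) can then rotate which physical rows absorb those writes, balancing wear across the crossbar.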
dc.description.sponsorship	National Key R&D Program of China (Grant 2017YFA0207600)	en_US
dc.description.sponsorship	National Natural Science Foundation of China (Grants 61622403, 61621091)	en_US
dc.description.sponsorship	China. Equipment Pre-Research and Ministry of Education (Grant 6141A02022608)	en_US
dc.language.iso	en
dc.publisher	Institute of Electrical and Electronics Engineers (IEEE)	en_US
dc.relation.isversionof	10.1109/DAC.2018.8465850	en_US
dc.rights	Creative Commons Attribution-Noncommercial-Share Alike	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-sa/4.0/	en_US
dc.source	other univ website	en_US
dc.title	Long Live TIME: Improving Lifetime for Training-In-Memory Engines by Structured Gradient Sparsification	en_US
dc.type	Article	en_US
dc.identifier.citation	Cai, Yi et al. "Long Live TIME: Improving Lifetime for Training-In-Memory Engines by Structured Gradient Sparsification." Paper in the Proceedings of the 55th Annual Design Automation Conference, DAC '18, San Francisco, CA, June 24-29, 2018, ACM © 2018 The Author(s)	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science	en_US
dc.relation.journal	2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC)	en_US
dc.eprint.version	Author's final manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/ConferencePaper	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dc.date.updated	2020-12-17T15:42:28Z
dspace.orderedauthors	Cai, Y; Lin, Y; Xia, L; Chen, X; Han, S; Wang, Y; Yang, H	en_US
dspace.date.submission	2020-12-17T15:42:32Z
mit.journal.volume	2018	en_US
mit.license	OPEN_ACCESS_POLICY
mit.metadata.status	Complete

