| dc.contributor.author | Han, Song | |
| dc.date.accessioned | 2021-01-25T18:12:06Z | |
| dc.date.available | 2021-01-25T18:12:06Z | |
| dc.date.issued | 2018-06 | |
| dc.identifier.isbn | 9781450357005 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/129549 | |
| dc.description.abstract | Deeper and larger Neural Networks (NNs) have made breakthroughs in many fields, but conventional CMOS-based computing platforms struggle to achieve higher energy efficiency. RRAM-based systems provide a promising solution for building efficient Training-In-Memory Engines (TIME). However, the endurance of RRAM cells is limited, which is a severe issue because the weights of an NN must be updated thousands to millions of times during training. Gradient sparsification can address this problem by dropping most of the smaller gradients, but it introduces unacceptable computation cost. We propose an effective framework, SGS-ARS, comprising Structured Gradient Sparsification (SGS) and an Aging-aware Row Swapping (ARS) scheme, to guarantee write balance across whole RRAM crossbars and prolong the lifetime of TIME. Our experiments demonstrate that a 356× lifetime extension is achieved when TIME is programmed to train ResNet-50 on the ImageNet dataset with our SGS-ARS framework. | en_US |
| dc.description.sponsorship | National Key R&D Program of China (Grant 2017YFA0207600) | en_US |
| dc.description.sponsorship | National Natural Science Foundation of China (Grants 61622403, 61621091) | en_US |
| dc.description.sponsorship | China Equipment Pre-Research and Ministry of Education (Grant 6141A02022608) | en_US |
| dc.language.iso | en | |
| dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
| dc.relation.isversionof | 10.1109/DAC.2018.8465850 | en_US |
| dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
| dc.source | other univ website | en_US |
| dc.title | Long Live TIME: Improving Lifetime for Training-In-Memory Engines by Structured Gradient Sparsification | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Cai, Yi et al. “Long Live TIME: Improving Lifetime for Training-In-Memory Engines by Structured Gradient Sparsification.” In Proceedings of the 55th Annual Design Automation Conference (DAC ’18), San Francisco, CA, June 24-29, 2018. ACM. © 2018 The Author(s) | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
| dc.relation.journal | 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC) | en_US |
| dc.eprint.version | Author's final manuscript | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dc.date.updated | 2020-12-17T15:42:28Z | |
| dspace.orderedauthors | Cai, Y; Lin, Y; Xia, L; Chen, X; Han, S; Wang, Y; Yang, H | en_US |
| dspace.date.submission | 2020-12-17T15:42:32Z | |
| mit.journal.volume | 2018 | en_US |
| mit.license | OPEN_ACCESS_POLICY | |
| mit.metadata.status | Complete | |