Long Live TIME: Improving Lifetime for Training-In-Memory Engines by Structured Gradient Sparsification
Deeper and larger Neural Networks (NNs) have made breakthroughs in many fields, but conventional CMOS-based computing platforms struggle to achieve higher energy efficiency. RRAM-based systems provide a promising solution for building efficient Training-In-Memory Engines (TIME). However, the limited endurance of RRAM cells is a severe issue, since the weights of an NN must be updated thousands to millions of times during training. Gradient sparsification can mitigate this problem by dropping most of the smaller gradients, but it introduces unacceptable computation cost. We propose an effective framework, SGS-ARS, comprising a Structured Gradient Sparsification (SGS) scheme and an Aging-aware Row Swapping (ARS) scheme, to guarantee write balance across whole RRAM crossbars and prolong the lifetime of TIME. Our experiments demonstrate that a 356× lifetime extension is achieved when TIME is programmed to train ResNet-50 on the ImageNet dataset with our SGS-ARS framework.
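As a minimal sketch of the gradient sparsification idea the abstract refers to (plain magnitude-based top-k selection, not the paper's structured SGS scheme; the function name and `keep_ratio` parameter are illustrative assumptions):

```python
import numpy as np

def sparsify_gradient(grad, keep_ratio=0.01):
    """Keep only the largest-magnitude entries of a gradient tensor.

    Hypothetical illustration of generic gradient sparsification:
    most small gradients are dropped (set to zero), so far fewer
    RRAM cells are written per update step.
    """
    flat = grad.ravel()
    k = max(1, int(flat.size * keep_ratio))
    # Indices of the k largest-magnitude gradient entries.
    top = np.argpartition(np.abs(flat), -k)[-k:]
    mask = np.zeros(flat.size, dtype=bool)
    mask[top] = True
    # Zero out everything except the top-k entries.
    return np.where(mask, flat, 0.0).reshape(grad.shape)

g = np.array([[0.5, -0.01], [0.002, -2.0]])
sparse_g = sparsify_gradient(g, keep_ratio=0.5)  # keeps 0.5 and -2.0
```

Naive top-k like this requires sorting or partitioning the full gradient, which is the computation cost the paper's structured variant is designed to avoid.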
DepartmentMassachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC)
Institute of Electrical and Electronics Engineers (IEEE)
Cai, Yi et al. "Long Live TIME: Improving Lifetime for Training-In-Memory Engines by Structured Gradient Sparsification." In Proceedings of the 55th Annual Design Automation Conference (DAC '18), San Francisco, CA, June 24-29, 2018. ACM. © 2018 The Author(s)
Author's final manuscript