Accelerated CNN Training through Gradient Approximation
Author(s)
Wang, Ziheng; Nelaturu, Sree Harsha; Amarasinghe, Saman P
Open Access Policy
Creative Commons Attribution-NonCommercial-ShareAlike
Terms of use
Abstract
© 2019 IEEE. Training deep convolutional neural networks such as VGG and ResNet by gradient descent is an expensive exercise that requires specialized hardware such as GPUs. Recent work has examined the possibility of approximating the gradient computation while maintaining the same convergence properties. While promising, these approximations work only on relatively small datasets such as MNIST, and they fail to achieve real wall-clock speedups due to the lack of efficient GPU implementations of the proposed approximation methods. In this work, we explore three alternative methods for approximating gradients, with an efficient GPU kernel implementation for one of them. We achieve wall-clock speedups of over 7 percent with ResNet-20 and VGG-19 on the CIFAR-10 dataset, with minimal loss in validation accuracy.
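The abstract does not specify the three approximation methods, and the sketch below does not reproduce them. Purely as a hedged illustration of the general idea of trading gradient fidelity for training speed, it approximates a linear layer's weight gradient by subsampling the batch during the backward pass. It assumes PyTorch; the class name SampledGradLinear, the sample_frac parameter, and the subsampling scheme itself are illustrative assumptions, not the authors' method.

    # Illustrative sketch only -- NOT the approximation methods from the paper.
    import torch

    class SampledGradLinear(torch.autograd.Function):
        """Exact forward pass; approximate weight gradient in backward."""

        @staticmethod
        def forward(ctx, x, weight, sample_frac=0.25):
            ctx.save_for_backward(x, weight)
            ctx.sample_frac = sample_frac
            return x @ weight.t()

        @staticmethod
        def backward(ctx, grad_out):
            x, weight = ctx.saved_tensors
            n = x.shape[0]
            k = max(1, int(n * ctx.sample_frac))
            idx = torch.randperm(n, device=x.device)[:k]
            # Estimate grad_weight from k of the n examples, rescaled by
            # n / k to keep it unbiased; grad_input stays exact.
            grad_weight = (grad_out[idx].t() @ x[idx]) * (n / k)
            grad_input = grad_out @ weight
            return grad_input, grad_weight, None  # None: no grad for sample_frac

    # Usage sketch: stands in for F.linear inside a model's forward().
    x = torch.randn(64, 128, requires_grad=True)
    w = torch.randn(10, 128, requires_grad=True)
    SampledGradLinear.apply(x, w).sum().backward()
    print(w.grad.shape)  # torch.Size([10, 128])

The forward pass stays exact, so only the weight update is approximated; whether such an estimator preserves convergence, as the paper requires of its methods, depends on the scheme and the dataset.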
Date issued
2019
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
Proceedings - 2019 2nd Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications, EMC2 2019
Publisher
IEEE