Towards Closing the Energy Gap Between HOG and CNN Features for Embedded Vision
Author(s): Suleiman, Amr AbdulZahir; Chen, Yu-Hsin; Emer, Joel S.; Sze, Vivienne
Computer vision enables a wide range of applications in robotics/drones, self-driving cars, smart Internet of Things, and portable/wearable electronics. For many of these applications, local embedded processing is preferred due to privacy and/or latency concerns. Accordingly, energy-efficient embedded vision hardware delivering real-time and robust performance is crucial. While deep learning is gaining popularity in several computer vision algorithms, a significant energy-consumption gap exists compared to traditional hand-crafted approaches. In this paper, we provide an in-depth analysis of the computation, energy, and accuracy trade-offs between learned features, such as deep Convolutional Neural Networks (CNN), and hand-crafted features, such as Histogram of Oriented Gradients (HOG). This analysis is supported by measurements from two chips that implement these algorithms. Our goal is to understand the source of the energy discrepancy between the two approaches and to provide insight into where CNNs can be improved so that they eventually approach the energy efficiency of HOG while maintaining their outstanding accuracy.
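To make the computation gap behind the abstract's claim concrete, the sketch below counts multiply-accumulate (MAC) operations for HOG-style gradient extraction versus a single dense convolution layer. The image size, filter-tap counts, and the AlexNet-like layer shape are illustrative assumptions for a back-of-envelope estimate, not figures taken from the paper's chip measurements.

```python
# Back-of-envelope comparison of arithmetic workload (MACs) for
# hand-crafted HOG features vs. one CNN convolutional layer.
# All parameter choices are illustrative assumptions.

def hog_macs(h, w):
    """Rough MAC count for HOG on an h x w image: two 1-D [-1, 0, 1]
    gradient filters (~2 MACs/pixel) plus magnitude/orientation
    binning (~2 ops/pixel, a coarse estimate)."""
    gradients = 2 * h * w   # horizontal + vertical gradient taps
    binning = 2 * h * w     # magnitude + histogram accumulation
    return gradients + binning

def conv_macs(h_out, w_out, k, c_in, c_out):
    """Exact MAC count for one dense k x k convolution layer."""
    return h_out * w_out * k * k * c_in * c_out

hog = hog_macs(224, 224)               # HOG over a 224x224 image
conv = conv_macs(55, 55, 11, 3, 96)    # AlexNet-like first layer (assumed shape)

print(f"HOG  ~{hog / 1e6:.1f} M MACs")
print(f"conv ~{conv / 1e6:.1f} M MACs ({conv / hog:.0f}x more)")
```

Even this single early layer requires hundreds of times more arithmetic than the entire HOG pipeline, which is one driver (alongside data movement) of the energy discrepancy the paper analyzes.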
Department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
IEEE International Symposium on Circuits and Systems, ISCAS 2017
Institute of Electrical and Electronics Engineers (IEEE)
Suleiman, Amr, Yu-Hsin Chen, Joel Emer, and Vivienne Sze. "Towards Closing the Energy Gap Between HOG and CNN Features for Embedded Vision." In IEEE International Symposium on Circuits and Systems, ISCAS 2017, May 28-31, Baltimore, MD, USA.
Author's final manuscript
Paper ID 2436