dc.contributor.author | Suleiman, Amr AbdulZahir | |
dc.contributor.author | Chen, Yu-Hsin | |
dc.contributor.author | Emer, Joel S | |
dc.contributor.author | Sze, Vivienne | |
dc.date.accessioned | 2017-05-02T14:29:47Z | |
dc.date.available | 2017-05-02T14:29:47Z | |
dc.date.issued | 2017-05 | |
dc.identifier.other | Paper ID 2436 | |
dc.identifier.other | Topic 15.2 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/108570 | |
dc.description.abstract | Computer vision enables a wide range of applications in robotics/drones, self-driving cars, smart Internet of Things, and portable/wearable electronics. For many of these applications, local embedded processing is preferred due to privacy and/or latency concerns. Accordingly, energy-efficient embedded vision hardware delivering real-time and robust performance is crucial. While deep learning is gaining popularity in several computer vision algorithms, its energy consumption is significantly higher than that of traditional hand-crafted approaches. In this paper, we provide an in-depth analysis of the computation, energy, and accuracy trade-offs between learned features such as deep Convolutional Neural Networks (CNN) and hand-crafted features such as Histogram of Oriented Gradients (HOG). This analysis is supported by measurements from two chips that implement these algorithms. Our goal is to understand the source of the energy discrepancy between the two approaches and to provide insight into the areas where CNNs can be improved so that they eventually approach the energy efficiency of HOG while maintaining their outstanding accuracy. | en_US
dc.language.iso | en_US | |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
dc.relation.isversionof | https://www.epapers2.org/iscas2017/ESR/paper_details.php?paper_id=2436 | en_US
dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
dc.source | Sze | en_US |
dc.title | Towards Closing the Energy Gap Between HOG and CNN Features for Embedded Vision | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Suleiman, Amr, Yu-Hsin Chen, Joel Emer, and Vivienne Sze. "Towards Closing the Energy Gap Between HOG and CNN Features for Embedded Vision." In IEEE International Symposium on Circuits and Systems (ISCAS 2017), Baltimore, MD, USA, May 28-31, 2017. | en_US
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.contributor.approver | Sze, Vivienne | en_US |
dc.contributor.mitauthor | Suleiman, Amr AbdulZahir | |
dc.contributor.mitauthor | Chen, Yu-Hsin | |
dc.contributor.mitauthor | Emer, Joel S | |
dc.contributor.mitauthor | Sze, Vivienne | |
dc.relation.journal | IEEE International Symposium on Circuits and Systems, ISCAS 2017 | en_US |
dc.eprint.version | Author's final manuscript | en_US |
dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
dspace.orderedauthors | Suleiman, Amr; Chen, Yu-Hsin; Emer, Joel; Sze, Vivienne | en_US |
dspace.embargo.terms | N | en_US |
dc.identifier.orcid | https://orcid.org/0000-0002-0376-4220 | |
dc.identifier.orcid | https://orcid.org/0000-0002-4403-956X | |
dc.identifier.orcid | https://orcid.org/0000-0002-3459-5466 | |
dc.identifier.orcid | https://orcid.org/0000-0003-4841-3990 | |
mit.license | OPEN_ACCESS_POLICY | en_US |
mit.metadata.status | Complete | |