Show simple item record

dc.contributor.author: Suleiman, Amr AbdulZahir
dc.contributor.author: Chen, Yu-Hsin
dc.contributor.author: Emer, Joel S
dc.contributor.author: Sze, Vivienne
dc.date.accessioned: 2017-05-02T14:29:47Z
dc.date.available: 2017-05-02T14:29:47Z
dc.date.issued: 2017-05
dc.identifier.other: Paper ID 2436
dc.identifier.other: Topic 15.2
dc.identifier.uri: http://hdl.handle.net/1721.1/108570
dc.description.abstract: Computer vision enables a wide range of applications in robotics/drones, self-driving cars, smart Internet of Things, and portable/wearable electronics. For many of these applications, local embedded processing is preferred due to privacy and/or latency concerns. Accordingly, energy-efficient embedded vision hardware delivering real-time and robust performance is crucial. While deep learning is gaining popularity in several computer vision algorithms, it consumes significantly more energy than traditional hand-crafted approaches. In this paper, we provide an in-depth analysis of the computation, energy, and accuracy trade-offs between learned features, such as deep Convolutional Neural Networks (CNN), and hand-crafted features, such as the Histogram of Oriented Gradients (HOG). This analysis is supported by measurements from two chips that implement these algorithms. Our goal is to understand the source of the energy discrepancy between the two approaches and to provide insight into where CNNs can be improved to eventually approach the energy efficiency of HOG while maintaining outstanding accuracy. [en_US]
dc.language.iso: en_US
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.relation.isversionof: https://www.epapers2.org/iscas2017/ESR/paper_details.php?PHPSESSID=9ca8ee7f28db7d29c53b9192333d545b&paper_id=2436 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: Sze [en_US]
dc.title: Towards Closing the Energy Gap Between HOG and CNN Features for Embedded Vision [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Suleiman, Amr, Yu-Hsin Chen, Joel Emer, and Vivienne Sze. "Towards Closing the Energy Gap Between HOG and CNN Features for Embedded Vision." In IEEE International Symposium on Circuits and Systems, ISCAS 2017, May 28-31, Baltimore, MD, USA. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.contributor.approver: Sze, Vivienne [en_US]
dc.contributor.mitauthor: Suleiman, Amr AbdulZahir
dc.contributor.mitauthor: Chen, Yu-Hsin
dc.contributor.mitauthor: Emer, Joel S
dc.contributor.mitauthor: Sze, Vivienne
dc.relation.journal: IEEE International Symposium on Circuits and Systems, ISCAS 2017 [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dspace.orderedauthors: Suleiman, Amr; Chen, Yu-Hsin; Emer, Joel; Sze, Vivienne [en_US]
dspace.embargo.terms: N [en_US]
dc.identifier.orcid: https://orcid.org/0000-0002-0376-4220
dc.identifier.orcid: https://orcid.org/0000-0002-4403-956X
dc.identifier.orcid: https://orcid.org/0000-0002-3459-5466
dc.identifier.orcid: https://orcid.org/0000-0003-4841-3990
mit.license: OPEN_ACCESS_POLICY [en_US]
mit.metadata.status: Complete
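
The abstract above contrasts the computation and energy cost of learned CNN features with hand-crafted HOG features. As a rough illustration of where that computation gap comes from (these are not measurements or figures from the paper), the Python sketch below counts approximate arithmetic operations for HOG feature extraction on one image against the multiply-accumulate operations (MACs) of a single convolution layer. The image size, channel counts, kernel size, and per-step operation estimates are all illustrative assumptions.

# Back-of-envelope operation counts; all parameters are illustrative assumptions,
# not numbers reported in the paper.

def hog_ops(width, height, cell=8, bins=9, block=2):
    """Approximate arithmetic ops for HOG feature extraction on one image."""
    pixels = width * height
    gradient = 4 * pixels        # horizontal/vertical pixel differences (assumed cost)
    mag_orient = 6 * pixels      # gradient magnitude + orientation estimate (assumed cost)
    binning = 2 * pixels         # vote into per-cell orientation histograms (assumed cost)
    cells_x, cells_y = width // cell, height // cell
    blocks = max(cells_x - block + 1, 0) * max(cells_y - block + 1, 0)
    block_norm = 2 * block * block * bins * blocks  # per-block histogram normalization
    return gradient + mag_orient + binning + block_norm

def conv_layer_macs(width, height, c_in, c_out, k=3):
    """MACs for one stride-1, same-padded k x k convolution layer."""
    return width * height * c_in * c_out * k * k

if __name__ == "__main__":
    W, H = 640, 480  # assumed VGA input
    print(f"HOG (approx.)      : {hog_ops(W, H):.2e} ops")   # ~4e6 ops
    print(f"One 3x3 conv layer : {conv_layer_macs(W, H, 64, 64):.2e} MACs")  # ~1e10 MACs

Even under these rough assumptions, a single 64-to-64-channel convolution layer requires several orders of magnitude more arithmetic per frame than the full HOG pipeline, which is consistent with the computation and energy gap the paper sets out to analyze.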

