
dc.contributor.advisor	Vivienne Sze	en_US
dc.contributor.author	Suleiman, Amr S. (Amr AbdulZahir)	en_US
dc.contributor.other	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.	en_US
dc.date.accessioned	2018-09-17T14:52:09Z
dc.date.available	2018-09-17T14:52:09Z
dc.date.copyright	2018	en_US
dc.date.issued	2018	en_US
dc.identifier.uri	http://hdl.handle.net/1721.1/117847
dc.description	Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.	en_US
dc.description	This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.	en_US
dc.description	Cataloged from student-submitted PDF version of thesis.	en_US
dc.description	Includes bibliographical references (pages 143-149).	en_US
dc.description.abstract	Autonomy is becoming an increasingly desirable feature for very small nano/pico robots navigating cluttered and confined indoor environments such as collapsed buildings and caves. Robot perception (i.e., semantic and geometric understanding) is considered the computation bottleneck in autonomous navigation systems because of the high dimensionality of the problem. For example, multi-scale object detection is desired for robustness, which requires significant data expansion. Additionally, the 3D map grows over time as the robot explores the environment, which requires significant computation power and a large memory. In this thesis, we introduce ASIC solutions that enable real-time and low-power perception. First, the thesis demonstrates energy-efficient and high-throughput object detection accelerators for semantic understanding, which can process full HD (1920×1080, 60 fps) videos with an energy consumption between 0.36 and 1.74 nJ/pixel. On-the-fly processing, parallel architectures, and image pre-processing are used to reduce the overhead of multi-scale detection using rigid-body models. Detection accuracy can be doubled with deformable parts models, but this requires 35× more computation. To overcome this overhead, we exploit data compression, computation pruning, and basis projection for an overall 5× power reduction and a 3.6× smaller memory. Second, this thesis presents an algorithm and hardware co-design approach to enable real-time and energy-efficient localization and mapping for geometric understanding, using visual-inertial odometry (VIO). The chip (Navion) processes 752×480 stereo frames at up to 171 fps, with an energy consumption between 1.6 and 3.5 nJ/pixel. Parallelism, rescheduling, resource sharing, exploiting sparsity, and image compression are applied to overcome the high dimensionality of the problem, resulting in a 4.1× reduction in memory size and enabling full integration. Navion can adapt to different environments to maximize accuracy, throughput, and energy-efficiency trade-offs. To the best of our knowledge, this thesis presents the first fully integrated VIO system in an ASIC.	en_US
dc.description.statementofresponsibility	by Amr Suleiman.	en_US
dc.format.extent	149 pages	en_US
dc.language.iso	eng	en_US
dc.publisher	Massachusetts Institute of Technology	en_US
dc.rights	MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.	en_US
dc.rights.uri	http://dspace.mit.edu/handle/1721.1/7582	en_US
dc.subject	Electrical Engineering and Computer Science.	en_US
dc.title	Energy efficient accelerators for autonomous navigation in miniaturized robots	en_US
dc.type	Thesis	en_US
dc.description.degree	Ph. D.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc	1052124202	en_US
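The abstract quotes energy in nJ/pixel alongside frame sizes and frame rates. The short Python sketch below is a back-of-envelope conversion of those quoted figures into per-frame energy and average power; it is an illustration added to this record, not a calculation from the thesis, and it assumes the nJ/pixel values apply uniformly to every pixel of every frame at the stated frame rate (for Navion, it further assumes the figure already accounts for both images of a stereo pair).

def power_mw(width, height, fps, nj_per_pixel):
    """Average power in mW implied by an energy-per-pixel figure."""
    pixels_per_frame = width * height                 # pixels processed per frame
    energy_per_frame_nj = pixels_per_frame * nj_per_pixel
    return energy_per_frame_nj * fps * 1e-6           # nJ/s -> mW

# Object detection accelerators: full HD (1920x1080) at 60 fps, 0.36-1.74 nJ/pixel
print(power_mw(1920, 1080, 60, 0.36))   # ~44.8 mW
print(power_mw(1920, 1080, 60, 1.74))   # ~216.5 mW

# Navion VIO chip: 752x480 stereo frames at up to 171 fps, 1.6-3.5 nJ/pixel
print(power_mw(752, 480, 171, 1.6))     # ~98.8 mW
print(power_mw(752, 480, 171, 3.5))     # ~216.0 mW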

