dc.contributor.advisor: Nicholas Roy and Richard Madison. [en_US]
dc.contributor.author: Giraldez, Dember Alexander [en_US]
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science. [en_US]
dc.date.accessioned: 2013-02-14T15:32:26Z
dc.date.available: 2013-02-14T15:32:26Z
dc.date.copyright: 2011 [en_US]
dc.date.issued: 2011 [en_US]
dc.identifier.uri: http://hdl.handle.net/1721.1/76960
dc.description: Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. [en_US]
dc.description: Cataloged from PDF version of thesis. [en_US]
dc.description: Includes bibliographical references (p. 79-81). [en_US]
dc.description.abstract: Estimating a motion trajectory through an unknown environment from a monocular image sequence is one of the main challenges in Micro Air Vehicle (MAV) navigation. MAVs are becoming increasingly prevalent in both civilian and military operations. However, because they are smaller than traditional Unmanned Aerial Vehicles (UAVs), the computational power and payload that can be carried onboard are limited. While there is ample research on motion estimation for ground-based systems equipped with a variety of sensors and multiple cameras, a current challenge is deploying minimalistic systems suited specifically to MAVs. This thesis presents a novel approach to six-degree-of-freedom motion estimation using a monocular camera containing a Field-Programmable Gate Array (FPGA). Most implementations using a monocular camera onboard a MAV stream images to a ground station for processing. Here, the FPGA is instead programmed to perform feature extraction, so rather than sending raw images, only frame information, feature locations, and feature descriptors are transmitted. This onboard precomputation greatly reduces bandwidth usage and ground station processing. The objectives of this research are (1) to show how the raw computing power of an FPGA can be exploited in this application and (2) to evaluate the performance of such a system against a traditional monocular camera implementation. The underlying motivation is to bring MAV systems closer to complete autonomy, meaning that all computation needed for estimation and navigation is carried out onboard. [en_US]
dc.description.statementofresponsibility: by Dember Alexander Giraldez. [en_US]
dc.format.extent: 81 p. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Massachusetts Institute of Technology [en_US]
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. [en_US]
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 [en_US]
dc.subject: Electrical Engineering and Computer Science. [en_US]
dc.title: FPGA-aided MAV vision-based estimation [en_US]
dc.title.alternative: Field-Programmable-Gate-Array-aided Micro Air Vehicle vision-based estimation. [en_US]
dc.type: Thesis [en_US]
dc.description.degree: M.Eng. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 824730369 [en_US]
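
The abstract above argues that having the FPGA encode features onboard and transmitting only frame information, feature locations, and descriptors greatly reduces bandwidth compared with streaming raw images. The short Python sketch below makes that claim concrete with a back-of-envelope payload comparison; the image resolution, feature count, descriptor size, and header size are illustrative assumptions, not values taken from the thesis.

```python
# Back-of-envelope comparison of per-frame downlink payload:
# (a) streaming raw camera frames to the ground station, versus
# (b) transmitting only FPGA-extracted feature locations and descriptors.
# All constants below are illustrative assumptions, not values from the thesis.

RAW_WIDTH, RAW_HEIGHT = 640, 480   # assumed grayscale image resolution
BYTES_PER_PIXEL = 1                # 8-bit grayscale
N_FEATURES = 250                   # assumed number of features extracted per frame
BYTES_PER_LOCATION = 4             # two 16-bit pixel coordinates
BYTES_PER_DESCRIPTOR = 32          # e.g. a 256-bit binary descriptor
FRAME_HEADER_BYTES = 16            # frame id, timestamp, feature count


def raw_frame_bytes() -> int:
    """Payload if the full image is streamed off-board."""
    return RAW_WIDTH * RAW_HEIGHT * BYTES_PER_PIXEL


def feature_frame_bytes() -> int:
    """Payload if only the precomputed features are transmitted."""
    return FRAME_HEADER_BYTES + N_FEATURES * (BYTES_PER_LOCATION + BYTES_PER_DESCRIPTOR)


if __name__ == "__main__":
    raw, feat = raw_frame_bytes(), feature_frame_bytes()
    print(f"raw frame:      {raw:7d} bytes")    # 307200 bytes
    print(f"feature packet: {feat:7d} bytes")   # 9016 bytes
    print(f"reduction:      {raw / feat:.0f}x") # roughly 34x smaller
```

Under these assumed numbers the feature packet is roughly 34 times smaller than the raw frame, which illustrates (but does not reproduce) the bandwidth argument made in the abstract.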

