Show simple item record

dc.contributor.advisor: Jonathan P. How. (en_US)
dc.contributor.author: Hasfura, Andrés Michael Levering (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. (en_US)
dc.date.accessioned: 2016-12-22T15:18:29Z
dc.date.available: 2016-12-22T15:18:29Z
dc.date.copyright: 2016 (en_US)
dc.date.issued: 2016 (en_US)
dc.identifier.uri: http://hdl.handle.net/1721.1/106011
dc.description: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. (en_US)
dc.description: This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. (en_US)
dc.description: Cataloged from student-submitted PDF version of thesis. (en_US)
dc.description: Includes bibliographical references (pages 69-72). (en_US)
dc.description.abstract: This thesis presents a pedestrian detection and tracking system to be used aboard mobility on demand systems. Mobility on demand is a transportation paradigm in which a fleet of vehicles is shared among a community, with rides provided upon request. The proposed system robustly gathers pedestrian paths in space using 2D LiDAR and monocular cameras mounted onboard a moving vehicle. These gathered pedestrian paths can later be used to infer traffic across the network and learn to anticipate the location of ride requests throughout a day. This allows mobility on demand systems to use resources more efficiently, saving money and time while providing a more favorable experience for customers. The onboard LiDAR is used to cluster and track objects through space using the Dynamic Means algorithm. Pedestrian detection is performed on images from the mounted cameras by extracting a combination of histogram of oriented gradients (HOG) and LUV color channel features, which are then classified by a set of learned decision trees. Temporal information is leveraged to achieve higher detection quality by accruing classification votes. Both a standard fusion technique and a novel extrinsic-calibration-error-resistant fusion method are tested to fuse camera and LiDAR information for pedestrian path collection. The novel error-resistant fusion system is shown to outperform standard fusion techniques both under normal conditions and when synthetic extrinsic calibration noise is added. System robustness and quality are demonstrated by experiments carried out in real-world environments, including the target environment, a university campus. (en_US)
dc.description.statementofresponsibility: by Andrés Michael Levering Hasfura. (en_US)
dc.format.extent: 72 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Electrical Engineering and Computer Science. (en_US)
dc.title: Pedestrian detection and tracking for mobility on demand (en_US)
dc.type: Thesis (en_US)
dc.description.degree: M. Eng. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.oclc: 965827876 (en_US)

