dc.contributor.advisor | Seth Teller. | en_US |
dc.contributor.author | Landa, Yafim | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2014-03-06T15:41:57Z | |
dc.date.available | 2014-03-06T15:41:57Z | |
dc.date.copyright | 2013 | en_US |
dc.date.issued | 2013 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/85437 | |
dc.description | Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. | en_US |
dc.description | Cataloged from PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages 91-94). | en_US |
dc.description.abstract | We show how to exploit temporal and spatial coherence of image observations to achieve efficient and effective text detection and decoding for a sensor suite moving through an environment rich in text at a variety of scales and orientations with respect to the observer. We use simultaneous localization and mapping (SLAM) to isolate planar "tiles" representing scene surfaces and prioritize each tile according to its distance and obliquity with respect to the sensor, and how recently (if ever) and at what scale the tile has been inspected for text. We can also incorporate prior expectations about the spatial locus and scale at which text occurs in the world, for example more often on vertical surfaces than on non-vertical surfaces, and more often at shoulder height than at knee height. Finally, we can use SLAM-produced information about scene surfaces (e.g. standoff, orientation) and egomotion (e.g. yaw rate) to focus the system's text extraction efforts where they are likely to produce usable text rather than garbage. The technique enables text detection and decoding to run effectively at frame rate on the sensor's full surround, even though the CPU resources typically available on a mobile platform (robot, wearable, or handheld device) are not sufficient to run such methods on full images at sensor rates. Moreover, organizing detected text in a locally stable 3D frame enables combination of multiple noisy text observations into a single higher-confidence estimate of environmental text. | en_US |
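The abstract describes ranking SLAM-derived surface tiles by distance, obliquity, inspection recency and scale, and priors on where text tends to appear. The sketch below is not taken from the thesis; it is a minimal, hypothetical illustration of how such a per-tile priority score might be composed, with all field names, weights, and functional forms assumed for illustration only.

```python
# Illustrative sketch only (not the thesis implementation): one possible way to
# combine the cues listed in the abstract into a per-tile inspection priority.
# All weights, thresholds, and field names are assumptions.
from dataclasses import dataclass
import math
import time


@dataclass
class Tile:
    distance_m: float           # standoff from sensor to tile (from SLAM)
    obliquity_rad: float        # angle between view ray and tile normal
    last_inspected: float       # epoch seconds of last text pass, -inf if never
    last_scale_px_per_m: float  # image resolution achieved on that pass (0 if never)
    is_vertical: bool           # SLAM-estimated surface orientation
    height_m: float             # tile centroid height above ground


def priority(tile: Tile, now: float | None = None) -> float:
    """Higher score = inspect sooner. Hypothetical weighting of distance,
    obliquity, staleness, previously achieved scale, and priors on where
    text occurs (vertical surfaces, roughly shoulder height)."""
    now = time.time() if now is None else now
    # Nearby, frontal tiles are more likely to yield legible text.
    geometry = math.cos(tile.obliquity_rad) / (1.0 + tile.distance_m)
    # Tiles never inspected, or inspected long ago, rank higher.
    if tile.last_inspected == float("-inf"):
        staleness = 1.0
    else:
        staleness = min(1.0, (now - tile.last_inspected) / 60.0)
    # Tiles previously seen only at coarse scale also rank higher.
    scale_deficit = 1.0 / (1.0 + tile.last_scale_px_per_m)
    # Prior: text is more common on vertical surfaces near shoulder height (~1.5 m).
    prior = (1.0 if tile.is_vertical else 0.3) * math.exp(-((tile.height_m - 1.5) ** 2) / 2.0)
    return geometry * (0.5 * staleness + 0.5 * scale_deficit) * prior


# Example: a fresh, nearly frontal, shoulder-height vertical tile scores high.
t = Tile(distance_m=2.0, obliquity_rad=0.2, last_inspected=float("-inf"),
         last_scale_px_per_m=0.0, is_vertical=True, height_m=1.5)
print(f"priority = {priority(t):.3f}")
```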
dc.description.statementofresponsibility | by Yafim Landa. | en_US |
dc.format.extent | 94 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Prioritized text spotting using SLAM | en_US |
dc.title.alternative | Prioritized text spotting using simultaneous localization and mapping | en_US |
dc.type | Thesis | en_US |
dc.description.degree | M. Eng. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
dc.identifier.oclc | 870682992 | en_US |