| dc.contributor.advisor | Nicholas Roy. | en_US |
| dc.contributor.author | Kollar, Thomas (Thomas Fleming) | en_US |
| dc.contributor.other | Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science. | en_US |
| dc.date.accessioned | 2008-02-27T22:44:15Z | |
| dc.date.available | 2008-02-27T22:44:15Z | |
| dc.date.copyright | 2007 | en_US |
| dc.date.issued | 2007 | en_US |
| dc.identifier.uri | http://hdl.handle.net/1721.1/40531 | |
| dc.description | Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. | en_US |
| dc.description | Includes bibliographical references (leaves 93-96). | en_US |
| dc.description.abstract | The mapping problem has received considerable attention in robotics recently. Mature techniques now allow practitioners to reliably and consistently generate 2-D and 3-D maps of objects, office buildings, city blocks and metropolitan areas with a comparatively small number of errors. Nevertheless, the ease of construction and quality of map are strongly dependent on the exploration strategy used to acquire sensor data. Most exploration strategies concentrate on selecting the next best measurement to take, trading off information gathering for regular relocalization. What has not been studied so far is the effect the robot controller has on the map quality. Certain kinds of robot motion (e.g., sharp turns) are hard to estimate correctly, and increase the likelihood of errors in the mapping process. We show how reinforcement learning can be used to generate better motion control. The learned policy will be shown to reduce the overall map uncertainty and squared error, while jointly reducing data-association errors. | en_US |
| dc.description.statementofresponsibility | by Thomas Kollar. | en_US |
| dc.format.extent | 96 leaves | en_US |
| dc.language.iso | eng | en_US |
| dc.publisher | Massachusetts Institute of Technology | en_US |
| dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
| dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | |
| dc.subject | Electrical Engineering and Computer Science. | en_US |
| dc.title | Optimizing robot trajectories using reinforcement learning | en_US |
| dc.type | Thesis | en_US |
| dc.description.degree | S.M. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
| dc.identifier.oclc | 191913909 | en_US |