dc.contributor.advisor | Leslie Pack Kaelbling. | en_US |
dc.contributor.author | Shin, Jongu | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Dept. of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2011-05-09T15:17:21Z | |
dc.date.available | 2011-05-09T15:17:21Z | |
dc.date.copyright | 2010 | en_US |
dc.date.issued | 2010 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/62670 | |
dc.description | Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. | en_US |
dc.description | Cataloged from PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (p. 79). | en_US |
dc.description.abstract | Our goal is to construct a system that can determine a driver's preferences and goals, perform appropriate actions to help the driver achieve those goals, and improve the quality of his road behavior. Because the recommendation problem can be solved effectively once the driver's intention is known, this thesis addresses the problem of determining the driver's preferences. A supervised learning approach has already been applied to this problem. However, because that approach classifies one small interval at a time and is memoryless, it does not perform well for our goal. Instead, we need a new approach with the following characteristics: first, it should consider the entire stream of measurements; second, it should be robust to the environment; third, it should be able to distinguish various intentions. In this thesis, two approaches, Bayesian hypothesis testing and inverse reinforcement learning, are used to classify and estimate the user's preferences. Bayesian hypothesis testing classifies the driver as one of several driving types. Assuming that the probability distributions of the features (e.g., average, standard deviation) over a short period of measurement differ among the driving types, Bayesian hypothesis testing classifies the driver by maintaining a belief distribution over the driving types and updating it online as more measurements become available. Inverse reinforcement learning, on the other hand, estimates the user's preferences as a linear combination of driving types. This approach assumes that the driver maximizes a reward function while driving, and that his reward function is a linear combination of raw or expert features. From the observed trajectories of representative drivers, apprenticeship learning first computes the reward function of each driving type in terms of raw features; these reward functions then serve as expert features. Afterward, given the observed trajectories of a new driver, the same algorithm computes his reward function, in terms of expert features rather than raw features, and thereby estimates the preferences of any driver in a space of driving types. | en_US |
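To make the Bayesian hypothesis testing step concrete, below is a minimal sketch of the online belief update over driving types, assuming Gaussian likelihoods for a windowed feature. The type names, feature choice, and distribution parameters are illustrative assumptions, not values from the thesis.

```python
# Sketch of online Bayesian hypothesis testing over driving types,
# assuming Gaussian feature likelihoods. All parameters are placeholders.
import numpy as np
from scipy.stats import norm

# Hypothetical driving types with assumed (mean, std) of a short-window
# feature, e.g., average speed over a measurement interval.
TYPES = {
    "calm":       (50.0, 5.0),
    "moderate":   (65.0, 8.0),
    "aggressive": (80.0, 12.0),
}

def update_belief(belief, feature):
    """One Bayes update: P(type | feature) is proportional to
    P(feature | type) * P(type)."""
    posterior = {
        t: belief[t] * norm.pdf(feature, loc=mu, scale=sigma)
        for t, (mu, sigma) in TYPES.items()
    }
    total = sum(posterior.values())
    return {t: p / total for t, p in posterior.items()}

# Start from a uniform prior and fold in window features as they arrive.
belief = {t: 1.0 / len(TYPES) for t in TYPES}
for feature in [72.0, 78.0, 81.0]:   # synthetic windowed measurements
    belief = update_belief(belief, feature)
print(max(belief, key=belief.get))   # most probable driving type so far
```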
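Similarly, the expert-feature step of the inverse reinforcement learning approach can be sketched as follows, with placeholder numbers: reward weights of representative driving types (learned on raw features by apprenticeship learning) form a basis, and a new driver's preferences are the mixing weights over that basis. All values and names here are illustrative assumptions, not the thesis's data or implementation.

```python
# Sketch of estimating a driver's preferences as a linear combination of
# expert reward functions. All numeric values are placeholders.
import numpy as np

# Stage 1 result (assumed): raw-feature reward weights, one row per
# representative driving type, e.g., over (speed, acceleration) features.
W_experts = np.array([[1.0, 0.1],    # calm
                      [0.5, 0.8],    # aggressive
                      [0.8, 0.4]])   # moderate

# Stage 2: the same apprenticeship-learning algorithm, run on the new
# driver's observed trajectories, yields his raw-feature reward weights
# (placeholder values here).
w_new = np.array([0.7, 0.5])

# Express w_new as a linear combination of the expert reward functions;
# the least-squares coefficients locate the driver in the space of types.
alpha, *_ = np.linalg.lstsq(W_experts.T, w_new, rcond=None)
print(alpha)   # estimated preference weights over the driving types
```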
dc.description.statementofresponsibility | by Jongu Shin. | en_US |
dc.format.extent | 79 p. | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Modeling users' powertrain preferences | en_US |
dc.type | Thesis | en_US |
dc.description.degree | M.Eng. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
dc.identifier.oclc | 714250456 | en_US |