Show simple item record

dc.contributor.advisor: Nicholas Roy and Milton B. Adams.
dc.contributor.author: Vuong, Hon Fai, 1975-
dc.contributor.other: Massachusetts Institute of Technology. Dept. of Aeronautics and Astronautics.
dc.date.accessioned: 2009-01-23T14:56:18Z
dc.date.available: 2009-01-23T14:56:18Z
dc.date.copyright: 2007
dc.date.issued: 2007
dc.identifier.uri: http://dspace.mit.edu/handle/1721.1/42173
dc.identifier.uri: http://hdl.handle.net/1721.1/42173
dc.description: Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2007.
dc.description: Includes bibliographical references (p. 171-175).
dc.description.abstract: Creating agents that behave rationally in the real world is one goal of Artificial Intelligence. A rational agent is one that takes, at each point in time, the optimal action, i.e. the action that maximizes its expected utility. However, to determine the optimal action the agent may need to engage in lengthy deliberations or computations, and the cost of this computation is generally not considered explicitly during deliberation. In practice, spending too much time in deliberation may yield high-quality plans that fail to satisfy the natural timing constraints of a problem, making them effectively useless, while enforcing shortened deliberation times may yield timely plans of diminished utility. These two cases suggest the possibility of optimizing an agent's deliberation process. This thesis proposes a framework for generating metalevel controllers that select computational actions by optimally trading off their benefit against their cost. The metalevel optimization problem is posed within a Markov Decision Process framework and is solved offline to determine a policy for carrying out computations. Once the optimal policy is determined, it serves as an efficient online metalevel controller that selects computational actions conditioned on the current state of computation. Solving for the exact policy of the metalevel optimization problem becomes computationally intractable as problem size grows, so a learning approach that takes advantage of the problem structure is proposed to generate approximate policies, which are shown to perform well relative to optimal policies. Metalevel policies are generated for two types of problem scenarios, distinguished by how the cost of computation is represented: in the first, the cost of computation is defined explicitly as part of the problem description; in the second, it is implicit in the timing constraints of the problem. Results are presented to validate the benefit of metalevel planning over traditional methods when the cost of computation has a significant effect on the utility of a plan.
dc.description.statementofresponsibility: by Hon Fai Vuong.
dc.format.extent: 175 p.
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: M.I.T. theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. See provided URL for inquiries about permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/42173
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Aeronautics and Astronautics.
dc.title: Learning bounded optimal behavior using Markov decision processes
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
dc.identifier.oclc: 228861303
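
The abstract above describes posing the metalevel optimization problem as a Markov Decision Process that is solved offline for a policy trading off the benefit of further computation against its cost. The sketch below is a minimal, illustrative toy of that general idea, not the thesis's actual formulation: the state space, the diminishing-returns utility curve, the cost constant, and all function names are assumptions invented for illustration.

# Illustrative sketch only: a toy metalevel MDP where an agent chooses
# between continuing to compute (paying an explicit cost while its plan
# improves) and stopping to act (collecting the current plan's utility).
# All quantities here are made-up assumptions, not the thesis's model.

MAX_STEPS = 10        # assumed finite computation budget
COMPUTE_COST = 0.05   # assumed explicit cost per computation step
GAMMA = 1.0           # undiscounted, episodic problem

def plan_utility(steps: int) -> float:
    """Assumed diminishing-returns plan quality after `steps` of
    computation (a stand-in for a real anytime-algorithm profile)."""
    return 1.0 - 0.9 ** steps

def value_iteration(tol: float = 1e-9):
    # V[s] = value of computation state s under optimal stopping.
    V = [0.0] * (MAX_STEPS + 1)
    policy = ["act"] * (MAX_STEPS + 1)
    while True:
        delta = 0.0
        for s in range(MAX_STEPS + 1):
            q_act = plan_utility(s)  # terminate and use current plan
            if s < MAX_STEPS:
                # pay the cost, deterministically reach state s + 1
                q_compute = -COMPUTE_COST + GAMMA * V[s + 1]
            else:
                q_compute = float("-inf")  # budget exhausted
            best = max(q_act, q_compute)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
            policy[s] = "act" if q_act >= q_compute else "compute"
        if delta < tol:
            return V, policy

if __name__ == "__main__":
    V, policy = value_iteration()
    for s, (v, a) in enumerate(zip(V, policy)):
        print(f"state {s}: value={v:.3f}, action={a}")

Running this prints, for each computation state, the offline-computed value and the action the resulting policy would take online: "compute" while the marginal utility gain exceeds the per-step cost, then "act". This mirrors, in miniature, the abstract's first scenario, where the cost of computation is given explicitly in the problem description.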

