dc.contributor.author | Bertsekas, Dimitri | en_US |
dc.coverage.temporal | Fall 2008 | en_US |
dc.date.issued | 2008-12 | |
dc.identifier | 6.231-Fall2008 | |
dc.identifier | local: 6.231 | |
dc.identifier | local: IMSCP-MD5-5c25e9035021832542e5f35f56b312cc | |
dc.identifier.uri | http://hdl.handle.net/1721.1/75813 | |
dc.description.abstract | This course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will consider optimal control of a dynamical system over both a finite and an infinite number of stages (finite and infinite horizon). We will also discuss some approximation methods for problems involving large state spaces. Applications of dynamic programming in a variety of fields will be covered in recitations. | en_US |
dc.language | en-US | en_US |
dc.rights.uri | Usage Restrictions: This site (c) Massachusetts Institute of Technology 2012. Content within individual courses is (c) by the individual authors unless otherwise noted. The Massachusetts Institute of Technology is providing this Work (as defined below) under the terms of this Creative Commons public license ("CCPL" or "license") unless otherwise noted. The Work is protected by copyright and/or other applicable law. Any use of the work other than as authorized under this license is prohibited. By exercising any of the rights to the Work provided here, You (as defined below) accept and agree to be bound by the terms of this license. The Licensor, the Massachusetts Institute of Technology, grants You the rights contained here in consideration of Your acceptance of such terms and conditions. | en_US |
dc.subject | dynamic programming | en_US |
dc.subject | stochastic control | en_US |
dc.subject | decision making | en_US |
dc.subject | uncertainty | en_US |
dc.subject | sequential decision making | en_US |
dc.subject | finite horizon | en_US |
dc.subject | infinite horizon | en_US |
dc.subject | approximation methods | en_US |
dc.subject | state space | en_US |
dc.subject | large state space | en_US |
dc.subject | optimal control | en_US |
dc.subject | dynamical system | en_US |
dc.subject | dynamic programming and optimal control | en_US |
dc.subject | deterministic systems | en_US |
dc.subject | shortest path | en_US |
dc.subject | state information | en_US |
dc.subject | rollout | en_US |
dc.subject | stochastic shortest path | en_US |
dc.subject | approximate dynamic programming | en_US |
dc.title | 6.231 Dynamic Programming and Stochastic Control, Fall 2008 | en_US |
dc.title.alternative | Dynamic Programming and Stochastic Control | en_US |
dc.type | Learning Object | |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |