6.231 Dynamic Programming and Stochastic Control, Fall 2002
Author(s)
Bertsekas, Dimitri P.
Download: 6-231Fall-2002/OcwWeb/Electrical-Engineering-and-Computer-Science/6-231Dynamic-Programming-and-Stochastic-ControlFall2002/CourseHome/index.htm (15.35 KB)
Alternative title
Dynamic Programming and Stochastic Control
Metadata
Abstract
Sequential decision-making via dynamic programming. Unified approach to optimal control of stochastic dynamic systems and Markovian decision problems. Applications in linear-quadratic control, inventory control, and resource allocation models. Optimal decision making under perfect and imperfect state information. Certainty equivalent and open-loop feedback control, and self-tuning controllers. Infinite horizon problems, successive approximation, and policy iteration. Discounted problems, stochastic shortest path problems, and average cost problems. Optimal stopping, scheduling, and control of queues. Approximations and neuro-dynamic programming.

From the course home page: This course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). Approximation methods for problems involving large state spaces are also presented and discussed.
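As a concrete illustration of the successive-approximation (value iteration) method for discounted problems mentioned in the abstract, here is a minimal sketch. The two-state, two-action MDP, its transition matrices, stage rewards, and the discount factor are all invented for illustration and are not taken from the course materials.

```python
# Value iteration for a tiny discounted MDP (hypothetical example).
# Repeatedly applies the Bellman operator T until J converges to the
# optimal cost-to-go J*, which is guaranteed since T is a contraction
# for discount factor alpha < 1.
import numpy as np

# P[a][s, s'] = transition probability under action a;
# g[a][s]    = expected one-stage reward in state s under action a.
P = [np.array([[0.8, 0.2], [0.3, 0.7]]),   # action 0
     np.array([[0.5, 0.5], [0.9, 0.1]])]   # action 1
g = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
alpha = 0.9  # discount factor

def value_iteration(P, g, alpha, tol=1e-8):
    J = np.zeros(P[0].shape[0])
    while True:
        # Bellman operator: (TJ)(s) = max_a [ g_a(s) + alpha * sum_{s'} P_a(s,s') J(s') ]
        TJ = np.max([g[a] + alpha * P[a] @ J for a in range(len(P))], axis=0)
        if np.max(np.abs(TJ - J)) < tol:
            return TJ
        J = TJ

J_star = value_iteration(P, g, alpha)
# Greedy policy with respect to J*: the action attaining the max in each state.
policy = np.argmax([g[a] + alpha * P[a] @ J_star for a in range(len(P))], axis=0)
```

Policy iteration, also listed in the abstract, alternates the same greedy-policy step with an exact policy-evaluation step, and typically converges in fewer (but more expensive) iterations.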
Date issued
2002-12
Other identifiers
6.231-Fall2002
local: 6.231
local: IMSCP-MD5-e3207e9240f070692ace105c9aa57136
Keywords
dynamic programming, stochastic control, mathematics, optimization, algorithms, probability, Markov chains, optimal control, stochastic control theory