DSpace@MIT, MIT Libraries


6.231 Dynamic Programming and Stochastic Control, Fall 2002

Author(s)
Bertsekas, Dimitri P.
Download: 6-231Fall-2002/OcwWeb/Electrical-Engineering-and-Computer-Science/6-231Dynamic-Programming-and-Stochastic-ControlFall2002/CourseHome/index.htm (15.35 KB)
Alternative title
Dynamic Programming and Stochastic Control
Terms of use
Usage Restrictions: This site (c) Massachusetts Institute of Technology 2003. Content within individual courses is (c) by the individual authors unless otherwise noted. The Massachusetts Institute of Technology is providing this Work (as defined below) under the terms of this Creative Commons public license ("CCPL" or "license"). The Work is protected by copyright and/or other applicable law. Any use of the work other than as authorized under this license is prohibited. By exercising any of the rights to the Work provided here, You (as defined below) accept and agree to be bound by the terms of this license. The Licensor, the Massachusetts Institute of Technology, grants You the rights contained here in consideration of Your acceptance of such terms and conditions.
Metadata
Show full item record
Abstract
Sequential decision-making via dynamic programming. Unified approach to optimal control of stochastic dynamic systems and Markovian decision problems. Applications in linear-quadratic control, inventory control, and resource allocation models. Optimal decision making under perfect and imperfect state information. Certainty equivalent and open-loop feedback control, and self-tuning controllers. Infinite horizon problems, successive approximation, and policy iteration. Discounted problems, stochastic shortest path problems, and average cost problems. Optimal stopping, scheduling, and control of queues. Approximations and neuro-dynamic programming.

From the course home page: This course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). Approximation methods for problems involving large state spaces are also presented and discussed.
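The successive approximation (value iteration) method listed among the topics above can be sketched briefly for a discounted problem. This is an illustrative example only, not course material: the two-state, two-action MDP below (transition matrices `P`, rewards `R`, discount `gamma`) uses hypothetical numbers chosen just to make the code runnable.

```python
import numpy as np

# Hypothetical 2-state, 2-action discounted MDP (illustrative numbers only).
# P[a][s, s'] = probability of moving from state s to s' under action a;
# R[a][s]     = expected immediate reward for taking action a in state s.
P = [np.array([[0.8, 0.2], [0.3, 0.7]]),
     np.array([[0.5, 0.5], [0.1, 0.9]])]
R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
gamma = 0.9  # discount factor in (0, 1)

def value_iteration(P, R, gamma, tol=1e-8):
    """Successive approximation: repeatedly apply the Bellman optimality
    operator until the value function stops changing (it is a contraction
    for gamma < 1, so this converges)."""
    n_states = P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        # Q[a, s] = R[a][s] + gamma * sum_{s'} P[a][s, s'] * V[s']
        Q = np.stack([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # optimal values, greedy policy
        V = V_new

V, policy = value_iteration(P, R, gamma)
```

At convergence `V` satisfies the Bellman optimality equation to within the tolerance, and `policy` is a greedy (hence optimal) stationary policy for this small example.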
Date issued
2002-12
URI
http://hdl.handle.net/1721.1/46352
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Other identifiers
6.231-Fall2002
local: 6.231
local: IMSCP-MD5-e3207e9240f070692ace105c9aa57136
Keywords
dynamic programming, stochastic control, mathematics, optimization, algorithms, probability, Markov chains, optimal control, stochastic control theory

Collections
  • MIT OCW Archived Courses
