Show simple item record

dc.contributor.author	Huynh, Vu Anh
dc.contributor.author	Karaman, Sertac
dc.contributor.author	Frazzoli, Emilio
dc.date.accessioned	2018-06-12T17:35:16Z
dc.date.available	2018-06-12T17:35:16Z
dc.date.issued	2016-02
dc.identifier.issn	0278-3649
dc.identifier.issn	1741-3176
dc.identifier.uri	http://hdl.handle.net/1721.1/116272
dc.description.abstract	In this paper, we consider a class of continuous-time, continuous-space stochastic optimal control problems. Using the Markov chain approximation method and recent advances in sampling-based algorithms for deterministic path planning, we propose a novel algorithm called the incremental Markov Decision Process to incrementally compute control policies that approximate arbitrarily well an optimal policy in terms of the expected cost. The main idea behind the algorithm is to generate a sequence of finite discretizations of the original problem through random sampling of the state space. At each iteration, the discretized problem is a Markov Decision Process that serves as an incrementally refined model of the original problem. We show that with probability one, (i) the sequence of the optimal value functions for each of the discretized problems converges uniformly to the optimal value function of the original stochastic optimal control problem, and (ii) the original optimal value function can be computed efficiently in an incremental manner using asynchronous value iterations. Thus, the proposed algorithm provides an anytime approach to the computation of optimal control policies of the continuous problem. The effectiveness of the proposed approach is demonstrated on motion planning and control problems in cluttered environments in the presence of process noise. Keywords: Stochastic optimal control, dynamical systems, randomized methods, robotics	en_US
dc.description.sponsorship	National Science Foundation (U.S.) (Grant CNS-1016213)	en_US
dc.description.sponsorship	Arthur & Linda Gelb Tr Charitable Foundation	en_US
dc.publisher	SAGE Publications	en_US
dc.relation.isversionof	http://dx.doi.org/10.1177/0278364915616866	en_US
dc.rights	Creative Commons Attribution-Noncommercial-Share Alike	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-sa/4.0/	en_US
dc.source	MIT Web Domain	en_US
dc.title	An incremental sampling-based algorithm for stochastic optimal control	en_US
dc.type	Article	en_US
dc.identifier.citation	Huynh, Vu Anh, et al. “An Incremental Sampling-Based Algorithm for Stochastic Optimal Control.” The International Journal of Robotics Research, vol. 35, no. 4, Apr. 2016, pp. 305–33.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Aeronautics and Astronautics	en_US
dc.contributor.department	Massachusetts Institute of Technology. Laboratory for Information and Decision Systems	en_US
dc.contributor.mitauthor	Huynh, Vu Anh
dc.contributor.mitauthor	Karaman, Sertac
dc.contributor.mitauthor	Frazzoli, Emilio
dc.relation.journal	The International Journal of Robotics Research	en_US
dc.eprint.version	Author's final manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dc.date.updated	2018-03-22T17:22:42Z
dspace.orderedauthors	Huynh, Vu Anh; Karaman, Sertac; Frazzoli, Emilio	en_US
dspace.embargo.terms	N	en_US
dc.identifier.orcid	https://orcid.org/0000-0002-2225-7275
dc.identifier.orcid	https://orcid.org/0000-0002-0505-1400
mit.license	OPEN_ACCESS_POLICY	en_US
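The abstract's main loop — sample a new state, refine the finite-MDP model, then run a few asynchronous Bellman sweeps — can be illustrated with a toy sketch. This is not the authors' iMDP algorithm: the 1-D state space, the two-action control set, the quadratic running cost, and the nearest-neighbor transition model below are all invented for demonstration.

```python
import random

def nearest(states, x):
    """Map a continuous point to its nearest sampled state."""
    return min(states, key=lambda s: abs(s - x))

def incremental_mdp_sketch(n_iters=200, n_sweeps=5, gamma=0.95, seed=0):
    rng = random.Random(seed)
    states = [0.0]           # incrementally grown discretization of [0, 1]
    value = {0.0: 0.0}       # value estimates on the sampled states
    actions = [-0.1, 0.1]    # toy control inputs (hypothetical)

    for _ in range(n_iters):
        # 1. Refine the model: sample a new state from the state space.
        s_new = rng.random()
        states.append(s_new)
        value[s_new] = 0.0

        # 2. Asynchronous value iteration: a few Bellman backups over
        #    the current finite-MDP approximation, reusing old values.
        for _ in range(n_sweeps):
            for s in states:
                best = float("inf")
                for a in actions:
                    nxt = nearest(states, min(max(s + a, 0.0), 1.0))
                    cost = (s - 0.5) ** 2      # toy running cost
                    best = min(best, cost + gamma * value[nxt])
                value[s] = best
    return value

vals = incremental_mdp_sketch()
```

Because each refinement step reuses the previous value estimates rather than re-solving from scratch, the loop can be stopped at any iteration and still return a usable policy, which is the "anytime" property the abstract describes.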

