Show simple item record

dc.contributor.author: Gorodetsky, Alex Arkady
dc.contributor.author: Karaman, Sertac
dc.contributor.author: Marzouk, Youssef M
dc.date.accessioned: 2019-02-11T16:39:16Z
dc.date.available: 2019-02-11T16:39:16Z
dc.date.issued: 2018-02
dc.date.submitted: 2018-03
dc.identifier.issn: 0278-3649
dc.identifier.issn: 1741-3176
dc.identifier.uri: http://hdl.handle.net/1721.1/120322
dc.description.abstract: Motion planning and control problems are embedded and essential in almost all robotics applications. These problems are often formulated as stochastic optimal control problems and solved using dynamic programming algorithms. Unfortunately, most existing algorithms that guarantee convergence to optimal solutions suffer from the curse of dimensionality: the run time of the algorithm grows exponentially with the dimension of the state space of the system. We propose novel dynamic programming algorithms that alleviate the curse of dimensionality in problems that exhibit certain low-rank structure. The proposed algorithms are based on continuous tensor decompositions recently developed by the authors. Essentially, the algorithms represent high-dimensional functions (e.g. the value function) in a compressed format, and directly perform dynamic programming computations (e.g. value iteration, policy iteration) in this format. Under certain technical assumptions, the new algorithms guarantee convergence towards optimal solutions with arbitrary precision. Furthermore, the run times of the new algorithms scale polynomially with the state dimension and polynomially with the ranks of the value function. This approach realizes substantial computational savings in “compressible” problem instances, where value functions admit low-rank approximations. We demonstrate the new algorithms on a wide range of problems, including a simulated six-dimensional agile quadcopter maneuvering example and a seven-dimensional aircraft perching example. In some of these examples, we estimate computational savings of up to 10 orders of magnitude over standard value iteration algorithms. We further demonstrate the algorithms running in real time on board a quadcopter during a flight experiment under motion capture. Keywords: Stochastic optimal control; motion planning; dynamic programming; tensor decompositions. (A minimal sketch of the standard value iteration baseline appears after this record.) [en_US]
dc.description.sponsorship: National Science Foundation (U.S.) (Grant IIS-1452019) [en_US]
dc.description.sponsorship: United States. Department of Energy (Award DE-SC0007099) [en_US]
dc.publisher: SAGE Publications [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1177/0278364917753994 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: arXiv [en_US]
dc.title: High-dimensional stochastic optimal control using continuous tensor decompositions [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Gorodetsky, Alex et al. “High-Dimensional Stochastic Optimal Control Using Continuous Tensor Decompositions.” The International Journal of Robotics Research 37, 2–3 (February 2018): 340–377. © 2018 The Author(s) [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Mechanical Engineering [en_US]
dc.contributor.mitauthor: Gorodetsky, Alex Arkady
dc.contributor.mitauthor: Karaman, Sertac
dc.contributor.mitauthor: Marzouk, Youssef M
dc.relation.journal: International Journal of Robotics Research [en_US]
dc.eprint.version: Author's final manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/PeerReviewed [en_US]
dc.date.updated: 2019-02-01T14:28:32Z
dspace.orderedauthors: Gorodetsky, Alex; Karaman, Sertac; Marzouk, Youssef [en_US]
dspace.embargo.terms: N [en_US]
dc.identifier.orcid: https://orcid.org/0000-0003-3152-8206
dc.identifier.orcid: https://orcid.org/0000-0002-2225-7275
dc.identifier.orcid: https://orcid.org/0000-0001-8242-3290
mit.license: OPEN_ACCESS_POLICY [en_US]
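
The abstract above describes performing dynamic programming (e.g. value iteration) directly on a value function stored in a compressed tensor format. For context, the sketch below shows standard dense value iteration on a fully discretized state space, the baseline whose cost grows exponentially with state dimension and against which the paper reports its savings. This is not the authors' implementation; all names (P, stage_cost, gamma, etc.) are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch of standard (dense) value iteration on a discretized state space.
# NOT the compressed-tensor algorithm from the paper: with n grid points per axis
# in d dimensions, n_states = n**d, which is the curse of dimensionality the
# low-rank approach described in the abstract targets.

def value_iteration(P, stage_cost, gamma=0.95, tol=1e-8, max_iters=10_000):
    """Dense value iteration.

    P[a]          -- (n_states, n_states) transition matrix under action a
    stage_cost[a] -- (n_states,) expected stage cost under action a
    Returns the (approximately) converged value function V and a greedy policy.
    """
    n_actions = len(P)
    n_states = stage_cost[0].shape[0]
    V = np.zeros(n_states)
    for _ in range(max_iters):
        # Bellman backup: Q[a, s] = c(s, a) + gamma * E[ V(s') | s, a ]
        Q = np.stack([stage_cost[a] + gamma * P[a] @ V for a in range(n_actions)])
        V_new = Q.min(axis=0)                    # minimize cost-to-go over actions
        if np.max(np.abs(V_new - V)) < tol:      # sup-norm convergence check
            V = V_new
            break
        V = V_new
    # Greedy policy with respect to the final value function.
    Q = np.stack([stage_cost[a] + gamma * P[a] @ V for a in range(n_actions)])
    return V, Q.argmin(axis=0)
```

Storing V alone for a six-dimensional grid with 100 points per axis already requires 10^12 entries, which is why the compressed (low-rank) representation described in the abstract is what makes examples like the six-dimensional quadcopter problem tractable.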

