Show simple item record

dc.contributor.author: Reist, Philipp
dc.contributor.author: Preiswerk, Pascal
dc.contributor.author: Tedrake, Russell L
dc.date.accessioned: 2020-03-26T13:56:13Z
dc.date.available: 2020-03-26T13:56:13Z
dc.date.issued: 2016-07-11
dc.identifier.issn: 0278-3649
dc.identifier.issn: 1741-3176
dc.identifier.uri: https://hdl.handle.net/1721.1/124352
dc.description.abstract: The paper presents the simulation-based variant of the LQR-tree feedback-motion-planning approach. The algorithm generates a control policy that stabilizes a nonlinear dynamic system from a bounded set of initial conditions to a goal. This policy is represented by a tree of feedback-stabilized trajectories. The algorithm explores the bounded set with random state samples and, where needed, adds new trajectories to the tree using motion planning. Simultaneously, the algorithm approximates the funnel of each trajectory, which is the set of states that can be stabilized to the goal by the trajectory's feedback policy. Generating a control policy that stabilizes the bounded set to the goal is therefore equivalent to adding trajectories to the tree until their funnels cover the set. In previous work, funnels were approximated with sums-of-squares verification. Here, funnels are approximated by sampling and falsification by simulation, which allows application to a broader range of systems and straightforward enforcement of input and state constraints. A theoretical analysis shows that, in the long run, the algorithm tends to improve both the coverage of the bounded set and the funnel approximations. Focusing on the practical application of the method, a detailed example implementation is given and used to generate policies for two example systems. Simulation results support the theoretical findings, while experiments demonstrate the algorithm's state-constraint capability and its applicability to highly dynamic systems. Keywords: feedback motion planning; random sampling; feedback policy; nonlinear dynamic system; trajectory library [en_US]
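The core idea in the abstract — shrinking a funnel estimate whenever a sampled state inside it is falsified by a closed-loop simulation — can be sketched in a few lines. This is not the paper's implementation: the pendulum model, the hand-tuned gain `K`, the quadratic "cost" `J`, the input saturation, and all numerical constants below are illustrative assumptions.

```python
import numpy as np

DT, T = 0.01, 15.0          # Euler step and simulation horizon (assumed values)
K = np.array([3.0, 3.0])    # hand-tuned stabilizing feedback gain (assumed)
U_MAX = 1.0                 # input constraint, enforced here by saturation

def step(x, u):
    """One Euler step of a pendulum about the upright: x = [theta, theta_dot]."""
    theta, dtheta = x
    return x + DT * np.array([dtheta, np.sin(theta) - 0.1 * dtheta + u])

def simulate(x0):
    """Closed-loop rollout; True if the state is driven to the goal (origin)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / DT)):
        u = np.clip(-K @ x, -U_MAX, U_MAX)   # saturated feedback policy
        x = step(x, u)
    return bool(np.linalg.norm(x) < 1e-2)

def approximate_funnel(rho=4.0, n_samples=200, seed=0):
    """Funnel level-set estimate: shrink rho on every falsified sample."""
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        x0 = rng.uniform(-2.0, 2.0, size=2)  # random state sample in a bounded set
        J = float(x0 @ x0)                   # illustrative quadratic 'cost-to-go'
        if J > rho:
            continue                         # sample lies outside the current funnel
        if not simulate(x0):
            rho = J                          # falsified by simulation: tighten rho
    return rho
```

Because verification is a plain rollout, input saturation (or any state constraint check inside `simulate`) is enforced directly, which is the flexibility the abstract contrasts with sums-of-squares verification.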
dc.description.sponsorship: ETH (Research Grant ETH-31 11-1) [en_US]
dc.language.iso: en
dc.publisher: SAGE Publications [en_US]
dc.relation.isversionof: http://dx.doi.org/10.1177/0278364916647192 [en_US]
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike [en_US]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/ [en_US]
dc.source: MIT web domain [en_US]
dc.title: Feedback-motion-planning with simulation-based LQR-trees [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Reist, Philipp et al. "Feedback-motion-planning with simulation-based LQR-trees." International Journal of Robotics Research 35, 11 (July 2016): 1393-1416. © 2016 The Author(s). [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory [en_US]
dc.relation.journal: International Journal of Robotics Research [en_US]
dc.eprint.version: Original manuscript [en_US]
dc.type.uri: http://purl.org/eprint/type/JournalArticle [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2019-07-11T13:04:01Z
dspace.date.submission: 2019-07-11T13:04:02Z
mit.journal.volume: 35 [en_US]
mit.journal.issue: 11 [en_US]
mit.metadata.status: Complete

