Show simple item record

dc.contributor.author	Singh, Sumeet
dc.contributor.author	Richards, Spencer M
dc.contributor.author	Sindhwani, Vikas
dc.contributor.author	Slotine, Jean-Jacques E
dc.contributor.author	Pavone, Marco
dc.date.accessioned	2022-09-28T16:12:55Z
dc.date.available	2022-01-24T19:13:39Z
dc.date.available	2022-09-28T16:12:55Z
dc.date.issued	2020
dc.identifier.issn	1741-3176
dc.identifier.uri	https://hdl.handle.net/1721.1/139675.2
dc.description.abstract	© The Author(s) 2020. We propose a novel framework for learning stabilizable nonlinear dynamical systems for continuous control tasks in robotics. The key contribution is a control-theoretic regularizer for dynamics fitting rooted in the notion of stabilizability, a constraint which guarantees the existence of robust tracking controllers for arbitrary open-loop trajectories generated with the learned system. Leveraging tools from contraction theory and statistical learning in reproducing kernel Hilbert spaces, we formulate stabilizable dynamics learning as a functional optimization with a convex objective and bi-convex functional constraints. Under a mild structural assumption and relaxation of the functional constraints to sampling-based constraints, we derive the optimal solution with a modified representer theorem. Finally, we utilize random matrix feature approximations to reduce the dimensionality of the search parameters and formulate an iterative convex optimization algorithm that jointly fits the dynamics functions and searches for a certificate of stabilizability. We validate the proposed algorithm in simulation for a planar quadrotor, and on a quadrotor hardware testbed emulating planar dynamics. We verify, both in simulation and on hardware, significantly improved trajectory generation and tracking performance with the control-theoretic regularized model over models learned using traditional regression techniques, especially when learning from small supervised datasets. The results support the conjecture that the use of stabilizability constraints as a form of regularization can help prune the hypothesis space in a manner that is tailored to the downstream task of trajectory generation and feedback control. This produces models that are not only dramatically better conditioned, but also data efficient.	en_US
dc.description.sponsorship	NSF CPS program (grant #1931815)	en_US
dc.description.sponsorship	King Abdulaziz City for Science and Technology (KACST)	en_US
dc.language.iso	en
dc.publisher	SAGE Publications	en_US
dc.relation.isversionof	https://dx.doi.org/10.1177/0278364920949931	en_US
dc.rights	Creative Commons Attribution-Noncommercial-Share Alike	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-sa/4.0/	en_US
dc.source	arXiv	en_US
dc.title	Learning stabilizable nonlinear dynamics with contraction-based regularization	en_US
dc.type	Article	en_US
dc.identifier.citation	Singh, Sumeet, Richards, Spencer M, Sindhwani, Vikas, Slotine, Jean-Jacques E and Pavone, Marco. 2020. "Learning stabilizable nonlinear dynamics with contraction-based regularization." International Journal of Robotics Research, 40 (10-11).	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Mechanical Engineering	en_US
dc.relation.journal	International Journal of Robotics Research	en_US
dc.eprint.version	Original manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dc.date.updated	2022-01-24T19:10:30Z
dspace.orderedauthors	Singh, S; Richards, SM; Sindhwani, V; Slotine, J-JE; Pavone, M	en_US
dspace.date.submission	2022-01-24T19:10:33Z
mit.journal.volume	40	en_US
mit.journal.issue	10-11	en_US
mit.license	OPEN_ACCESS_POLICY
mit.metadata.status	Publication Information Needed	en_US
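
The abstract above outlines the method's pipeline: fit the dynamics in a reproducing kernel Hilbert space via random feature approximations, then regularize the fit with sampled stabilizability (contraction) constraints handled by an iterative convex program. The short Python sketch below is illustrative only and is not the authors' algorithm: it fits a toy system with random Fourier features and adds a heuristic penalty that discourages an expanding symmetric Jacobian at sampled states, using the identity metric rather than the learned contraction metric described in the paper. Every name and numerical choice here (phi, phi_jac, the toy linear system, the penalty weight mu) is invented for this example.

# Illustrative sketch only -- NOT the paper's algorithm. Shows the general idea:
# fit x_dot ~= f(x) with random Fourier features, then penalize expansion
# (positive symmetric Jacobian part) at sampled states as a crude stand-in
# for contraction constraints.
import numpy as np

rng = np.random.default_rng(0)

# Toy demonstration data from a stable planar linear system x_dot = A x.
A_true = np.array([[-1.0, 2.0], [-2.0, -1.0]])
X = rng.normal(size=(200, 2))
Xdot = X @ A_true.T + 0.01 * rng.normal(size=(200, 2))

# Random Fourier features phi(x), a common approximation to a Gaussian-kernel RKHS.
D, d = 100, 2
W = rng.normal(size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(x):
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

def phi_jac(x):
    # Jacobian d(phi)/dx, shape (D, d).
    return -np.sqrt(2.0 / D) * np.sin(W @ x + b)[:, None] * W

Phi = np.stack([phi(x) for x in X])  # (N, D)

# Step 1: plain ridge-regression fit x_dot ~= Theta^T phi(x).
lam = 1e-2
Theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ Xdot)  # (D, d)

def max_sym_jac_eig(x, Theta):
    # Largest eigenvalue of the symmetric part of the model Jacobian
    # J(x) = Theta^T d(phi)/dx; negative values indicate contraction under
    # the identity metric (a simplification of the paper's learned metric).
    J = Theta.T @ phi_jac(x)
    return np.linalg.eigvalsh(0.5 * (J + J.T)).max()

print("before penalty:", max(max_sym_jac_eig(x, Theta) for x in X))

# Step 2: crude stability-regularized refit by gradient descent on
#   0.5*||Phi Theta - Xdot||_F^2 + 0.5*lam*||Theta||_F^2
#   + mu * sum_x || [0.5*(J(x) + J(x)^T)]_+ ||_F^2,
# where [.]_+ projects onto the positive-semidefinite part.
mu, lr, steps = 1.0, 1e-3, 300
for _ in range(steps):
    grad = Phi.T @ (Phi @ Theta - Xdot) + lam * Theta
    for x in X[::10]:                        # subsample constraint points
        Jphi = phi_jac(x)                    # (D, d)
        S = 0.5 * (Theta.T @ Jphi + (Theta.T @ Jphi).T)
        evals, V = np.linalg.eigh(S)
        S_plus = V @ np.diag(np.maximum(evals, 0.0)) @ V.T
        grad += 2.0 * mu * (Jphi @ S_plus)   # gradient of mu*||S_+||_F^2 w.r.t. Theta
    Theta = Theta - lr * grad

print("after penalty:", max(max_sym_jac_eig(x, Theta) for x in X))

The gap between this heuristic and the paper's formulation is substantial: the paper enforces sampled matrix-inequality constraints involving a learned contraction metric and alternates between convex subproblems for the dynamics and the stabilizability certificate, whereas this sketch only penalizes a surrogate of expansion under the identity metric.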

