dc.contributor.author: Boffi, Nicholas M
dc.contributor.author: Slotine, Jean-Jacques E
dc.date.accessioned: 2022-01-24T19:40:31Z
dc.date.available: 2022-01-24T19:40:31Z
dc.date.issued: 2021
dc.identifier.uri: https://hdl.handle.net/1721.1/139677
dc.description.abstract: Stable concurrent learning and control of dynamical systems is the subject of adaptive control. Despite being an established field with many practical applications and a rich theory, much of the development in adaptive control for nonlinear systems revolves around a few key algorithms. By exploiting strong connections between classical adaptive nonlinear control techniques and recent progress in optimization and machine learning, we show that there exists considerable untapped potential in algorithm development for both adaptive nonlinear control and adaptive dynamics prediction. We begin by introducing first-order adaptation laws inspired by natural gradient descent and mirror descent. We prove that when there are multiple dynamics consistent with the data, these non-Euclidean adaptation laws implicitly regularize the learned model. Local geometry imposed during learning thus may be used to select parameter vectors—out of the many that will achieve perfect tracking or prediction—for desired properties such as sparsity. We apply this result to regularized dynamics predictor and observer design, and as concrete examples, we consider Hamiltonian systems, Lagrangian systems, and recurrent neural networks. We subsequently develop a variational formalism based on the Bregman Lagrangian. We show that its Euler-Lagrange equations lead to natural gradient and mirror descent-like adaptation laws with momentum, and we recover their first-order analogues in the infinite friction limit. We illustrate our analyses with simulations demonstrating our theoretical results. (en_US)
dc.language.iso: en
dc.publisher: MIT Press - Journals (en_US)
dc.relation.isversionof: 10.1162/NECO_A_01360 (en_US)
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. (en_US)
dc.source: MIT Press (en_US)
dc.title: Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction (en_US)
dc.type: Article (en_US)
dc.identifier.citation: Boffi, Nicholas M and Slotine, Jean-Jacques E. 2021. "Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction." Neural Computation, 33 (3).
dc.contributor.department: Massachusetts Institute of Technology. Nonlinear Systems Laboratory
dc.relation.journal: Neural Computation (en_US)
dc.eprint.version: Final published version (en_US)
dc.type.uri: http://purl.org/eprint/type/JournalArticle (en_US)
eprint.status: http://purl.org/eprint/status/PeerReviewed (en_US)
dc.date.updated: 2022-01-24T19:23:52Z
dspace.orderedauthors: Boffi, NM; Slotine, J-JE (en_US)
dspace.date.submission: 2022-01-24T19:23:54Z
mit.journal.volume: 33 (en_US)
mit.journal.issue: 3 (en_US)
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed (en_US)
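The abstract's claim that the learning geometry implicitly selects among the many parameter vectors consistent with the data can be illustrated with a standard, generic mirror-descent example (a minimal sketch, not code from the paper; the problem setup and all names below are illustrative):

```python
# Illustrative sketch (not from the paper): on an underdetermined
# least-squares problem, gradient descent and mirror descent both
# drive the loss to zero, but the geometry of the update selects
# different interpolating parameter vectors -- the "implicit
# regularization" effect described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))          # 5 measurements, 20 unknowns
theta_true = np.zeros(20)
theta_true[:2] = 1.0                      # sparse generating parameters
y = X @ theta_true

def grad(theta):
    """Gradient of the squared-error loss 0.5 * ||X @ theta - y||^2."""
    return X.T @ (X @ theta - y)

eta, steps = 1e-3, 50_000

# Euclidean gradient descent from zero: converges to the minimum
# l2-norm interpolant.
gd = np.zeros(20)
for _ in range(steps):
    gd -= eta * grad(gd)

# Mirror descent with the negative-entropy potential (exponentiated
# gradient): the update is multiplicative, stays in the positive
# orthant, and converges to a different interpolant.
md = np.full(20, 0.05)
for _ in range(steps):
    md = md * np.exp(-eta * grad(md))
```

Both runs fit the measurements essentially exactly, yet `gd` and `md` are distinct parameter vectors; the paper develops the analogous phenomenon for adaptation laws in control and prediction, where the potential function can be chosen to favor properties such as sparsity.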

