Show simple item record

dc.contributor.advisor Annaswamy, Anuradha
dc.contributor.author Guha, Anubhav
dc.date.accessioned 2023-01-19T18:40:07Z
dc.date.available 2023-01-19T18:40:07Z
dc.date.issued 2022-09
dc.date.submitted 2022-10-05T13:45:57.716Z
dc.identifier.uri https://hdl.handle.net/1721.1/147247
dc.description.abstract This paper considers the problem of real-time control and learning in dynamical systems subject to parametric uncertainties. A combination of Adaptive Control (AC) in the inner loop and a Reinforcement Learning (RL) based policy in the outer loop is proposed such that, in real time, the inner-loop model reference adaptive controller contracts the closed-loop dynamics toward a reference system, while the RL policy in the outer loop directs the overall system toward approximately optimal performance. This AC-RL approach is developed for a class of control-affine nonlinear dynamical systems and includes extensions to systems with multiple equilibrium points, systems with input magnitude constraints, and systems in which a high-order tuner is required for adequate performance. In addition to establishing a stability guarantee with real-time control, the AC-RL controller is shown to achieve parameter learning under persistent excitation. All algorithms are validated numerically on a quadrotor landing task on a moving platform; the results demonstrate the clear advantage of the proposed integrative AC-RL approach.
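The inner loop described in the abstract is a model reference adaptive controller (MRAC), which adjusts feedback gains online so the plant tracks a reference model despite unknown parameters. The following is a minimal sketch of that idea only, for a scalar linear plant; all names and numerical values (`a`, `b`, `gamma`, the gradient update laws) are illustrative assumptions, not the thesis's actual multi-loop AC-RL design.

```python
import numpy as np

def simulate_mrac(steps=2000, dt=0.01, gamma=5.0):
    """Scalar MRAC sketch: plant x' = a*x + b*u tracks a stable
    reference model x_m' = a_m*x_m + b_m*r without knowing a, b."""
    a, b = 1.0, 1.0        # true plant parameters (unknown to controller)
    a_m, b_m = -2.0, 2.0   # stable reference model
    x, x_m = 1.0, 0.0      # plant state, reference-model state
    th_x, th_r = 0.0, 0.0  # adaptive feedback and feedforward gains
    errs = []
    for _ in range(steps):
        r = 1.0                      # constant reference command
        e = x - x_m                  # tracking error
        u = th_x * x + th_r * r      # adaptive control law
        # Euler integration of plant and reference model
        x += dt * (a * x + b * u)
        x_m += dt * (a_m * x_m + b_m * r)
        # gradient adaptive laws drive the tracking error toward zero
        th_x += dt * (-gamma * e * x)
        th_r += dt * (-gamma * e * r)
        errs.append(abs(e))
    return errs

errs = simulate_mrac()
```

In an AC-RL arrangement as sketched in the abstract, an outer-loop RL policy would supply the command `r` rather than holding it constant, relying on the contraction provided by the inner loop.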
dc.publisher Massachusetts Institute of Technology
dc.rights In Copyright - Educational Use Permitted
dc.rights Copyright MIT
dc.rights.uri http://rightsstatements.org/page/InC-EDU/1.0/
dc.title AC-RL: A Framework for Real-Time Control, Learning & Adaptation
dc.type Thesis
dc.description.degree S.M.
dc.contributor.department Massachusetts Institute of Technology. Department of Mechanical Engineering
mit.thesis.degree Master
thesis.degree.name Master of Science in Mechanical Engineering

