Show simple item record

dc.contributor.author: Ryou, Gilhyun
dc.contributor.author: Wang, Geoffrey
dc.contributor.author: Karaman, Sertac
dc.date.accessioned: 2026-03-04T15:32:08Z
dc.date.available: 2026-03-04T15:32:08Z
dc.date.issued: 2025-08-22
dc.identifier.uri: https://hdl.handle.net/1721.1/165006
dc.description.abstract [en_US]: High-speed online trajectory planning for UAVs poses a significant challenge due to the need for precise modeling of complex dynamics while also being constrained by computational limitations. This paper presents a multi-fidelity reinforcement learning method (MFRL) that aims to effectively create a realistic dynamics model and simultaneously train a planning policy that can be readily deployed in real-time applications. The proposed method involves the co-training of a planning policy and a reward estimator; the latter predicts the performance of the policy's output and is trained efficiently through multi-fidelity Bayesian optimization. This optimization approach models the correlation between different fidelity levels, thereby constructing a high-fidelity model based on a low-fidelity foundation, which enables the accurate development of the reward model with limited high-fidelity experiments. The framework is further extended to include real-world flight experiments in reinforcement learning training, allowing the reward model to precisely reflect real-world constraints and broadening the policy's applicability to real-world scenarios. We present rigorous evaluations by training and testing the planning policy in both simulated and real-world environments. The resulting trained policy not only generates faster and more reliable trajectories compared to the baseline snap minimization method, but it also achieves trajectory updates in 2 ms on average, while the baseline method takes several minutes.
dc.language.iso: en
dc.publisher [en_US]: SAGE Publications
dc.relation.isversionof [en_US]: https://doi.org/10.1177/02783649251364393
dc.rights [en_US]: Creative Commons Attribution-Noncommercial
dc.rights.uri [en_US]: https://creativecommons.org/licenses/by-nc/4.0/
dc.source [en_US]: SAGE Publications
dc.title [en_US]: Multi-fidelity reinforcement learning for time-optimal quadrotor re-planning
dc.type [en_US]: Article
dc.identifier.citation [en_US]: Ryou G, Wang G, Karaman S. Multi-fidelity reinforcement learning for time-optimal quadrotor re-planning. The International Journal of Robotics Research. 2025;0(0).
dc.contributor.department [en_US]: Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
dc.relation.journal [en_US]: The International Journal of Robotics Research
dc.eprint.version [en_US]: Final published version
dc.type.uri [en_US]: http://purl.org/eprint/type/JournalArticle
eprint.status [en_US]: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2026-03-04T15:27:15Z
dspace.orderedauthors [en_US]: Ryou, G; Wang, G; Karaman, S
dspace.date.submission: 2026-03-04T15:27:18Z
mit.license: PUBLISHER_CC
mit.metadata.status [en_US]: Authority Work and Publication Information Needed
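The abstract's central idea — modeling the correlation between fidelity levels so a high-fidelity reward model can be built on a low-fidelity foundation with few expensive experiments — can be sketched in miniature. The paper's actual method uses multi-fidelity Bayesian optimization; the snippet below is only a simplified, Kennedy–O'Hagan-style linear correction, and all function names and sample values are hypothetical illustrations, not the authors' code.

```python
# Hedged sketch: fit f_hi(x) ~ rho * f_lo(x) + delta from a few paired samples,
# then use the cheap low-fidelity reward plus the fitted correction as a
# surrogate for the expensive high-fidelity reward. Pure illustration.

def fit_linear_correction(lo_vals, hi_vals):
    """Least-squares fit of hi = rho * lo + delta on paired evaluations."""
    n = len(lo_vals)
    mean_lo = sum(lo_vals) / n
    mean_hi = sum(hi_vals) / n
    cov = sum((l - mean_lo) * (h - mean_hi) for l, h in zip(lo_vals, hi_vals))
    var = sum((l - mean_lo) ** 2 for l in lo_vals)
    rho = cov / var if var else 0.0
    delta = mean_hi - rho * mean_lo
    return rho, delta

# Hypothetical rewards: f_lo cheap (simplified dynamics), f_hi expensive
# (full simulation or flight test), evaluated only at a few sample points.
f_lo = lambda x: x * x
f_hi = lambda x: 1.1 * x * x + 0.5

xs = [0.0, 1.0, 2.0, 3.0]
rho, delta = fit_linear_correction([f_lo(x) for x in xs], [f_hi(x) for x in xs])

# Surrogate high-fidelity prediction at an unseen point, no expensive call needed.
predict_hi = lambda x: rho * f_lo(x) + delta
```

In the paper's setting the correction is a Gaussian-process model and the sample points are chosen actively by Bayesian optimization; this sketch only shows why a handful of high-fidelity evaluations can suffice once the cross-fidelity correlation is captured.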

