Space Objects Maneuvering Prediction via Maximum Causal Entropy Inverse Reinforcement Learning
Author(s)
Doerr, Bryce G; Linares, Richard; Furfaro, Roberto
Terms of use
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Abstract
Inverse Reinforcement Learning (RL) can be used to determine the behavior of Space Objects (SOs) by estimating the reward function that an SO is using for control. The approach discussed in this work can be used to analyze the maneuvering of SOs from observational data. The inverse RL problem is solved using maximum causal entropy. This approach determines the optimal reward function that an SO is using while maneuvering under random disturbances by assuming that the observed trajectories are optimal with respect to the SO’s own reward function. Lastly, this paper develops results for scenarios involving Low Earth Orbit (LEO) station-keeping and Geostationary Orbit (GEO) station-keeping.
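For readers unfamiliar with the method named in the abstract, the following is a minimal sketch of maximum causal entropy IRL on a small discrete MDP with a linear reward r(s, a) = theta · phi(s, a). It is not the paper's station-keeping formulation: the transition model, features, demonstrations, and function names below are illustrative placeholders, and the paper's LEO/GEO scenarios involve continuous orbital dynamics rather than this toy problem.

```python
import numpy as np

def soft_value_iteration(P, r, gamma=0.95, n_iters=200):
    """Soft (maximum-entropy) Bellman backups; returns the stochastic policy pi(a|s)."""
    V = np.zeros(P.shape[0])
    for _ in range(n_iters):
        Q = r + gamma * (P @ V)                        # Q[s, a] = r[s, a] + gamma * E[V(s')]
        Qmax = Q.max(axis=1, keepdims=True)            # stabilized soft max over actions
        V = (Qmax + np.log(np.exp(Q - Qmax).sum(axis=1, keepdims=True))).ravel()
    return np.exp(Q - V[:, None])                      # pi(a|s) = exp(Q(s, a) - V(s))

def expected_feature_counts(P, policy, phi, start_dist, horizon):
    """Feature expectations of the soft-optimal policy, rolled forward from the start distribution."""
    d = start_dist.copy()
    counts = np.zeros(phi.shape[-1])
    for _ in range(horizon):
        sa = d[:, None] * policy                       # visitation over (state, action) pairs
        counts += (sa[..., None] * phi).sum(axis=(0, 1))
        d = np.einsum('sa,saz->z', sa, P)              # propagate probability mass to next states
    return counts

def maxcausalent_irl(P, phi, demos, gamma=0.95, horizon=30, lr=0.1, epochs=200):
    """Gradient ascent on the causal-entropy likelihood of demonstrated (s, a) trajectories."""
    theta = np.zeros(phi.shape[-1])
    # Empirical feature expectations and start-state distribution from the demonstrations.
    emp = np.mean([sum(phi[s, a] for s, a in traj) for traj in demos], axis=0)
    start = np.zeros(P.shape[0])
    for traj in demos:
        start[traj[0][0]] += 1.0 / len(demos)
    for _ in range(epochs):
        r = phi @ theta                                # linear reward r(s, a) = theta . phi(s, a)
        policy = soft_value_iteration(P, r, gamma)
        model = expected_feature_counts(P, policy, phi, start, horizon)
        theta += lr * (emp - model)                    # gradient: empirical minus model feature counts
    return theta

# Tiny illustrative problem: 4 states, 2 actions, random dynamics, one-hot state-action features.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(4), size=(4, 2))             # P[s, a, s'] transition probabilities
phi = np.eye(4 * 2).reshape(4, 2, 8)                    # indicator features
demos = [[(0, 1), (2, 0), (3, 1)], [(0, 1), (1, 0), (3, 1)]]  # hypothetical observed trajectories
theta = maxcausalent_irl(P, phi, demos)
print("recovered reward weights:", np.round(theta, 2))
```

The gradient step mirrors the core idea referenced in the abstract: the observed trajectories are treated as (soft-)optimal under the unknown reward, so the reward weights are adjusted until the demonstrators' feature expectations match those of the soft-optimal policy.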
Date issued
2020
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Journal
AIAA Scitech 2020 Forum
Publisher
American Institute of Aeronautics and Astronautics (AIAA)
Citation
Doerr, Bryce G., Linares, Richard, and Furfaro, Roberto. 2020. "Space Objects Maneuvering Prediction via Maximum Causal Entropy Inverse Reinforcement Learning." AIAA Scitech 2020 Forum, 1 PartF.
Version: Author's final manuscript