Optimal Tasking of Ground-Based Sensors for Space Situational Awareness Using Deep Reinforcement Learning
Author(s)
Siew, Peng Mun; Linares, Richard
Download: sensors-22-07847-v2.pdf (1.420 MB)
Publisher with Creative Commons License
Terms of use
Creative Commons Attribution
Abstract
Space situational awareness (SSA) is becoming increasingly challenging with the proliferation of resident space objects (RSOs), ranging from CubeSats to mega-constellations. Sensors within the United States Space Surveillance Network are tasked to repeatedly detect, characterize, and track these RSOs to retain custody and estimate their attitude. The majority of these sensors are ground-based sensors with a narrow field of view that must be slewed at a finite rate from one RSO to another during observations. This results in a complex combinatorial problem that makes SSA sensor tasking a major challenge. In this work, we successfully applied deep reinforcement learning (DRL) to overcome the curse of dimensionality and optimally task a ground-based sensor. We trained several DRL agents using proximal policy optimization and population-based training in a simulated SSA environment. The DRL agents outperformed myopic policies on both objective metrics: the RSOs’ state uncertainties and the number of unique RSOs observed over a 90-min observation window. The agents’ robustness to changes in RSO orbital regimes, observation window length, observer location, and sensor properties is also examined. This robustness allows the DRL agents to be applied to arbitrary locations and scenarios.
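To make the setup described in the abstract concrete, the following is a hypothetical sketch (not the authors' implementation) of how sensor tasking might be posed as a discrete-action reinforcement-learning environment, where each action selects the next RSO to observe and slewing incurs a time cost, trained with an off-the-shelf PPO implementation (here stable-baselines3 with a Gymnasium interface). The environment dynamics, reward shaping, slew model, and all constants are illustrative assumptions, and population-based training is omitted.

# Hypothetical sketch only (not the paper's code): a toy sensor-tasking
# environment in which each action selects the next RSO to observe, with a
# slew-time penalty and a reward for reducing state uncertainty and observing
# new objects. All dynamics, rewards, and constants are illustrative assumptions.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO  # off-the-shelf PPO, used here as a stand-in


class ToySensorTaskingEnv(gym.Env):
    """Toy stand-in for the ground-based SSA sensor-tasking problem."""

    def __init__(self, n_rsos=50, window_steps=90, slew_rate_deg=30.0):
        super().__init__()
        self.n_rsos = n_rsos
        self.window_steps = window_steps      # e.g., 90 one-minute steps
        self.slew_rate_deg = slew_rate_deg    # assumed slew rate per step
        self.action_space = spaces.Discrete(n_rsos)
        # Observation: a covariance-trace proxy per RSO plus the sensor pointing angle.
        self.observation_space = spaces.Box(
            low=0.0, high=np.inf, shape=(n_rsos + 1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.pointing = 0.0
        self.angles = self.np_random.uniform(0.0, 360.0, self.n_rsos)
        self.cov_trace = self.np_random.uniform(1.0, 10.0, self.n_rsos)
        self.seen = np.zeros(self.n_rsos, dtype=bool)
        return self._obs(), {}

    def _obs(self):
        return np.concatenate([self.cov_trace, [self.pointing]]).astype(np.float32)

    def step(self, action):
        # Slewing to a distant RSO consumes extra (toy) time steps.
        d = abs(float(self.angles[action]) - self.pointing) % 360.0
        slew = min(d, 360.0 - d)
        steps_used = 1 + int(np.ceil(slew / self.slew_rate_deg))
        self.pointing = float(self.angles[action])

        # Reward: uncertainty reduction plus a bonus for a first-time observation.
        reward = 0.5 * self.cov_trace[action] + (0.0 if self.seen[action] else 1.0)
        self.cov_trace[action] *= 0.5         # measurement shrinks the covariance
        self.cov_trace += 0.05                # uncertainty of all RSOs grows with time
        self.seen[action] = True

        self.t += steps_used
        terminated = self.t >= self.window_steps
        return self._obs(), float(reward), terminated, False, {}


if __name__ == "__main__":
    env = ToySensorTaskingEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=50_000)       # short demonstration run

In this toy framing, the agent must trade off re-observing high-uncertainty RSOs against the time lost to large slews, which is the same tension between uncertainty reduction and the finite slew rate that the abstract identifies.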
Date issued
2022-10-16
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Publisher
Multidisciplinary Digital Publishing Institute
Citation
Sensors 22 (20): 7847 (2022)
Version: Final published version