dc.contributor.advisor: Richard Linares. (en_US)
dc.contributor.author: Miller, Daniel (Daniel Martin) (en_US)
dc.contributor.other: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics. (en_US)
dc.date.accessioned: 2020-09-03T17:45:32Z
dc.date.available: 2020-09-03T17:45:32Z
dc.date.copyright: 2020 (en_US)
dc.date.issued: 2020 (en_US)
dc.identifier.uri: https://hdl.handle.net/1721.1/127071
dc.description: Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, May, 2020 (en_US)
dc.description: Cataloged from the official PDF of thesis. (en_US)
dc.description: Includes bibliographical references (pages 101-107). (en_US)
dc.description.abstract: Artificial intelligence is a rapidly developing field that promises to revolutionize spaceflight with greater robotic autonomy and innovative decision making. However, it remains to be determined which applications are best addressed using this new technology. In the coming decades, future spacecraft will be required to possess autonomous guidance and control in the complex, nonlinear dynamical regimes of cis-lunar space. In the realm of trajectory design, current methods struggle with local minima and with searching large solution spaces. This thesis investigates the use of the Reinforcement Learning (RL) algorithm Proximal Policy Optimization (PPO) for solving low-thrust spacecraft guidance and control problems. First, an agent is trained to complete a 302-day, mass-optimal low-thrust transfer between the Earth and Mars. This is accomplished while providing the agent only with information regarding its own state and that of Mars. By comparing these results to those generated by the Evolutionary Mission Trajectory Generator (EMTG), the optimality of the trajectory designed using PPO is assessed. Next, an agent is trained as an onboard regulator capable of correcting state errors and following pre-calculated transfers between libration point orbits. The feasibility of this method is examined by evaluating the agent's ability to correct varying levels of initial state error via Monte Carlo testing. The generalizability of the agent's control solution is appraised on three similar transfers of increasing difficulty not seen during the training process. The results show both the promise of the proposed PPO methodology and its limitations, which are discussed. (en_US)
dc.description.statementofresponsibility: by Daniel Miller. (en_US)
dc.format.extent: 107 pages (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Massachusetts Institute of Technology (en_US)
dc.rights: MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. (en_US)
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582 (en_US)
dc.subject: Aeronautics and Astronautics. (en_US)
dc.title: Low-thrust spacecraft guidance and control using proximal policy optimization (en_US)
dc.type: Thesis (en_US)
dc.description.degree: S.M. (en_US)
dc.contributor.department: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics (en_US)
dc.identifier.oclc: 1191819251 (en_US)
dc.description.collection: S.M. Massachusetts Institute of Technology, Department of Aeronautics and Astronautics (en_US)
dspace.imported: 2020-09-03T17:45:31Z (en_US)
mit.thesis.degree: Master (en_US)
mit.thesis.department: Aero (en_US)
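
The abstract above describes training a PPO agent whose observation combines the spacecraft's own state with the target's (Mars's) state, whose action is a low-thrust command, and whose reward encodes a mass-optimal transfer objective. The sketch below is a hypothetical illustration of that general setup, not the thesis's actual code: the environment name LowThrustTransferEnv, the planar two-body dynamics in canonical units, every constant, and the reward shaping are assumptions made for this example, and stable-baselines3's PPO stands in for whatever PPO implementation the thesis used.

# Hypothetical sketch (not the thesis code): a Gymnasium-style environment whose
# observation combines the spacecraft state with the target state, whose action is
# an in-plane thrust command, and whose reward penalizes propellant use.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class LowThrustTransferEnv(gym.Env):
    """Toy planar two-body transfer; every constant is an illustrative placeholder."""

    MU = 1.0           # gravitational parameter (canonical units)
    DT = 0.01          # integration step
    MAX_STEPS = 2000   # episode length cap
    T_MAX = 0.05       # maximum thrust acceleration
    MASS_FLOW = 0.01   # propellant consumed per unit of commanded thrust

    def __init__(self):
        super().__init__()
        # Observation: spacecraft position (2), velocity (2), mass (1), target pos/vel (4).
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(9,), dtype=np.float32)
        # Action: thrust components in the orbital plane, scaled to [-1, 1].
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)

    def _target_state(self, t):
        # Circular "Mars-like" orbit of radius 1.5 canonical distance units.
        w = np.sqrt(self.MU / 1.5 ** 3)
        a = w * t
        return np.array([1.5 * np.cos(a), 1.5 * np.sin(a),
                         -1.5 * w * np.sin(a), 1.5 * w * np.cos(a)])

    def _obs(self):
        return np.concatenate([self.r, self.v, [self.m], self._target_state(self.t)]).astype(np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.steps = 0.0, 0
        self.r = np.array([1.0, 0.0])                     # start on a circular "Earth-like" orbit
        self.v = np.array([0.0, np.sqrt(self.MU / 1.0)])
        self.m = 1.0
        return self._obs(), {}

    def step(self, action):
        action = np.clip(np.asarray(action, dtype=np.float64), -1.0, 1.0)
        thrust = self.T_MAX * action
        # Semi-implicit Euler propagation of two-body gravity plus thrust acceleration.
        acc = -self.MU * self.r / np.linalg.norm(self.r) ** 3 + thrust / self.m
        self.v = self.v + acc * self.DT
        self.r = self.r + self.v * self.DT
        self.m = max(self.m - self.MASS_FLOW * np.linalg.norm(action) * self.DT, 0.1)
        self.t += self.DT
        self.steps += 1

        tgt = self._target_state(self.t)
        miss = np.linalg.norm(self.r - tgt[:2]) + np.linalg.norm(self.v - tgt[2:])
        # Reward shaping: charge for propellant every step, reward closing on the target.
        reward = -np.linalg.norm(action) * self.DT - 0.01 * miss
        terminated = bool(miss < 0.05)                    # loose "rendezvous achieved" test
        truncated = self.steps >= self.MAX_STEPS
        return self._obs(), reward, terminated, truncated, {}


if __name__ == "__main__":
    # Off-the-shelf PPO (stable-baselines3) standing in for the thesis's PPO setup.
    from stable_baselines3 import PPO

    model = PPO("MlpPolicy", LowThrustTransferEnv(), verbose=0)
    model.learn(total_timesteps=50_000)

In this toy formulation the per-step propellant charge plays the role of the mass-optimality objective, while the miss-distance term keeps early training from collapsing onto a zero-thrust policy; the thesis's actual environments (the Earth-Mars transfer and the libration-point-orbit regulator) involve considerably more careful dynamics and reward design than this placeholder.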

