
dc.contributor.author	McGrew, James S.
dc.contributor.author	How, Jonathan P.
dc.contributor.author	Bush, Lawrence
dc.contributor.author	Williams, Brian Charles
dc.contributor.author	Roy, Nicholas
dc.date.accessioned	2011-11-28T21:16:54Z
dc.date.available	2011-11-28T21:16:54Z
dc.date.issued	2010-09
dc.identifier.issn	0731-5090
dc.identifier.issn	1533-3884
dc.identifier.uri	http://hdl.handle.net/1721.1/67298
dc.description.abstract	Unmanned Aircraft Systems (UAS) have the potential to perform many of the dangerous missions currently flown by manned aircraft. Yet the complexity of some tasks, such as air combat, has precluded UAS from successfully carrying out these missions autonomously. This paper presents a formulation of a level-flight, fixed-velocity, one-on-one air combat maneuvering problem and an approximate dynamic programming (ADP) approach for computing an efficient approximation of the optimal policy. In the version of the problem formulation considered, the aircraft learning the optimal policy is given a slight performance advantage. This ADP approach provides a fast response to a rapidly changing tactical situation, long planning horizons, and good performance without explicit coding of air combat tactics. The method's success is due to extensive feature development, reward shaping, and trajectory sampling. An accompanying fast and effective rollout-based policy extraction method is used to accomplish on-line implementation. Simulation results are provided that demonstrate the robustness of the method against an opponent beginning from both offensive and defensive situations. Flight results are also presented using micro-UAS flown at MIT's Real-time indoor Autonomous Vehicle test ENvironment (RAVEN).	en_US
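The rollout-based policy extraction mentioned in the abstract is, in generic form, a short-horizon simulation over an approximate value function. The Python sketch below illustrates only that general idea; the maneuver set, dynamics step(), reward(), value approximation J_hat(), and the constants GAMMA and HORIZON are all hypothetical placeholders, not the paper's implementation.

    # Generic sketch of rollout-based policy extraction over an approximate
    # value function, in the spirit of the ADP approach the abstract describes.
    # All names here (ACTIONS, step, reward, J_hat, GAMMA, HORIZON) are
    # illustrative assumptions, not the authors' code.

    ACTIONS = ("bank-left", "straight", "bank-right")  # hypothetical maneuver set
    GAMMA = 0.95   # assumed discount factor
    HORIZON = 3    # assumed rollout depth

    def rollout_value(state, action, step, reward, J_hat):
        """Apply `action`, then follow the J_hat-greedy policy for HORIZON steps;
        return the accumulated discounted reward plus the terminal value estimate."""
        s = step(state, action)            # one simulated step of the dynamics model
        total, discount = 0.0, 1.0
        for _ in range(HORIZON):
            total += discount * reward(s)
            discount *= GAMMA
            s = max((step(s, a) for a in ACTIONS), key=J_hat)  # greedy lookahead
        return total + discount * J_hat(s)

    def extract_action(state, step, reward, J_hat):
        """On-line action selection: choose the maneuver whose rollout scores best."""
        return max(ACTIONS, key=lambda a: rollout_value(state, a, step, reward, J_hat))

A caller would supply the aircraft dynamics model and a shaped reward; each decision then costs only a bounded number of model and value-function evaluations, which is what makes on-line use of a precomputed value approximation feasible.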
dc.description.sponsorship	Defense University Research Instrumentation Program (U.S.) (grant number FA9550-07-1-0321)	en_US
dc.description.sponsorship	United States. Air Force Office of Scientific Research (AFOSR # FA9550-08-1-0086)	en_US
dc.description.sponsorship	American Society for Engineering Education (National Defense Science and Engineering Graduate Fellowship)	en_US
dc.language.iso	en_US
dc.publisher	American Institute of Aeronautics and Astronautics	en_US
dc.relation.isversionof	http://dx.doi.org/10.2514/1.46815	en_US
dc.rights	Creative Commons Attribution-Noncommercial-Share Alike 3.0	en_US
dc.rights.uri	http://creativecommons.org/licenses/by-nc-sa/3.0/	en_US
dc.source	MIT web domain	en_US
dc.title	Air-Combat Strategy Using Approximate Dynamic Programming	en_US
dc.type	Article	en_US
dc.identifier.citation	McGrew, James S. et al. “Air-Combat Strategy Using Approximate Dynamic Programming.” Journal of Guidance, Control, and Dynamics 33 (2010): 1641-1654.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Aerospace Controls Laboratory	en_US
dc.contributor.department	Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Aeronautics and Astronautics	en_US
dc.contributor.approver	Roy, Nicholas
dc.contributor.mitauthor	Roy, Nicholas
dc.contributor.mitauthor	How, Jonathan P.
dc.contributor.mitauthor	Bush, Lawrence
dc.contributor.mitauthor	Williams, Brian Charles
dc.contributor.mitauthor	McGrew, James S.
dc.relation.journal	Journal of Guidance, Control, and Dynamics	en_US
dc.eprint.version	Author's final manuscript	en_US
dc.type.uri	http://purl.org/eprint/type/JournalArticle	en_US
eprint.status	http://purl.org/eprint/status/PeerReviewed	en_US
dspace.orderedauthors	McGrew, James S.; How, Jonathan P.; Williams, Brian; Roy, Nicholas	en
dc.identifier.orcid	https://orcid.org/0000-0001-8576-1930
dc.identifier.orcid	https://orcid.org/0000-0002-1057-3940
dc.identifier.orcid	https://orcid.org/0000-0002-8293-0492
mit.license	OPEN_ACCESS_POLICY	en_US
mit.metadata.status	Complete

