
dc.contributor.advisor: Rus, Daniela L.
dc.contributor.author: Wrafter, Daniel
dc.date.accessioned: 2022-01-14T15:02:18Z
dc.date.available: 2022-01-14T15:02:18Z
dc.date.issued: 2021-06
dc.date.submitted: 2021-06-17T20:14:50.020Z
dc.identifier.uri: https://hdl.handle.net/1721.1/139297
dc.description.abstract: In this paper, we present the Autonomous Flight Arcade (AFA), a suite of robust environments for end-to-end control of fixed-wing aircraft and quadcopter drones. These environments are playable by both humans and artificial agents, making them useful for varied tasks including reinforcement learning, imitation learning, and human experiments. Additionally, we show that interpretable policies can be learned through the Neural Circuit Policy architecture on these environments. Finally, we present baselines of both human and AI performance on the Autonomous Flight Arcade environments.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright MIT
dc.rights.uri: http://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Autonomous Flight Arcade: Reinforcement Learning for End-to-End Control of Fixed-Wing Aircraft
dc.type: Thesis
dc.description.degree: M.Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Engineering in Electrical Engineering and Computer Science
