dc.contributor.author | Amini, Alexander A | |
dc.contributor.author | Gilitschenski, Igor | |
dc.contributor.author | Phillips, Jacob | |
dc.contributor.author | Moseyko, Julia | |
dc.contributor.author | Banerjee, Rohan | |
dc.contributor.author | Karaman, Sertac | |
dc.contributor.author | Rus, Daniela L | |
dc.date.accessioned | 2021-04-12T19:10:37Z | |
dc.date.available | 2021-04-12T19:10:37Z | |
dc.date.issued | 2020-01 | |
dc.identifier.issn | 2377-3766 | |
dc.identifier.issn | 2377-3774 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/130456 | |
dc.description.abstract | In this work, we present a data-driven simulation and training engine capable of learning end-to-end autonomous vehicle control policies using only sparse rewards. By leveraging real, human-collected trajectories through an environment, we render novel training data that allows virtual agents to drive along a continuum of new local trajectories consistent with the road appearance and semantics, each with a different view of the scene. We demonstrate the ability of policies learned within our simulator to generalize to and navigate in previously unseen real-world roads, without access to any human control labels during training. Our results validate the learned policy onboard a full-scale autonomous vehicle, including in previously un-encountered scenarios, such as new roads and novel, complex, near-crash situations. Our methods are scalable, leverage reinforcement learning, and apply broadly to situations requiring effective perception and robust operation in the physical world. | en_US |
dc.language.iso | en | |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
dc.relation.isversionof | http://dx.doi.org/10.1109/lra.2020.2966414 | en_US |
dc.rights | Creative Commons Attribution-Noncommercial-Share Alike | en_US |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-sa/4.0/ | en_US |
dc.source | MIT web domain | en_US |
dc.title | Learning Robust Control Policies for End-to-End Autonomous Driving From Data-Driven Simulation | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Amini, Alexander et al. "Learning Robust Control Policies for End-to-End Autonomous Driving From Data-Driven Simulation." IEEE Robotics and Automation Letters 5, 2 (April 2020): 1143 - 1150. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Laboratory for Information and Decision Systems | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.relation.journal | IEEE Robotics and Automation Letters | en_US |
dc.eprint.version | Author's final manuscript | en_US |
dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
dc.date.updated | 2021-04-07T12:14:16Z | |
dspace.orderedauthors | Amini, A; Gilitschenski, I; Phillips, J; Moseyko, J; Banerjee, R; Karaman, S; Rus, D | en_US |
dspace.date.submission | 2021-04-07T12:14:18Z | |
mit.journal.volume | 5 | en_US |
mit.journal.issue | 2 | en_US |
mit.license | OPEN_ACCESS_POLICY | |
mit.metadata.status | Authority Work and Publication Information Needed | |