dc.contributor.author | Rodriguez-Ramos, Alejandro | |
dc.contributor.author | Alvarez-Fernandez, Adrian | |
dc.contributor.author | Bavle, Hriday | |
dc.contributor.author | Campoy, Pascual | |
dc.contributor.author | How, Jonathan P. | |
dc.date.accessioned | 2020-05-27T18:39:41Z | |
dc.date.available | 2020-05-27T18:39:41Z | |
dc.date.issued | 2019-11-04 | |
dc.date.submitted | 2019-08 | |
dc.identifier.issn | 1424-8220 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/125512 | |
dc.description.abstract | Deep- and reinforcement-learning techniques increasingly require large sets of real data to achieve stable convergence and generalization in image-recognition, object-detection, and motion-control tasks. On this subject, the research community lacks robust approaches for overcoming the unavailability of extensive real-world data by means of realistic synthetic information and domain-adaptation techniques. In this work, synthetic-learning strategies are used for the vision-based autonomous following of a noncooperative multirotor. The complete maneuver was learned with synthetic images and high-dimensional, low-level continuous robot states, using deep- and reinforcement-learning techniques for object detection and motion control, respectively. A novel motion-control strategy for object following is introduced in which the camera gimbal movement is coupled with the multirotor motion during following. The results confirm that the presented framework can be used to deploy a vision-based task in real flight using synthetic data. It was extensively validated in both simulated and real-flight scenarios, providing satisfactory results (following a multirotor at up to 1.3 m/s in simulation and 0.3 m/s in real flights). Keywords: multirotor; UAV; following; synthetic learning; reinforcement learning; deep learning | en_US |
dc.publisher | Multidisciplinary Digital Publishing Institute | en_US |
dc.relation.isversionof | 10.3390/s19214794 | en_US |
dc.rights | Creative Commons Attribution | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_US |
dc.source | Multidisciplinary Digital Publishing Institute | en_US |
dc.title | Vision-based multirotor following using synthetic learning techniques | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Rodriguez-Ramos, Alejandro, Adrian Alvarez-Fernandez, Hriday Bavle, Pascual Campoy, and Jonathan P. How. "Vision-based multirotor following using synthetic learning techniques." Sensors 19, no. 21 (November 2019): 4794. doi:10.3390/s19214794. © 2019 Author(s) | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Aerospace Controls Laboratory | en_US |
dc.relation.journal | Sensors | en_US |
dc.eprint.version | Final published version | en_US |
dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
dc.date.updated | 2020-03-02T12:58:19Z | |
dspace.date.submission | 2020-03-02T12:58:19Z | |
mit.journal.volume | 19 | en_US |
mit.journal.issue | 21 | en_US |
mit.license | PUBLISHER_CC | |
mit.metadata.status | Complete | |