Notice

This is not the latest version of this item. The latest version can be found at: https://dspace.mit.edu/handle/1721.1/134088.2

dc.contributor.author: Everett, Michael
dc.contributor.author: Lutjens, Bjorn
dc.contributor.author: How, Jonathan P
dc.date.accessioned: 2021-10-27T19:58:02Z
dc.date.available: 2021-10-27T19:58:02Z
dc.date.issued: 2021
dc.identifier.uri: https://hdl.handle.net/1721.1/134088
dc.description.abstract: Deep neural network-based systems are now state-of-the-art in many robotics tasks, but their application in safety-critical domains remains dangerous without formal guarantees on network robustness. Small perturbations to sensor inputs (from noise or adversarial examples) are often enough to change network-based decisions, which was recently shown to cause an autonomous vehicle to swerve into another lane. In light of these dangers, numerous algorithms have been developed as defensive mechanisms against these adversarial inputs, some of which provide formal robustness guarantees or certificates. This work leverages research on certified adversarial robustness to develop an online certifiably robust defense for deep reinforcement learning algorithms. The proposed defense computes guaranteed lower bounds on state-action values during execution to identify and choose a robust action under a worst-case deviation in input space due to possible adversaries or noise (a minimal sketch of this action-selection rule follows the record below). Moreover, the resulting policy comes with a certificate of solution quality, even though the true state and optimal action are unknown to the certifier due to the perturbations. The approach is demonstrated on a deep Q-network (DQN) policy and is shown to increase robustness to noise and adversaries in pedestrian collision avoidance scenarios, a classic control task, and Atari Pong. This article extends our prior work with new performance guarantees, extensions to other reinforcement learning algorithms, expanded results aggregated across more scenarios, an extension into scenarios with adversarial behavior, comparisons with a more computationally expensive method, and visualizations that provide intuition about the robustness algorithm.
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.isversionof: 10.1109/TNNLS.2021.3056046
dc.rights: Creative Commons Attribution-Noncommercial-Share Alike
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.source: arXiv
dc.title: Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning
dc.type: Article
dc.relation.journal: IEEE Transactions on Neural Networks and Learning Systems
dc.eprint.version: Original manuscript
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2021-04-30T16:21:24Z
dspace.orderedauthors: Everett, M; Lutjens, B; How, JP
dspace.date.submission: 2021-04-30T16:21:25Z
mit.license: OPEN_ACCESS_POLICY
mit.metadata.status: Authority Work and Publication Information Needed
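
The abstract above describes the defense at a high level: over an l-infinity ball of radius eps around the observed state, compute a guaranteed lower bound on each state-action value, then act greedily with respect to those lower bounds. The sketch below illustrates that action-selection rule, using interval bound propagation (IBP) through a small ReLU Q-network as a simple stand-in for the tighter certified bounds used in the paper; the function names, network shapes, and choice of IBP are illustrative assumptions, not the authors' implementation.

import numpy as np

def ibp_bounds(lower, upper, weights, biases):
    # Propagate elementwise input bounds through a ReLU MLP.
    # Returns sound lower/upper bounds on every output (Q-value).
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos = np.maximum(W, 0.0)   # positive part of the weight matrix
        W_neg = np.minimum(W, 0.0)   # negative part
        new_lower = W_pos @ lower + W_neg @ upper + b
        new_upper = W_pos @ upper + W_neg @ lower + b
        if i < len(weights) - 1:     # ReLU on hidden layers only
            new_lower = np.maximum(new_lower, 0.0)
            new_upper = np.maximum(new_upper, 0.0)
        lower, upper = new_lower, new_upper
    return lower, upper

def robust_action(obs, eps, weights, biases):
    # Certified-lower-bound action selection: choose the action whose
    # worst-case Q-value over the eps-ball around obs is largest.
    q_lower, _ = ibp_bounds(obs - eps, obs + eps, weights, biases)
    return int(np.argmax(q_lower)), q_lower

# Toy usage with a random 2-layer Q-network (4-dim state, 3 actions).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 4)), rng.standard_normal((3, 16))]
biases = [np.zeros(16), np.zeros(3)]
obs = rng.standard_normal(4)
action, q_lb = robust_action(obs, eps=0.1, weights=weights, biases=biases)
print("robust action:", action, "certified Q lower bounds:", q_lb)

With eps = 0 the interval collapses and the rule reduces to the nominal greedy DQN policy; larger eps trades nominal performance for robustness to larger state perturbations.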

