
dc.contributor.author: Solar Lezama, Armando
dc.contributor.author: Pu, Yewen
dc.contributor.author: Bastani, Osbert
dc.date.accessioned: 2021-11-09T15:50:25Z
dc.date.available: 2021-11-09T15:50:25Z
dc.date.issued: 2018
dc.identifier.uri: https://hdl.handle.net/1721.1/137934
dc.description.abstract [en_US]: © 2018 Curran Associates Inc. All rights reserved. While deep reinforcement learning has successfully solved many challenging control tasks, its real-world applicability has been limited by the inability to ensure the safety of learned policies. We propose an approach to verifiable reinforcement learning by training decision tree policies, which can represent complex policies (since they are nonparametric), yet can be efficiently verified using existing techniques (since they are highly structured). The challenge is that decision tree policies are difficult to train. We propose VIPER, an algorithm that combines ideas from model compression and imitation learning to learn decision tree policies guided by a DNN policy (called the oracle) and its Q-function, and show that it substantially outperforms two baselines. We use VIPER to (i) learn a provably robust decision tree policy for a variant of Atari Pong with a symbolic state space, (ii) learn a decision tree policy for a toy game based on Pong that provably never loses, and (iii) learn a provably stable decision tree policy for cart-pole. In each case, the decision tree policy achieves performance equal to that of the original DNN policy.
dc.language.iso: en
dc.relation.isversionof [en_US]: https://papers.nips.cc/paper/7516-verifiable-reinforcement-learning-via-policy-extraction
dc.rights [en_US]: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.
dc.source [en_US]: Neural Information Processing Systems (NIPS)
dc.title [en_US]: Verifiable reinforcement learning via policy extraction
dc.type [en_US]: Article
dc.identifier.citation: Solar Lezama, Armando, Pu, Yewen and Bastani, Osbert. 2018. "Verifiable reinforcement learning via policy extraction."
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.eprint.version [en_US]: Final published version
dc.type.uri [en_US]: http://purl.org/eprint/type/ConferencePaper
eprint.status [en_US]: http://purl.org/eprint/status/NonPeerReviewed
dc.date.updated: 2019-07-10T13:38:05Z
dspace.date.submission: 2019-07-10T13:38:06Z
mit.license: PUBLISHER_POLICY
mit.metadata.status [en_US]: Authority Work and Publication Information Needed
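
The abstract above describes VIPER as a DAgger-style imitation-learning loop that distills the DNN oracle into a decision tree, using the oracle's Q-function to reweight training states. A minimal sketch of that loop, assuming a classic gym-style `env` (reset() returns a state; step() returns state, reward, done, info) and an `oracle` object exposing predict() and q_values(); these names are illustrative assumptions, not the authors' released code:

```python
# Minimal sketch of a VIPER-style distillation loop (assumptions noted above;
# this is not the paper's implementation).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def evaluate(env, policy, episodes=3):
    """Mean episode reward of an extracted tree policy."""
    total = 0.0
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            state, r, done, _ = env.step(policy.predict([state])[0])
            total += r
    return total / episodes

def viper_sketch(env, oracle, n_iters=10, rollouts=5, max_depth=8):
    dataset = []          # aggregated (state, oracle_label, weight) tuples
    policy = None         # current student; the oracle drives the first rollouts
    best, best_reward = None, -np.inf
    for _ in range(n_iters):
        for _ in range(rollouts):
            state, done = env.reset(), False
            while not done:
                q = oracle.q_values(state)
                # Q-DAgger-style weight: emphasize states where a bad action
                # is costly (gap between best and worst Q value).
                dataset.append((state, int(np.argmax(q)), np.max(q) - np.min(q)))
                a = oracle.predict(state) if policy is None \
                    else policy.predict([state])[0]
                state, _, done, _ = env.step(a)
        # Retrain the tree on the full aggregated, Q-weighted dataset.
        X, y, w = map(np.array, zip(*dataset))
        policy = DecisionTreeClassifier(max_depth=max_depth).fit(
            X, y, sample_weight=w)
        reward = evaluate(env, policy)
        if reward > best_reward:   # keep the best tree across iterations
            best, best_reward = policy, reward
    return best
```

The weight max(Q) − min(Q) focuses the tree on states where deviating from the oracle is most costly, and returning the best tree over all iterations mirrors the paper's policy-selection step.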

