dc.contributor.author	Nguyen, Quoc Phong
dc.contributor.author	Low, Bryan Kian Hsiang
dc.contributor.author	Jaillet, Patrick
dc.date.accessioned	2018-01-12T19:51:36Z
dc.date.available	2018-01-12T19:51:36Z
dc.date.issued	2015-12
dc.identifier.isbn	9781510825024
dc.identifier.uri	http://hdl.handle.net/1721.1/113094
dc.description.abstract	Existing inverse reinforcement learning (IRL) algorithms have assumed each expert’s demonstrated trajectory to be produced by only a single reward function. This paper presents a novel generalization of the IRL problem that allows each trajectory to be generated by multiple locally consistent reward functions, hence catering to more realistic and complex experts’ behaviors. Solving our generalized IRL problem thus involves not only learning these reward functions but also the stochastic transitions between them at any state (including unvisited states). By representing our IRL problem with a probabilistic graphical model, an expectation-maximization (EM) algorithm can be devised to iteratively learn the different reward functions and the stochastic transitions between them in order to jointly improve the likelihood of the expert’s demonstrated trajectories. As a result, the most likely partition of a trajectory into segments that are generated from different locally consistent reward functions selected by EM can be derived. Empirical evaluation on synthetic and real-world datasets shows that our IRL algorithm outperforms the state-of-the-art EM clustering with maximum likelihood IRL, which is, interestingly, a reduced variant of our approach.	en_US
dc.description.sponsorship	Advances in Neural Information Processing Systems 28 (NIPS 2015)	en_US
dc.language.iso	en_US
dc.publisher	Neural Information Processing Systems Foundation	en_US
dc.relation.isversionof	https://papers.nips.cc/paper/5882-inverse-reinforcement-learning-with-locally-consistent-reward-functions	en_US
dc.rights	Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.	en_US
dc.source	Neural Information Processing Systems (NIPS)	en_US
dc.title	Inverse reinforcement learning with locally consistent reward functions	en_US
dc.type	Article	en_US
dc.identifier.citation	Nguyen, Quoc Phong, Bryan Kian Hsiang Low, and Patrick Jaillet. "Inverse Reinforcement Learning with Locally Consistent Reward Functions." Advances in Neural Information Processing Systems 28 (NIPS 2015), 7-12 December 2015, Montreal, Canada, NIPS, 2015.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science	en_US
dc.contributor.mitauthor	Jaillet, Patrick
dc.relation.journal	Advances in Neural Information Processing Systems 28 (NIPS 2015)	en_US
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/ConferencePaper	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dspace.orderedauthors	Nguyen, Quoc Phong; Low, Bryan Kian Hsiang; Jaillet, Patrick	en_US
dspace.embargo.terms	N	en_US
dc.identifier.orcid	https://orcid.org/0000-0002-8585-6566
mit.license	PUBLISHER_POLICY	en_US
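
The abstract above describes an EM scheme that alternates between inferring which latent reward function generated each step of a demonstration (E-step) and re-fitting the reward functions and the switching probabilities between them (M-step). The sketch below is only a minimal illustration of that loop structure under strong simplifying assumptions, not the authors' algorithm: the expert is modelled myopically with a Boltzmann policy over immediate linear rewards, and a single global switching matrix stands in for the paper's state-dependent transitions. All names (em_irl, forward_backward, phi, and so on) are hypothetical.

```python
# Minimal, hypothetical sketch of an EM loop for IRL with multiple latent reward
# functions. Simplifications: myopic Boltzmann expert, linear rewards over a
# feature tensor phi of shape (n_states, n_actions, n_features), and one global
# K x K switching matrix instead of state-dependent transitions.
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def action_likelihoods(traj, weights, phi):
    """p(a_t | s_t, k) for each step t and latent reward function k,
    using a Boltzmann policy over immediate rewards r_k(s, a) = w_k . phi[s, a]."""
    scores = np.einsum('saf,kf->ksa', phi, weights)      # (K, S, A)
    policies = softmax(scores, axis=2)                   # pi_k(a | s)
    states, actions = traj[:, 0], traj[:, 1]
    return policies[:, states, actions].T                # (T, K)

def forward_backward(lik, switch, prior):
    """Scaled forward-backward over the hidden reward-function sequence:
    returns marginals gamma (T, K) and pairwise posteriors xi (T-1, K, K)."""
    T, K = lik.shape
    alpha, beta = np.zeros((T, K)), np.zeros((T, K))
    alpha[0] = prior * lik[0]; alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ switch) * lik[t]
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = switch @ (lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = alpha[:-1, :, None] * switch[None] * (lik[1:] * beta[1:])[:, None, :]
    xi /= xi.sum(axis=(1, 2), keepdims=True)
    return gamma, xi

def em_irl(trajs, phi, K, n_iters=30, lr=0.5, seed=0):
    """trajs: list of int arrays of shape (T_i, 2) holding (state, action) pairs.
    Alternates E and M steps to improve the likelihood of the demonstrations."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(scale=0.1, size=(K, phi.shape[2]))
    switch = np.full((K, K), 1.0 / K)
    prior = np.full(K, 1.0 / K)
    for _ in range(n_iters):
        # E-step: posterior over which reward function generated each step.
        gammas, xis = [], []
        for traj in trajs:
            g, x = forward_backward(action_likelihoods(traj, weights, phi), switch, prior)
            gammas.append(g); xis.append(x)
        # M-step: switching probabilities from expected transition counts.
        counts = sum(x.sum(axis=0) for x in xis) + 1e-8
        switch = counts / counts.sum(axis=1, keepdims=True)
        prior = sum(g[0] for g in gammas) / len(trajs)
        # M-step: one gamma-weighted gradient step per reward function on the
        # log-likelihood of the observed actions under its Boltzmann policy.
        for k in range(K):
            grad = np.zeros(phi.shape[2])
            for traj, g in zip(trajs, gammas):
                s, a = traj[:, 0], traj[:, 1]
                pi = softmax(np.einsum('saf,f->sa', phi, weights[k]), axis=1)
                expected = np.einsum('ta,taf->tf', pi[s], phi[s])
                grad += (g[:, k, None] * (phi[s, a] - expected)).sum(axis=0)
            weights[k] += lr * grad / sum(len(t) for t in trajs)
    # Most likely partition of each trajectory into locally consistent segments.
    segments = [np.argmax(g, axis=1) for g in gammas]
    return weights, switch, segments
```

Taking the argmax of the per-step marginals gamma yields a segmentation of each trajectory into parts attributed to different reward functions, mirroring the partition described in the abstract; the real method additionally learns the transition probabilities at every state, including unvisited ones.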

