Predicting optimal value functions by interpolating reward functions in scalarized multi-objective reinforcement learning
Author(s)
Kusari, Arpan; How, Jonathan P
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Terms of use
Abstract
© 2020 IEEE. A common approach for defining a reward function in multi-objective reinforcement learning (MORL) problems is a weighted sum of the multiple objectives. The weights are then treated as design parameters that depend on the expertise (and preference) of the person performing the learning, with the typical result that a new solution must be computed for any change in these settings. This paper investigates the relationship between the reward function and the optimal value function for MORL; specifically, it addresses the question of how to approximate the optimal value function well beyond the set of weights for which the optimization problem was actually solved, thereby avoiding the need to recompute it for any particular choice. We prove that the value function transforms smoothly under a transformation of the reward-function weights (and hence admits a smooth interpolation in policy space). A Gaussian process is used to obtain a smooth interpolation of the optimal value function over the reward-function weights for three well-known examples: Gridworld, Objectworld, and Pendulum. The results show that the interpolation provides robust value estimates for sample states and actions in both discrete- and continuous-domain problems. Significant advantages arise from utilizing this interpolation technique in the domain of autonomous vehicles: easy, instant adaptation of user preferences while driving, and true randomization of obstacle-vehicle behavior preferences during training.
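The core idea can be illustrated with a minimal sketch (not the authors' code): solve for the optimal value at a sparse set of scalarization weights, then fit a Gaussian process over the weights to predict values at unseen weights. Here the two-objective scalarization, the stand-in value function, the RBF lengthscale, and the weight grid are all illustrative assumptions; a hand-rolled noise-free GP posterior mean stands in for a full GP library.

```python
import numpy as np

# Hypothetical toy setting: two objectives scalarized as
# r_w = w * r1 + (1 - w) * r2, so the optimal value at a fixed state
# varies smoothly with the weight w. This function is an illustrative
# stand-in for V*_w(s); in practice each sample would come from
# solving the RL problem at that weight.
def optimal_value(w):
    return 2.0 * w + np.sin(3.0 * w)

def rbf(a, b, lengthscale=0.3):
    # Squared-exponential kernel between two 1-D weight arrays.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# "Solve" the problem only at a sparse grid of weights.
w_train = np.linspace(0.0, 1.0, 8)
v_train = optimal_value(w_train)

# GP posterior mean at unseen weights (noise-free interpolation);
# the jitter term keeps the kernel matrix well conditioned.
w_test = np.array([0.25, 0.55, 0.85])
K = rbf(w_train, w_train) + 1e-8 * np.eye(len(w_train))
k_star = rbf(w_test, w_train)
v_pred = k_star @ np.linalg.solve(K, v_train)
```

Because the value function varies smoothly with the weights, the GP mean interpolates it closely between the solved weights, which is what lets preferences be changed without re-solving the underlying RL problem.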
Date issued
2020-05
Department
Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Journal
Proceedings - IEEE International Conference on Robotics and Automation
Publisher
IEEE
Citation
Kusari, Arpan and How, Jonathan P. 2020. "Predicting optimal value functions by interpolating reward functions in scalarized multi-objective reinforcement learning." Proceedings - IEEE International Conference on Robotics and Automation.
Version: Author's final manuscript