Contrastive Variational Reinforcement Learning for Complex Observations

Authors

Xiao Ma (National University of Singapore)*; Siwei Chen (National University of Singapore); David Hsu (National University of Singapore); Wee Sun Lee (National University of Singapore)

Interactive Session

2020-11-18, 12:30 - 13:00 PST | PheedLoop Session

Abstract

Deep reinforcement learning (DRL) has achieved significant success in various robot tasks: manipulation, navigation, etc. However, complex visual observations in natural environments remain a major challenge. This paper presents Contrastive Variational Reinforcement Learning (CVRL), a model-based method that tackles complex visual observations in DRL. CVRL learns a contrastive variational model by maximizing the mutual information between latent states and observations discriminatively, through contrastive learning. It avoids modeling the complex observation space unnecessarily, as the commonly used generative observation model often does, and is significantly more robust. CVRL achieves performance comparable to state-of-the-art model-based DRL methods on standard MuJoCo tasks. It significantly outperforms them on Natural MuJoCo tasks and a robot box-pushing task with complex observations, e.g., dynamic shadows. The CVRL code is available publicly at https://github.com/Yusufma03/CVRL.
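
For intuition, the discriminative mutual-information objective described in the abstract can be sketched as an InfoNCE-style contrastive bound. The following is a minimal PyTorch sketch, not the authors' implementation (see the repository above for the actual code); the bilinear score `W`, the tensor shapes, and the name `contrastive_mi_bound` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_mi_bound(z: torch.Tensor, obs_emb: torch.Tensor,
                         W: torch.Tensor) -> torch.Tensor:
    """InfoNCE-style lower bound on the mutual information I(z; o).

    z:       (B, d_z) latent states from the variational model
    obs_emb: (B, d_o) encoder embeddings of the matching observations
    W:       (d_z, d_o) bilinear score parameters

    Row i of the score matrix compares latent i against every
    observation in the batch: the diagonal entries are the positive
    pairs, and all off-diagonal entries serve as negatives.
    """
    scores = z @ W @ obs_emb.t()                       # (B, B) pairwise scores
    targets = torch.arange(z.size(0), device=z.device) # positives on diagonal
    # Cross-entropy over batch indices is the InfoNCE loss; its
    # negation is (up to an additive log B) a lower bound on I(z; o).
    return -F.cross_entropy(scores, targets)
```

Maximizing such a bound trains the latent state to pick out its own observation from the batch negatives, which is the discriminative alternative to reconstructing pixels with a generative observation model.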

Video

Reviews and Rebuttal


Conference on Robot Learning 2020