Keypoints into the Future: Self-Supervised Correspondence in Model-Based Reinforcement Learning
Authors
Lucas Manuelli (Massachusetts Institute of Technology)*; Yunzhu Li (MIT); Pete Florence (Google); Russ Tedrake (MIT)
Interactive Session
2020-11-16, 12:30 - 13:00 PST | PheedLoop Session
Abstract
Predictive models have been at the core of many robotic systems, from quadrotors to walking robots. However, it has been challenging to develop and apply such models to practical robotic manipulation due to high-dimensional sensory observations such as images. Previous approaches to learning models in the context of robotic manipulation have either learned whole-image dynamics or used autoencoders to learn dynamics in a low-dimensional latent state. In this work, we introduce model-based prediction with self-supervised visual correspondence learning, and show not only that this is possible, but that these predictive models deliver compelling performance improvements over alternative methods for vision-based RL that rely on autoencoder-style vision training. Through simulation experiments, we demonstrate that our models generalize with better precision, particularly in 3D scenes, in scenes involving occlusion, and across object categories. Additionally, we validate through hardware experiments that our method transfers effectively to the real world. https://sites.google.com/view/keypointsintothefuture.
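For a rough sense of the keypoint-dynamics idea described in the abstract, the sketch below shows a single-step forward model that predicts future keypoint locations from current keypoints and an action, with the keypoints assumed to come from a frozen self-supervised correspondence model. The architecture, dimensions, residual parameterization, and training loop are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' exact architecture): a learned dynamics
# model operating on keypoints produced by a self-supervised correspondence
# model (e.g., dense descriptors). All names, dimensions, and the MLP
# structure are assumptions for illustration.
import torch
import torch.nn as nn

class KeypointDynamicsModel(nn.Module):
    """Predicts the next set of keypoint locations from the current
    keypoints and the applied action (a single-step forward model)."""

    def __init__(self, num_keypoints=16, keypoint_dim=3, action_dim=4, hidden_dim=256):
        super().__init__()
        state_dim = num_keypoints * keypoint_dim
        self.num_keypoints = num_keypoints
        self.keypoint_dim = keypoint_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, state_dim),
        )

    def forward(self, keypoints, action):
        # keypoints: (batch, num_keypoints, keypoint_dim); action: (batch, action_dim)
        flat = keypoints.flatten(start_dim=1)
        delta = self.net(torch.cat([flat, action], dim=-1))
        # Predict a residual on top of the current keypoints.
        return keypoints + delta.view(-1, self.num_keypoints, self.keypoint_dim)


# One training step on one-step transitions (kp_t, a_t, kp_t1); in practice the
# keypoints would be extracted from images by the correspondence model.
model = KeypointDynamicsModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

kp_t = torch.randn(32, 16, 3)   # placeholder batch of current keypoints
a_t = torch.randn(32, 4)        # placeholder batch of actions
kp_t1 = torch.randn(32, 16, 3)  # placeholder batch of next keypoints

pred = model(kp_t, a_t)
loss = nn.functional.mse_loss(pred, kp_t1)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Such a forward model can then be rolled out over several steps and used for planning or control in keypoint space, which is the general setting the abstract contrasts with whole-image or autoencoder-latent dynamics.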