
Action-based Representation Learning for Autonomous Driving

Paper PDF Code

Authors

Yi Xiao (CVC & UAB)*; Felipe Codevilla (MILA); Christopher Pal (École Polytechnique de Montréal); Antonio Lopez (CVC & UAB)

Interactive Session

2020-11-17, 12:30 - 13:00 PST | PheedLoop Session

Abstract

Human drivers produce a vast amount of data which could, in principle, be used to improve autonomous driving systems. Unfortunately, seemingly straightforward approaches for creating end-to-end driving models that map sensor data directly into driving actions are problematic in terms of interpretability, and typically have significant difficulty dealing with spurious correlations. Alternatively, we propose to use this kind of action-based driving data for learning representations. Our experiments show that an affordance-based driving model pre-trained with this approach can leverage a relatively small amount of weakly annotated imagery and outperform pure end-to-end driving models, while being more interpretable. Further, we demonstrate that this strategy outperforms previous methods based on learning inverse dynamics models as well as other methods based on heavy human supervision (ImageNet pre-training).
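To make the two-stage idea in the abstract concrete, below is a minimal sketch (not the authors' released code) of action-based representation learning: an image encoder is first pre-trained to predict driving actions from plentiful expert data, and its representation is then reused to fit an affordance head on a much smaller weakly annotated set. The module names, affordance choices, input sizes, and hyper-parameters are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Small CNN mapping camera frames to a feature vector (illustrative)."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class ActionHead(nn.Module):
    """Predicts (steer, throttle, brake): the action-based pre-training task."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 3)

    def forward(self, z):
        return self.fc(z)

class AffordanceHead(nn.Module):
    """Predicts interpretable affordances (e.g. hazard stop, red light)."""
    def __init__(self, feat_dim=512, n_affordances=3):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_affordances)

    def forward(self, z):
        return self.fc(z)

# Stage 1: pre-train encoder + action head on abundant expert driving data.
encoder, action_head = Encoder(), ActionHead()
opt = torch.optim.Adam(list(encoder.parameters()) + list(action_head.parameters()))
images = torch.randn(8, 3, 88, 200)    # stand-in for camera frames
actions = torch.randn(8, 3)            # stand-in for recorded expert actions
loss = nn.functional.l1_loss(action_head(encoder(images)), actions)
opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: reuse the pre-trained encoder and fit an affordance head on a
# small, weakly annotated set (frozen-encoder variant shown here).
aff_head = AffordanceHead()
aff_opt = torch.optim.Adam(aff_head.parameters())
labels = torch.randint(0, 2, (8, 3)).float()   # stand-in binary affordance labels
with torch.no_grad():
    feats = encoder(images)
aff_loss = nn.functional.binary_cross_entropy_with_logits(aff_head(feats), labels)
aff_opt.zero_grad(); aff_loss.backward(); aff_opt.step()
```

In this sketch the affordance predictions, rather than raw actions, would drive a downstream controller, which is what makes the resulting model more interpretable than a pure end-to-end policy; whether the encoder is frozen or fine-tuned in stage 2 is a design choice not fixed by the abstract.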

Video

Reviews and Rebuttal



Conference on Robot Learning 2020