Nicholas Vadivelu (Uber ATG)*; Mengye Ren (Uber ATG, University of Toronto); James Tu (Uber ATG); Jingkang Wang (Uber ATG, University of Toronto); Raquel Urtasun (Uber ATG)
2020-11-18, 12:30 - 13:00 PST | PheedLoop Session
Learned communication makes multi-agent systems more effective by aggregating distributed information. However, it also exposes individual agents to erroneous messages from others. In this paper, we study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner. Although solving the task jointly yields a large performance boost, the gain quickly diminishes in the presence of pose noise, since the communication relies on spatial transformations between the agents' coordinate frames. Hence, we propose a novel neural reasoning framework that learns to communicate, to estimate pose errors, and finally, to reach a consensus about those errors. Experiments confirm that our proposed framework significantly improves the robustness of multi-agent self-driving perception systems under realistic and severe localization noise.
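To illustrate why the communication is fragile, consider that each receiver must warp incoming messages into its own coordinate frame using the sender's estimated relative pose. The sketch below (a hypothetical toy example, not the paper's implementation) applies a 2D rigid transform with and without a modest pose error and measures the resulting spatial misalignment; the specific noise magnitudes are assumptions for illustration:

```python
import numpy as np

def se2_transform(points, x, y, theta):
    """Apply a 2D rigid transform: rotate by theta, then translate by (x, y)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ R.T + np.array([x, y])

# A detected object 30 m ahead in the sender's frame (hypothetical value).
obj = np.array([[30.0, 0.0]])

# True relative pose between sender and receiver (x, y, heading).
true_pose = (5.0, 2.0, 0.1)
# Noisy pose estimate: ~0.5 m translation and 2 deg heading error (assumed).
noisy_pose = (5.0 + 0.4, 2.0 - 0.3, 0.1 + np.deg2rad(2.0))

clean = se2_transform(obj, *true_pose)
noisy = se2_transform(obj, *noisy_pose)
err = np.linalg.norm(clean - noisy)
print(f"misalignment of a 30 m object under a small pose error: {err:.2f} m")
```

Even a 2-degree heading error displaces a distant object by the better part of a meter after warping, which is why aggregating misaligned messages degrades detection and why the proposed framework must estimate and reach consensus on these errors.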