
Learning 3D Dynamic Scene Representations for Robot Manipulation

Paper PDF | Code

Authors

Zhenjia Xu (Columbia University)*; Zhanpeng He (Columbia University); Jiajun Wu (Stanford University); Shuran Song (Columbia University)

Interactive Session

2020-11-17, 11:50 - 12:20 PST | PheedLoop Session

Abstract

3D scene representation for robot manipulation should capture three key object properties: permanency – objects that become occluded over time continue to exist; amodal completeness – objects have 3D occupancy, even if only partial observations are available; spatiotemporal continuity – the movement of each object is continuous over space and time. In this paper, we introduce 3D Dynamic Scene Representation (DSR), a 3D volumetric scene representation that simultaneously discovers, tracks, and reconstructs objects and predicts their dynamics while capturing all three properties. We further propose DSR-Net, which learns to aggregate visual observations over multiple interactions to gradually build and refine DSR. Our model achieves state-of-the-art performance in modeling 3D scene dynamics with DSR on both simulated and real data. Combined with model predictive control, DSR-Net enables accurate planning in downstream robotic manipulation tasks such as planar pushing. Code and data are available at dsr-net.cs.columbia.edu.
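
The abstract's core idea – aggregating per-interaction observations into a persistent 3D volumetric scene state – can be illustrated with a small recurrent update over a voxel grid. The PyTorch sketch below is a hypothetical illustration only, not the authors' DSR-Net implementation; the module name, channel counts, and the use of a convolutional GRU-style cell are all assumptions made for exposition.

```python
# Hypothetical sketch: recurrent refinement of a persistent scene volume,
# loosely inspired by the aggregation idea in the abstract (NOT DSR-Net itself).
import torch
import torch.nn as nn


class VolumetricGRUCell(nn.Module):
    """Conv-GRU over a 3D voxel grid: the hidden state is a persistent
    scene volume that is refined after each new observation."""

    def __init__(self, obs_channels: int, state_channels: int):
        super().__init__()
        in_ch = obs_channels + state_channels
        self.gates = nn.Conv3d(in_ch, 2 * state_channels, kernel_size=3, padding=1)
        self.candidate = nn.Conv3d(in_ch, state_channels, kernel_size=3, padding=1)

    def forward(self, obs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # obs:   (B, obs_channels,   D, H, W) -- e.g. a volume from one observation
        # state: (B, state_channels, D, H, W) -- aggregated scene representation
        z, r = torch.sigmoid(self.gates(torch.cat([obs, state], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.candidate(torch.cat([obs, r * state], dim=1)))
        return (1 - z) * state + z * h_tilde  # refined scene volume


if __name__ == "__main__":
    cell = VolumetricGRUCell(obs_channels=1, state_channels=8)
    state = torch.zeros(1, 8, 32, 32, 32)        # empty scene before any interaction
    for _ in range(3):                           # aggregate over three interactions
        obs = torch.randn(1, 1, 32, 32, 32)      # placeholder per-step observation
        state = cell(obs, state)
    print(state.shape)                           # torch.Size([1, 8, 32, 32, 32])
```

The loop mimics the multi-interaction aggregation the abstract describes: each new observation updates, rather than replaces, the scene volume, which is what allows occluded objects to persist in the state. In the paper's pipeline, such an aggregated representation is what a model predictive controller would query when planning actions like planar pushes; consult the paper and the released code at dsr-net.cs.columbia.edu for the actual architecture.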

Video

Reviews and Rebuttal


Conference on Robot Learning 2020