
Learning Trajectories for Visual-Inertial System Calibration via Model-based Heuristic Deep Reinforcement Learning

Paper PDF Code

Authors

Le Chen (ETH Zurich)*; Yunke Ao (ETH Zurich); Florian Tschopp (ETH Zurich); Andrei Cramariuc (ETH Zurich); Michel Breyer (ETH); Jen Jen Chung (ETH Zurich); Roland Siegwart (ETH Zürich, Autonomous Systems Lab); Cesar Cadena (ETH Zurich)

Interactive Session

2020-11-16, 12:30 - 13:00 PST | PheedLoop Session

Abstract

Visual-inertial systems rely on precise calibrations of both camera intrinsics and inter-sensor extrinsics, which typically require manually performing complex motions in front of a calibration target. In this work we present a novel approach to obtain favorable trajectories for visual-inertial system calibration, using model-based deep reinforcement learning. Our key contribution is to model the calibration process as a Markov decision process and then use model-based deep reinforcement learning with particle swarm optimization to establish a sequence of calibration trajectories to be performed by a robot arm. Our experiments show that while maintaining similar or shorter path lengths, the trajectories generated by our learned policy result in lower calibration errors compared to random or handcrafted trajectories. The code is publicly available.
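The abstract's pipeline can be read as: learn a model that predicts how much a candidate calibration trajectory will reduce the calibration error from the current state, then use particle swarm optimization (PSO) to pick the next trajectory segment for the robot arm to execute. The following is a minimal illustrative sketch of that idea in Python, not the authors' implementation (see the linked code for that); the names `predict_error_reduction`, the 6-D trajectory parameterization, and the placeholder linear "model" are assumptions made purely for illustration.

```python
import numpy as np

def predict_error_reduction(state, action, model):
    """Stand-in for a learned model: given the current calibration state and a
    candidate trajectory parameter vector, predict the expected drop in
    calibration error (a placeholder linear model is used here)."""
    return float(model @ np.concatenate([state, action]))

def pso_select_action(state, model, dim=6, n_particles=30, iters=50,
                      w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Select the trajectory parameters maximizing the predicted error
    reduction with a standard particle swarm optimizer."""
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    best_pos = pos.copy()
    best_val = np.array([predict_error_reduction(state, p, model) for p in pos])
    g_idx = int(np.argmax(best_val))
    g_pos, g_val = best_pos[g_idx].copy(), best_val[g_idx]

    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        # Standard PSO velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g_pos - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([predict_error_reduction(state, p, model) for p in pos])
        improved = vals > best_val
        best_pos[improved] = pos[improved]
        best_val[improved] = vals[improved]
        if best_val.max() > g_val:
            g_idx = int(np.argmax(best_val))
            g_pos, g_val = best_pos[g_idx].copy(), best_val[g_idx]
    return g_pos

# Hypothetical usage: a 6-D summary of the current calibration state and a
# random vector standing in for trained model weights.
state = np.random.rand(6)
model = np.random.rand(12)
next_trajectory_params = pso_select_action(state, model)
print(next_trajectory_params)
```

In the paper's setting, the selected parameters would define the next trajectory executed by the robot arm in front of the calibration target, after which the calibration state is updated and the process repeats.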

Video

Reviews and Rebuttal


Conference on Robot Learning 2020