
Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments

Paper PDF Code

Authors

Tianchen Ji (University of Illinois at Urbana-Champaign)*; Sri Theja Vuppala (University of Illinois at Urbana-Champaign); Girish Chowdhary (University of Illinois at Urbana-Champaign); Katherine Driggs-Campbell (University of Illinois at Urbana-Champaign)

Interactive Session

2020-11-18, 11:10 - 11:40 PST | PheedLoop Session

Abstract

To achieve high levels of autonomy, modern robots require the ability to detect and recover from anomalies and failures with minimal human supervision. Multi-modal sensor signals could provide more information for such anomaly detection tasks; however, the fusion of high-dimensional and heterogeneous sensor modalities remains a challenging problem. We propose a deep neural network, the supervised variational autoencoder (SVAE), for failure identification in unstructured and uncertain environments. Our model leverages the representational power of the VAE to extract robust features from high-dimensional inputs for supervised learning tasks. The training objective unifies the generative and discriminative models, making learning a one-stage procedure. Our experiments on real field robot data demonstrate superior failure identification performance compared to baseline methods, and show that our model learns interpretable representations.
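The unified one-stage objective described in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the authors' implementation: linear encoder/decoder/classifier layers stand in for the paper's networks, and the function `svae_loss`, its parameters, and the dimensions are all invented for illustration. It shows the key idea of combining the VAE evidence lower bound (reconstruction + KL terms) with a supervised cross-entropy on labels predicted from the latent code, so a single loss trains both the generative and discriminative parts.

```python
import numpy as np

rng = np.random.default_rng(0)

def svae_loss(x, y, params):
    """Sketch of a one-stage SVAE-style objective (hypothetical):
    ELBO terms plus a supervised cross-entropy on the latent code."""
    W_mu, W_logvar, W_dec, W_cls = params
    # Encoder: diagonal-Gaussian posterior q(z|x)
    mu = x @ W_mu
    logvar = x @ W_logvar
    # Reparameterization trick: z = mu + sigma * eps
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    # Decoder: Gaussian likelihood p(x|z) -> squared reconstruction error
    x_hat = z @ W_dec
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    # KL(q(z|x) || N(0, I)) in closed form
    kl = -0.5 * np.mean(np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1))
    # Discriminative head: classify failures from the latent mean
    logits = mu @ W_cls
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -np.mean(log_probs[np.arange(len(y)), y])
    # Single unified training objective (generative + discriminative)
    return recon + kl + ce

# Toy usage: 8 samples, 6-dim input, 3-dim latent, 2 failure classes
d_in, d_z, n_cls = 6, 3, 2
x = rng.standard_normal((8, d_in))
y = rng.integers(0, n_cls, size=8)
params = (rng.standard_normal((d_in, d_z)) * 0.1,
          rng.standard_normal((d_in, d_z)) * 0.1,
          rng.standard_normal((d_z, d_in)) * 0.1,
          rng.standard_normal((d_z, n_cls)) * 0.1)
loss = svae_loss(x, y, params)
```

In practice all three terms would be minimized jointly by gradient descent over the network weights, which is what makes the procedure one-stage rather than pretraining the VAE and fitting a classifier afterward.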

Video

Reviews and Rebuttal

Reviews & Rebuttal


Conference on Robot Learning 2020