Monday
Table of contents
- 3D-OES: Viewpoint-Invariant Object-Factorized Environment Simulators
- Augmenting GAIL with BC for sample efficient imitation learning
- Chaining Behaviors from Data with Model-Free Reinforcement Learning
- CoT-AMFlow: Adaptive Modulation Network with Co-Teaching Strategy for Unsupervised Optical Flow Estimation
- ContactNets: Learning Discontinuous Contact Dynamics with Smooth, Implicit Representations
- Deep Latent Competition: Learning to Race Using Visual Control Policies in Latent Space
- Deep Phase Correlation for End-to-End Heterogeneous Sensor Measurements Matching
- Deep Reactive Planning in Dynamic Environments
- Differentiable Logic Layer for Rule Guided Trajectory Prediction
- Diverse Plausible Shape Completions from Ambiguous Depth Images
- Few-shot Object Grounding and Mapping for Natural Language Robot Instruction Following
- Generation of Realistic Images for Learning in Simulation using FeatureGAN
- Generative adversarial training of product of policies for robust and adaptive movement primitives
- Harnessing Distribution Ratio Estimators for Learning Agents with Quality and Diversity
- High Acceleration Reinforcement Learning for Real-World Juggling with Binary Rewards
- IV-SLAM: Introspective Vision for Simultaneous Localization and Mapping
- Keypoints into the Future: Self-Supervised Correspondence in Model-Based Reinforcement Learning
- Learning Object Manipulation Skills via Approximate State Estimation from Real Videos
- Learning Predictive Models for Ergonomic Control of Prosthetic Devices
- Learning Trajectories for Visual-Inertial System Calibration via Model-based Heuristic Deep Reinforcement Learning
- Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion
- Learning a Decision Module by Imitating Driver’s Control Behaviors
- Learning a natural-language to LTL executable semantic parser for grounded robotics
- Learning an Expert Skill-Space for Replanning Dynamic Quadruped Locomotion over Obstacles
- Learning to Compose Hierarchical Object-Centric Controllers for Robotic Manipulation
- Map-Adaptive Goal-Based Trajectory Prediction
- Modeling Long-horizon Tasks as Sequential Interaction Landscapes
- Multiagent Rollout and Policy Iteration for POMDP with Application to Multi-Robot Repair Problems
- PLOP: Probabilistic poLynomial Objects trajectory Prediction for autonomous driving
- Planning Paths Through Unknown Space by Imagining What Lies Therein
- Policy learning in SE(3) action spaces
- ROLL: Visual Self-Supervised Reinforcement Learning with Object Reasoning
- Range Conditioned Dilated Convolutions for Scale Invariant 3D Object Detection
- Reactive motion planning with probabilistic safety guarantees
- Reconfigurable Voxels: A New Representation for LiDAR-Based Point Clouds
- Recovering and Simulating Pedestrians in the Wild
- Relational Learning for Skill Preconditions
- Robust Policies via Mid-Level Visual Representations: An Experimental Study in Manipulation and Navigation
- S3CNet: A Sparse Semantic Scene Completion Network for LiDAR Point Clouds
- SMARTS: An Open-Source Scalable Multi-Agent RL Training School for Autonomous Driving
- Same Object, Different Grasps: Data and Semantic Knowledge for Task-Oriented Grasping
- Sample-efficient Cross-Entropy Method for Real-time Planning
- Self-Supervised 3D Keypoint Learning for Ego-Motion Estimation
- Sim2Real Transfer for Deep Reinforcement Learning with Stochastic State Transition Delays
- Social-VRNN: One-Shot Multi-modal Trajectory Prediction for Interacting Pedestrians
- Soft Multicopter Control Using Neural Dynamics Identification
- Stein Variational Model Predictive Control
- Task-Relevant Adversarial Imitation Learning
- The RobotSlang Benchmark: Dialog-guided Robot Localization and Navigation
- Transporter Networks: Rearranging the Visual World for Robotic Manipulation
- Universal Embeddings for Spatio-Temporal Tagging of Self-Driving Logs
- Unsupervised Monocular Depth Learning in Dynamic Scenes
- Untangling Dense Knots by Learning Task-Relevant Keypoints
- Volumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter
- f-IRL: Inverse Reinforcement Learning via State Marginal Matching