CoRL 2020 Papers
All Papers
Monday
3D-OES: Viewpoint-Invariant Object-Factorized Environment Simulators
Augmenting GAIL with BC for sample efficient imitation learning
Chaining Behaviors from Data with Model-Free Reinforcement Learning
CoT-AMFlow: Adaptive Modulation Network with Co-Teaching Strategy for Unsupervised Optical Flow Estimation
ContactNets: Learning Discontinuous Contact Dynamics with Smooth, Implicit Representations
Deep Latent Competition: Learning to Race Using Visual Control Policies in Latent Space
Deep Phase Correlation for End-to-End Heterogeneous Sensor Measurements Matching
Deep Reactive Planning in Dynamic Environments
Differentiable Logic Layer for Rule Guided Trajectory Prediction
Diverse Plausible Shape Completions from Ambiguous Depth Images
Few-shot Object Grounding and Mapping for Natural Language Robot Instruction Following
Generation of Realistic Images for Learning in Simulation using FeatureGAN
Generative adversarial training of product of policies for robust and adaptive movement primitives
Harnessing Distribution Ratio Estimators for Learning Agents with Quality and Diversity
High Acceleration Reinforcement Learning for Real-World Juggling with Binary Rewards
IV-SLAM: Introspective Vision for Simultaneous Localization and Mapping
Keypoints into the Future: Self-Supervised Correspondence in Model-Based Reinforcement Learning
Learning Object Manipulation Skills via Approximate State Estimation from Real Videos
Learning Predictive Models for Ergonomic Control of Prosthetic Devices
Learning Trajectories for Visual-Inertial System Calibration via Model-based Heuristic Deep Reinforcement Learning
Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion
Learning a Decision Module by Imitating Driver’s Control Behaviors
Learning a natural-language to LTL executable semantic parser for grounded robotics
Learning an Expert Skill-Space for Replanning Dynamic Quadruped Locomotion over Obstacles
Learning to Compose Hierarchical Object-Centric Controllers for Robotic Manipulation
Map-Adaptive Goal-Based Trajectory Prediction
Modeling Long-horizon Tasks as Sequential Interaction Landscapes
Multiagent Rollout and Policy Iteration for POMDP with Application to Multi-Robot Repair Problems
PLOP: Probabilistic poLynomial Objects trajectory Prediction for autonomous driving
Planning Paths Through Unknown Space by Imagining What Lies Therein
Policy learning in SE(3) action spaces
ROLL: Visual Self-Supervised Reinforcement Learning with Object Reasoning
Range Conditioned Dilated Convolutions for Scale Invariant 3D Object Detection
Reactive motion planning with probabilistic safety guarantees
Reconfigurable Voxels: A New Representation for LiDAR-Based Point Clouds
Recovering and Simulating Pedestrians in the Wild
Relational Learning for Skill Preconditions
Robust Policies via Mid-Level Visual Representations: An Experimental Study in Manipulation and Navigation
S3CNet: A Sparse Semantic Scene Completion Network for LiDAR Point Clouds
SMARTS: An Open-Source Scalable Multi-Agent RL Training School for Autonomous Driving
Same Object, Different Grasps: Data and Semantic Knowledge for Task-Oriented Grasping
Sample-efficient Cross-Entropy Method for Real-time Planning
Self-Supervised 3D Keypoint Learning for Ego-Motion Estimation
Sim2Real Transfer for Deep Reinforcement Learning with Stochastic State Transition Delays
Social-VRNN: One-Shot Multi-modal Trajectory Prediction for Interacting Pedestrians
Soft Multicopter Control Using Neural Dynamics Identification
Stein Variational Model Predictive Control
Task-Relevant Adversarial Imitation Learning
The RobotSlang Benchmark: Dialog-guided Robot Localization and Navigation
Transporter Networks: Rearranging the Visual World for Robotic Manipulation
Universal Embeddings for Spatio-Temporal Tagging of Self-Driving Logs
Unsupervised Monocular Depth Learning in Dynamic Scenes
Untangling Dense Knots by Learning Task-Relevant Keypoints
Volumetric Grasping Network: Real-time 6 DOF Grasp Detection in Clutter
f-IRL: Inverse Reinforcement Learning via State Marginal Matching
Tuesday
A Long Horizon Planning Framework for Manipulating Rigid Pointcloud Objects
ACNMP: Skill Transfer and Task Extrapolation through Learning from Demonstration and Reinforcement Learning via Representation Sharing
Action-based Representation Learning for Autonomous Driving
Assisted Perception: Optimizing Observations to Communicate State
Attention-Privileged Reinforcement Learning
Attentional Separation-and-Aggregation Network for Self-supervised Depth-Pose Learning in Dynamic Scenes
BayesRace: Learning to race autonomously using prior experience
CAMPs: Learning Context-Specific Abstractions for Efficient Planning in Factored MDPs
DIRL: Domain-Invariant Representation Learning for Sim-to-Real Transfer
DROGON: A Trajectory Prediction Model based on Intention-Conditioned Behavior Reasoning
Deep Reinforcement Learning with Population-Coded Spiking Neural Network for Continuous Control
DeepMPCVS: Deep Model Predictive Control for Visual Servoing
EXI-Net: EXplicitly/Implicitly Conditioned Network for Multiple Environment Sim-to-Real Transfer
Flightmare: A Flexible Quadrotor Simulator
From pixels to legs: Hierarchical learning of quadruped locomotion
GDN: A Coarse-To-Fine (C2F) Representation for End-To-End 6-DoF Grasp Detection
Guaranteeing Safety of Learned Perception Modules via Measurement-Robust Control Barrier Functions
Incremental learning of EMG-based control commands using Gaussian Processes
Interactive Imitation Learning in State-Space
Inverting the Pose Forecasting Pipeline with SPF2: Sequential Pointcloud Forecasting for Sequential Pose Forecasting
Iterative Semi-parametric Dynamics Model Learning For Autonomous Racing
Learning 3D Dynamic Scene Representations for Robot Manipulation
Learning Certified Control Using Contraction Metric
Learning Dexterous Manipulation from Suboptimal Experts
Learning Equality Constraints for Motion Planning on Manifolds
Learning Hierarchical Task Networks with Preferences from Unannotated Demonstrations
Learning Hybrid Control Barrier Functions from Data
Learning Latent Representations to Influence Multi-Agent Interaction
Learning Obstacle Representations for Neural Motion Planning
Learning Stability Certificates from Data
Learning Vision-based Reactive Policies for Obstacle Avoidance
Learning from Suboptimal Demonstration via Self-Supervised Reward Regression
Learning hierarchical relationships for object-goal navigation
Learning to Improve Multi-Robot Hallway Navigation
Learning to Walk in the Real World with Minimal Human Effort
LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion
MATS: An Interpretable Trajectory Forecasting Representation for Planning and Control
Motion Planner Augmented Reinforcement Learning for Robot Manipulation in Obstructed Environments
Multi-Level Structure vs. End-to-End-Learning in High-Performance Tactile Robotic Manipulation
MultiPoint: Cross-spectral registration of thermal and optical aerial imagery
One Thousand and One Hours: Self-driving Motion Prediction Dataset
PixL2R: Guiding Reinforcement Learning Using Natural Language by Mapping Pixels to Rewards
Positive-Unlabeled Reward Learning
Robust Quadrupedal Locomotion on Sloped Terrains: A Linear Policy Approach
SAM: Squeeze-and-Mimic Networks for Conditional Visual Driving Policy Learning
STReSSD: Sim-To-Real from Sound for Stochastic Dynamics
Safe Optimal Control Using Stochastic Barrier Functions and Deep Forward-Backward SDEs
Safe Policy Learning for Continuous Control
Sampling-based Reachability Analysis: A Random Set Theory Approach with Adversarial Sampling
SoftGym: Benchmarking Deep Reinforcement Learning for Deformable Object Manipulation
The EMPATHIC Framework for Task Learning from Implicit Human Feedback
The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning
Time-Bounded Mission Planning in Time-Varying Domains with Semi-MDPs and Gaussian Processes
Uncertainty-Aware Constraint Learning for Adaptive Safe Motion Planning from Demonstrations
Visual Localization and Mapping with Hybrid SFA
Wednesday
A User’s Guide to Calibrating Robotic Simulators
Accelerating Reinforcement Learning with Learned Skill Priors
Action-Conditional Recurrent Kalman Networks For Forward and Inverse Dynamics Learning
Amodal 3D Reconstruction for Robotic Manipulation via Stability and Connectivity
Asynchronous Deep Model Reference Adaptive Control
Auxiliary Tasks Speed Up Learning PointGoal Navigation
Belief-Grounded Networks for Accelerated Robot Learning under Partial Observability
CLOUD: Contrastive Learning of Unsupervised Dynamics
Contrastive Variational Reinforcement Learning for Complex Observations
Explicitly Encouraging Low Fractional Dimensional Trajectories Via Reinforcement Learning
Exploratory Grasping: Asymptotically Optimal Algorithms for Grasping Challenging Polyhedral Objects
Fast robust peg-in-hole insertion with continuous visual servoing
Fit2Form: 3D Generative Model for Robot Gripper Form Design
Generalization Guarantees for Imitation Learning
Hardware as Policy: Mechanical and Computational Co-Optimization using Deep Reinforcement Learning
Hierarchical Robot Navigation in Novel Environments using Rough 2-D Maps
Integrating Egocentric Localization for More Realistic Point-Goal Navigation Agents
Learning Arbitrary-Goal Fabric Folding with One Hour of Real Robot Experience
Learning Interactively to Resolve Ambiguity in Reference Frame Selection
Learning Object-conditioned Exploration using Distributed Soft Actor Critic
Learning Predictive Representations for Deformable Objects Using Contrastive Estimation
Learning RGB-D Feature Embeddings for Unseen Object Instance Segmentation
Learning a Decentralized Multi-Arm Motion Planner
Learning from Demonstrations using Signal Temporal Logic
Learning rich touch representations through cross-modal self-supervision
Learning to Communicate and Correct Pose Errors
MELD: Meta-Reinforcement Learning from Images via Latent State Models
Model-Based Inverse Reinforcement Learning from Visual Demonstrations
Model-based Reinforcement Learning for Decentralized Multiagent Rendezvous
MuGNet: Multi-Resolution Graph Neural Network for Segmenting Large-Scale Pointclouds
Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments
Multimodal Trajectory Prediction via Topological Invariance for Navigation at Uncontrolled Intersections
Neuro-Symbolic Program Search for Autonomous Driving Decision Module Design
Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning
PLAS: Latent Action Space for Offline Reinforcement Learning
Probably Approximately Correct Vision-Based Planning using Motion Primitives
Reinforcement Learning with Videos: Combining Offline Observations with Interaction
Robot Action Selection Learning via Layered Dimension Informed Program Synthesis
S3K: Self-Supervised Semantic Keypoints for Robotic Manipulation via Multi-View Consistency
Self-Supervised Learning of Scene-Graph Representations for Robotic Sequential Manipulation Planning
Self-Supervised Object-in-Gripper Segmentation from Robotic Motions
SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks
Sim-to-Real Transfer for Vision-and-Language Navigation
StrObe: Streaming Object Detection from LiDAR Packets
TNT: Target-driveN Trajectory Prediction
Tactile Object Pose Estimation from the First Touch with Geometric Contact Rendering
TartanVO: A Generalizable Learning-based VO
Tolerance-Guided Policy Learning for Adaptable and Transferrable Delicate Industrial Insertion
Towards Autonomous Eye Surgery by Combining Deep Imitation Learning with Optimal Control
Towards General and Autonomous Learning of Core Skills: A Case Study in Locomotion
Towards Robotic Assembly by Predicting Robust, Precise and Task-oriented Grasps
Transformers for One-Shot Visual Imitation
TriFinger: An Open-Source Robot for Learning Dexterity
Unsupervised Metric Relocalization Using Transform Consistency Loss
Visual Imitation Made Easy
Award Nominees 🏆
📄 CoRL Paper Explorer
Welcome to the CoRL 2020 Paper Explorer.
Search for a paper above using any terms from the title, authors, or abstract.
Use the menu on the left to see all papers presented on a particular day.