Visual Imitation Made Easy
Authors
Sarah Young (UC Berkeley)*; Dhiraj Gandhi (Carnegie Mellon University); Shubham Tulsiani (Facebook AI Research); Abhinav Gupta (CMU/FAIR); Pieter Abbeel (UC Berkeley); Lerrel Pinto (NYU/Berkeley)
Interactive Session
2020-11-18, 12:30 - 13:00 PST | PheedLoop Session
Abstract
Visual imitation learning provides a framework for learning complex manipulation behaviors by leveraging human demonstrations. However, current interfaces for imitation such as kinesthetic teaching or teleoperation prohibitively restrict our ability to efficiently collect large-scale data in the wild. Obtaining such diverse demonstration data is paramount for the generalization of learned skills to novel scenarios. In this work, we present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots. We use commercially available reacher-grabber assistive tools both as data collection devices and as the robot’s end-effector. To extract action information from these visual demonstrations, we use off-the-shelf Structure from Motion (SfM) techniques in addition to training a finger detection network. We experimentally evaluate our framework on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task. For both tasks, we use standard behavior cloning to learn executable policies from the previously collected offline demonstrations. To improve learning performance, we employ a variety of data augmentations and provide an extensive analysis of their effects. Finally, we demonstrate the utility of our interface by evaluating on real robotic scenarios with previously unseen objects, achieving an 87% success rate on pushing and a 62% success rate on stacking. Robot videos are available at our project website: https://sites.google.com/view/visual-imitation-made-easy.
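To make the behavior-cloning step concrete, the sketch below trains a small CNN policy on offline (image, action) pairs with image augmentation, loosely mirroring the pipeline the abstract describes. It is a minimal illustration, not the authors' code: the dataset wrapper, network, 4-dimensional action, and the specific augmentations (random crop, color jitter) are assumptions for demonstration purposes.

```python
# Minimal behavior-cloning sketch (hypothetical; not the paper's implementation).
# Assumes an offline set of (RGB image, action) pairs extracted from the
# reacher-grabber demonstrations; the action dimension is a placeholder.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
import torchvision.transforms as T


class DemoDataset(Dataset):
    """Wraps pre-extracted (image, action) pairs; images are HxWx3 uint8 arrays."""
    def __init__(self, images, actions, augment=True):
        self.images, self.actions = images, actions
        # Example augmentations; the paper analyzes augmentation effects, but
        # this exact recipe is an assumption.
        if augment:
            self.tf = T.Compose([
                T.ToPILImage(),
                T.RandomResizedCrop(224, scale=(0.8, 1.0)),
                T.ColorJitter(0.2, 0.2, 0.2),
                T.ToTensor(),
            ])
        else:
            self.tf = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        return self.tf(self.images[i]), torch.as_tensor(self.actions[i], dtype=torch.float32)


class Policy(nn.Module):
    """Small CNN regressing a continuous end-effector action from a single image."""
    def __init__(self, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, action_dim),
        )

    def forward(self, x):
        return self.net(x)


def train_bc(dataset, epochs=10, lr=1e-3):
    """Standard behavior cloning: regress demonstrated actions with an MSE loss."""
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    policy = Policy()
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        for imgs, actions in loader:
            loss = nn.functional.mse_loss(policy(imgs), actions)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy
```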