
Towards Autonomous Eye Surgery by Combining Deep Imitation Learning with Optimal Control

Paper PDF

Authors

Ji Woong Kim (Johns Hopkins University)*; Peiyao Zhang (Johns Hopkins University); Peter Gehlbach (Johns Hopkins Hospital); Iulian Iordachita (Johns Hopkins University); Marin Kobilarov (Johns Hopkins University)

Interactive Session

2020-11-18, 11:10 - 11:40 PST | PheedLoop Session

Abstract

During retinal microsurgery, precise manipulation of the delicate retinal tissue is required for a positive surgical outcome. However, accurate manipulation and navigation of surgical tools remain difficult due to the constrained workspace and the top-down view during surgery, which limits the surgeon’s ability to estimate depth. To alleviate this difficulty, we propose to automate the tool-navigation task by learning to predict the relative goal position on the retinal surface from the current tool-tip position. Given an estimated target on the retina, we generate an optimal trajectory leading to the predicted goal while imposing safety-related physical constraints aimed at minimizing tissue damage. As an extended task, we generate goal predictions to various points across the retina to localize the eye geometry and further generate safe trajectories within the estimated confines. Through experiments both in simulation and with several eye phantoms, we demonstrate that our framework permits navigation to various points on the retina within 0.089 mm and 0.118 mm in xy error, which is less than a human surgeon’s mean tool-tip tremor of 0.180 mm. All safety constraints were fulfilled, and the algorithm was robust to previously unseen eyes as well as unseen objects in the scene. A live video demonstration is available here: https://youtu.be/n5j5jCCelXk
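
The abstract describes a two-stage pipeline: an imitation-learned network predicts a goal position relative to the current tool tip from the microscope view, and an optimal-control step generates a trajectory to that goal under safety constraints. The sketch below illustrates this structure only; the network architecture, the `GoalPredictor` and `plan_trajectory` names, the velocity-cap constraint, and the units are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the goal-prediction + constrained-trajectory pipeline.
import torch
import torch.nn as nn
import numpy as np


class GoalPredictor(nn.Module):
    """Predicts a 3-D goal displacement (dx, dy, dz) from a top-down image."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 3)  # relative goal in the tool frame

    def forward(self, image):
        return self.head(self.encoder(image))


def plan_trajectory(tool_tip, rel_goal, v_max=0.5, dt=0.01):
    """Straight-line trajectory to tool_tip + rel_goal with a speed cap,
    standing in for the paper's constrained optimal-control step."""
    goal = tool_tip + rel_goal
    dist = np.linalg.norm(goal - tool_tip)
    n_steps = max(int(np.ceil(dist / (v_max * dt))), 1)
    return np.linspace(tool_tip, goal, n_steps + 1)


# Example usage with dummy data (millimetres assumed).
image = torch.zeros(1, 3, 128, 128)
rel_goal = GoalPredictor()(image).detach().numpy().squeeze()
waypoints = plan_trajectory(np.zeros(3), rel_goal)
```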

Video

Reviews and Rebuttal



Conference on Robot Learning 2020