Multiagent Rollout and Policy Iteration for POMDP with Application to Multi-Robot Repair Problems

Paper PDF

Authors

Sushmita Bhattacharya (Harvard University)*; Siva Kailas (Arizona State University); Sahil Badyal (Arizona State University); Stephanie Gil (Harvard University); Dimitri Bertsekas (Massachusetts Institute of Technology (MIT))

Interactive Session

2020-11-16, 11:50 - 12:20 PST | PheedLoop Session

Abstract

In this paper we consider infinite horizon discounted dynamic programming problems with finite state and control spaces, partial state observations, and a multiagent structure. We discuss and compare algorithms that simultaneously or sequentially optimize the agents' controls by using multistep lookahead, truncated rollout with a known base policy, and a terminal cost function approximation. Our methods specifically address the computational challenges of partially observable multiagent problems. In particular: 1) We consider rollout algorithms that dramatically reduce the required computation while preserving the key cost improvement property of the standard rollout method. The per-step computational requirements of our methods are of order O(Cm), as compared with O(C^m) for standard rollout, where C is the maximum cardinality of the constraint set for the control component of each agent, and m is the number of agents. 2) We show that our methods can be applied to challenging problems with a graph structure, including a class of robot repair problems in which multiple robots collaboratively inspect and repair a system under partial information. 3) We provide a simulation study that compares our methods with existing methods, and demonstrate that our methods can handle larger and more complex partially observable multiagent problems (state space of size 10^37 and control space of size 10^7). In particular, we verify experimentally that our multiagent rollout methods perform nearly as well as standard rollout for problems with few agents, and produce satisfactory policies for problems with a larger number of agents that are intractable for standard rollout and other state-of-the-art methods. Finally, we incorporate our multiagent rollout algorithms as building blocks in an approximate policy iteration scheme, where successive rollout policies are approximated using neural network classifiers. While this scheme requires a strictly offline implementation, it works well in our computational experiments and produces significant additional performance improvement over the single online rollout iteration method.
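To make the O(Cm) versus O(C^m) comparison concrete, here is a minimal Python sketch of the one-agent-at-a-time control optimization described in the abstract. It is an illustration under stated assumptions, not code from the paper: `q_value` stands in for a Q-factor estimate that would be obtained by truncated rollout with the base policy and a terminal cost approximation, and the toy cost in the demo is purely hypothetical.

```python
import itertools


def standard_rollout_step(belief, agent_controls, q_value):
    """All-agents-at-once minimization: O(C^m) Q-factor evaluations."""
    return min(itertools.product(*agent_controls),
               key=lambda u: q_value(belief, u))


def multiagent_rollout_step(belief, agent_controls, base_controls, q_value):
    """One-agent-at-a-time minimization: O(C*m) Q-factor evaluations.

    Agent i minimizes over its own control set while agents 1..i-1 keep
    their newly chosen controls and agents i+1..m keep the base policy's
    controls (the mechanism behind the cost improvement property).
    """
    u = list(base_controls)  # start from the base policy's joint control
    for i, controls_i in enumerate(agent_controls):
        u[i] = min(
            controls_i,
            key=lambda c: q_value(belief, tuple(u[:i]) + (c,) + tuple(u[i + 1:])),
        )
    return tuple(u)


if __name__ == "__main__":
    # Toy Q-factor (hypothetical stand-in for a truncated-rollout estimate):
    # favor small controls, penalize agents that pick the same control.
    def q(belief, u):
        return sum(u) + 10 * (len(u) - len(set(u)))

    controls = [[0, 1, 2]] * 3  # m = 3 agents, C = 3 controls each
    base = (0, 0, 0)            # base policy's joint control

    print(standard_rollout_step(None, controls, q))          # 3^3 = 27 evaluations
    print(multiagent_rollout_step(None, controls, base, q))  # 3*3 = 9 evaluations
```

Both calls recover the joint control (1, 2, 0) with cost 3 in this toy instance, but the multiagent version uses C*m rather than C^m Q-factor evaluations per step.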

Video

Reviews and Rebuttal


Conference on Robot Learning 2020