Combining a POMDP Abstraction with Replanning to Solve Complex, Position-Dependent Sensing Tasks

D. K. Grady, M. Moll, and L. E. Kavraki, “Combining a POMDP Abstraction with Replanning to Solve Complex, Position-Dependent Sensing Tasks,” in Proceedings of the AAAI Fall Symposium, Arlington, Virginia, 2013.

Abstract

The Partially Observable Markov Decision Process (POMDP) is a general framework for determining reward-maximizing action policies under noisy action and sensing conditions. However, computing an optimal policy for a POMDP is often intractable for robotic tasks due to the PSPACE-complete nature of the computation required. Several recent solvers have been introduced that expand the size of problems that can be considered. Although these POMDP solvers can in theory respect complex motion constraints, we show that the computational cost does not pay off in the eventual online execution, compared to our alternative approach that relies on a policy which ignores some of the motion constraints. We advocate using the POMDP framework where it is critical: finding a policy that provides the optimal action given all past noisy sensor observations, while abstracting some of the motion constraints to reduce solution time. However, the actions of an abstract robot are generally not executable under the true motion constraints. The problem is addressed offline with a less-constrained POMDP, and navigation under the full system constraints is handled online with replanning. We empirically demonstrate, for the car-like robot used in our experiments, that the policy generated with this abstracted motion model is faster to compute and achieves similar or higher reward than a policy that addresses the motion constraints directly in the POMDP.
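The paper does not prescribe a particular implementation, but the offline/online split described in the abstract can be illustrated with a minimal sketch. All class and function names below (`AbstractPolicy`, `Replanner`, `update_belief`, `run_episode`) are hypothetical stand-ins, not the authors' code: an offline POMDP policy computed on the abstracted motion model selects an abstract action from the current belief, and an online replanner turns that action into controls that respect the robot's true motion constraints.

```python
# Hypothetical sketch of the offline-policy / online-replanning loop.
# Names and structures are illustrative assumptions, not the paper's API.
import random


class AbstractPolicy:
    """Stand-in for a POMDP policy solved offline on the abstract (less-constrained) robot model."""

    def best_action(self, belief):
        # A real policy maps the current belief to the reward-maximizing
        # abstract action; here we simply pick one at random.
        return random.choice(["goto_region_A", "goto_region_B", "sense_here"])


class Replanner:
    """Stand-in for an online motion replanner under the full system constraints."""

    def plan_controls(self, state, abstract_action):
        # A real replanner would compute kinodynamically feasible controls
        # (e.g., steering and velocity for a car-like robot) that realize
        # the abstract action from the current state.
        return [("steer", 0.1), ("drive", 1.0)]


def update_belief(belief, observation):
    # Placeholder for a Bayesian belief update from a noisy sensor observation.
    return belief


def run_episode(policy, replanner, steps=10):
    belief = {"uniform": True}
    state = {"x": 0.0, "y": 0.0}
    for _ in range(steps):
        action = policy.best_action(belief)                 # offline POMDP policy
        controls = replanner.plan_controls(state, action)   # online replanning
        # Execute the controls, receive a noisy observation, update the belief.
        observation = {"reading": random.random()}
        belief = update_belief(belief, observation)


if __name__ == "__main__":
    run_episode(AbstractPolicy(), Replanner())
```

The key design point this sketch mirrors is that the expensive POMDP solve happens once, offline, over the abstracted model, while feasibility under the true motion constraints is recovered at execution time by replanning for each abstract action.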

Publisher: https://www.aaai.org/ocs/index.php/FSS/FSS13/paper/view/7578

PDF preprint: http://kavrakilab.org/publications/grady-moll2013combining-pomdp-abstraction.pdf