Asymptotically Optimal Stochastic Motion Planning with Temporal Goals

R. Luna, M. Lahijanian, M. Moll, and L. E. Kavraki, “Asymptotically Optimal Stochastic Motion Planning with Temporal Goals,” in Proceedings of the Workshop on the Algorithmic Foundations of Robotics, Istanbul, Turkey, 2014.

Abstract

This work presents a planning framework that allows a robot with stochastic action uncertainty to achieve a high-level task given in the form of a temporal logic formula. The objective is to quickly compute a feedback control policy that satisfies the task specification with maximum probability. A top-down framework is proposed that abstracts the motion of a continuous stochastic system to a discrete, bounded-parameter Markov decision process (BMDP), and then computes a control policy over the product of the BMDP abstraction and a DFA representing the temporal logic specification. Analysis of the framework reveals that as the resolution of the BMDP abstraction becomes finer, the policy obtained converges to the optimal one. Simulations show that high-quality policies satisfying complex temporal logic specifications can be obtained in seconds, orders of magnitude faster than existing methods.
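To make the pipeline described above concrete, the following is a minimal, illustrative sketch (not the authors' implementation): a tiny hand-built BMDP abstraction is composed with a DFA for a co-safe task of the form "visit region A, then region B", and a pessimistic (interval) value iteration over the product maximizes the worst-case probability of reaching an accepting product state. All state names, labels, transition intervals, and the specific DFA are hypothetical and chosen only to make the example runnable.

```python
# Hypothetical BMDP abstraction: state -> action -> list of (successor, p_lower, p_upper).
BMDP = {
    "s0": {"a": [("s0", 0.1, 0.3), ("s1", 0.7, 0.9)],
           "b": [("s0", 0.4, 0.6), ("s2", 0.4, 0.6)]},
    "s1": {"a": [("s1", 0.2, 0.4), ("s2", 0.6, 0.8)]},
    "s2": {"a": [("s2", 1.0, 1.0)]},
}
LABELS = {"s0": None, "s1": "A", "s2": "B"}          # region observed in each abstract cell

# DFA for "eventually A, then eventually B": q0 --A--> q1 --B--> qacc.
DFA_STATES = ["q0", "q1", "qacc"]
DFA_DELTA = {("q0", "A"): "q1", ("q1", "B"): "qacc"}
DFA_ACCEPT = {"qacc"}

def dfa_step(q, label):
    # Stay in the current DFA state on labels with no outgoing edge.
    return DFA_DELTA.get((q, label), q)

def worst_case_expectation(succ):
    """Minimize sum p_i * v_i over distributions respecting the intervals.

    succ: list of (value, p_lower, p_upper). Greedy: push as much mass as
    possible onto the lowest-valued successors (standard pessimistic step
    for interval-valued MDPs).
    """
    succ = sorted(succ, key=lambda t: t[0])
    mass = [lo for _, lo, _ in succ]
    remaining = 1.0 - sum(mass)
    for i, (_, lo, hi) in enumerate(succ):
        bump = min(hi - lo, remaining)
        mass[i] += bump
        remaining -= bump
    return sum(v * p for (v, _, _), p in zip(succ, mass))

def product_policy(iters=100):
    """Value iteration on the BMDP x DFA product, maximizing the
    worst-case probability of reaching an accepting product state."""
    states = [(s, q) for s in BMDP for q in DFA_STATES]
    V = {(s, q): (1.0 if q in DFA_ACCEPT else 0.0) for (s, q) in states}
    policy = {}
    for _ in range(iters):
        for (s, q) in states:
            if q in DFA_ACCEPT:
                continue
            best_val, best_act = 0.0, None
            for act, edges in BMDP[s].items():
                succ = [(V[(s2, dfa_step(q, LABELS[s2]))], lo, hi)
                        for (s2, lo, hi) in edges]
                val = worst_case_expectation(succ)
                if val > best_val:
                    best_val, best_act = val, act
            V[(s, q)], policy[(s, q)] = best_val, best_act
    return V, policy

if __name__ == "__main__":
    V, policy = product_policy()
    print("P_min(satisfy spec from s0):", round(V[("s0", "q0")], 3))
    print("policy:", policy)
```

In this toy instance the pessimistic value iteration prefers action "a" from "s0" under the initial DFA state, since action "b" risks reaching region B before A, which leaves the DFA stuck in a non-accepting state. The sketch only illustrates the product-and-policy step; the paper's framework additionally constructs the BMDP abstraction from the continuous stochastic dynamics and refines its resolution.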

Publisher: http://dx.doi.org/10.1007/978-3-319-16595-0_20

PDF preprint: http://kavrakilab.org/publications/luna-lahijanian2014asymptotically-optimal-stochastic.pdf