Scaling Multimodal Planning: Using Experience and Informing Discrete Search

Z. Kingston and L. E. Kavraki, “Scaling Multimodal Planning: Using Experience and Informing Discrete Search,” IEEE Transactions on Robotics, pp. 1–19, Aug. 2022.

Abstract

Robotic manipulation is inherently continuous but typically has an underlying discrete structure, such as whether an object is grasped. Many such problems are multi-modal: in a pick-and-place task, for example, every object grasp and placement is a mode. Multi-modal planning requires finding a sequence of transitions between modes, such as a particular sequence of object picks and placements. However, many multi-modal planners fail to scale when motion planning is difficult (e.g., in clutter) or when the task has a long horizon (e.g., rearrangement). This work presents solutions to multi-modal scalability in both areas. For motion planning, we present ALEF, an experience-based planning framework that reuses experience from similar modes, both online and from training data. For task satisfaction, we present a layered planning approach that uses a discrete lead, informed by weights over mode transitions, to bias search toward useful transitions. Together, these contributions enable multi-modal planners to tackle complex manipulation tasks that were previously infeasible or inefficient, and they provide significant improvements in scenes with high-dimensional robots.

Publisher version: http://dx.doi.org/10.1109/TRO.2022.3197080

PDF preprint: http://kavrakilab.org/publications/kingston2022-scaling-mmp.pdf