Dynamic motion planning for mobile robots using multimodal sensing and safe corridor constraints
Abstract
This paper proposes a deep reinforcement learning framework for dynamic motion planning that integrates multimodal perception with convex safety-corridor constraints. The architecture comprises three modules: perception, prediction, and planning. The perception module fuses LiDAR and RGB-D data for semantic modeling and real-time obstacle classification. For hierarchical prediction of dynamic obstacles, a hybrid K-GRU model is introduced, combining K-means clustering, Kalman filtering, and gated recurrent units. To ensure safe navigation, an ellipsoidal feasible domain derived from prediction confidence intervals is inflated by the robot's dimensions and approximated with linear constraints to construct a spatiotemporal corridor. These constraints are embedded into a Twin Delayed Deep Deterministic Policy Gradient (TD3) planner with dual critics and a dynamically weighted cost function, improving safety and trajectory smoothness. Simulated experiments show that the framework outperforms baseline methods in success rate, response time, and path efficiency in dynamic scenarios such as obstacle avoidance and autonomous parking.