Maneuvering target tracking of UAV based on MN-DDPG and transfer learning
Top 1% of 2020 papers
Abstract
Tracking a maneuvering target autonomously, accurately, and in real time in an uncertain environment is one of the challenging missions for unmanned aerial vehicles (UAVs). In this paper, aiming to address the control problem of maneuvering target tracking and obstacle avoidance, an online path-planning approach for UAVs is developed based on deep reinforcement learning. Through end-to-end learning powered by neural networks, the proposed approach can achieve perception of the environment and continuous motion control output. The proposed approach includes: (1) a deep deterministic policy gradient (DDPG)-based control framework that provides learning and autonomous decision-making capability for UAVs; (2) an improved method named MN-DDPG that introduces a type of mixed noise to assist the UAV in exploring stochastic strategies for online optimal planning; and (3) a task-decomposition and pre-training algorithm for efficient transfer learning, improving the generalization capability of the UAV control model built on MN-DDPG. The simulation results verify that the proposed approach achieves good self-adaptive adjustment of the UAV's flight attitude in maneuvering target tracking tasks, with a significant improvement in the generalization capability and training efficiency of the UAV tracking controller in uncertain environments.
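The abstract's central idea in (2) is adding mixed exploration noise to a DDPG actor's deterministic action output. The paper's exact noise formulation is not given here, so the sketch below is a hedged illustration only: it combines temporally correlated Ornstein-Uhlenbeck noise (commonly used with DDPG) with uncorrelated Gaussian noise, and all parameter names and values (`theta`, `sigma_ou`, `sigma_gauss`) are illustrative assumptions, not the authors' settings.

```python
import numpy as np

class MixedNoise:
    """Illustrative MN-DDPG-style mixed exploration noise (assumption):
    an Ornstein-Uhlenbeck process (temporally correlated) plus an
    independent Gaussian term, added to the actor's action output."""

    def __init__(self, dim, theta=0.15, sigma_ou=0.2, sigma_gauss=0.1, seed=0):
        self.dim = dim
        self.theta = theta              # OU mean-reversion rate (assumed value)
        self.sigma_ou = sigma_ou        # OU noise scale (assumed value)
        self.sigma_gauss = sigma_gauss  # Gaussian noise scale (assumed value)
        self.rng = np.random.default_rng(seed)
        self.state = np.zeros(dim)      # internal OU process state

    def sample(self):
        # Discrete OU update (dt = 1): x <- x - theta*x + sigma_ou * N(0, 1)
        self.state += (-self.theta * self.state
                       + self.sigma_ou * self.rng.standard_normal(self.dim))
        # Mixed noise = correlated OU component + uncorrelated Gaussian component
        return self.state + self.sigma_gauss * self.rng.standard_normal(self.dim)

def noisy_action(actor_action, noise, low=-1.0, high=1.0):
    """Add mixed noise to the deterministic policy output, then clip
    to the (assumed) action bounds [low, high]."""
    return np.clip(actor_action + noise.sample(), low, high)

# Usage: perturb a 2-D control command (e.g. pitch/yaw rates) for exploration.
noise = MixedNoise(dim=2)
action = noisy_action(np.array([0.5, -0.3]), noise)
```

During training, each environment step would draw a fresh perturbed action this way, while evaluation uses the actor's raw output without noise.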