Optimized Leader‐Follower Formation Fault‐Tolerant Control Using Reinforcement Learning for a Class of Nonlinear Multi‐Agent Systems Having Actuator Failure
Abstract
This work addresses the optimized formation fault‐tolerant control problem for single‐integrator multi‐agent systems (MASs) subject to actuator faults, using reinforcement learning (RL). Because actuator faults directly degrade system performance and stability, fault tolerance must be treated as a design principle in nonlinear system control. This is especially true in the optimal control of MASs, where real‐time information exchange and demanding control performance requirements make actuator faults more likely. To address the problem, distributed RL is combined with adaptive estimation: the RL algorithm generates the optimized formation control protocol, while adaptive laws estimate the time‐varying efficiency factor and bias signal in the faulty actuator model. Finally, theoretical analysis and simulation demonstrate that the proposed optimized controller has fault‐tolerant capability and ensures system stability.
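To make the fault model concrete, the abstract's faulty actuator can be written as u_f(t) = ρ(t)·u(t) + r(t), where ρ is a time‐varying efficiency (loss‐of‐effectiveness) factor and r is a bias signal. The sketch below is a minimal illustration, not the paper's exact algorithm: it uses a simple gradient (LMS‐style) adaptive law to estimate constant fault parameters of a single‐integrator agent with dynamics ẋ = u_f; all gains, signals, and variable names are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch (assumptions, not the paper's algorithm): a single-
# integrator agent's measured response is x_dot = rho_true * u + bias_true,
# i.e., the actuator applies u_f = rho(t)*u + r(t). A gradient adaptive law
# recovers the unknown efficiency factor and bias from the prediction error.

dt, t_end = 0.01, 50.0
rho_true, bias_true = 0.6, 0.3      # unknown fault parameters to recover
rho_hat, bias_hat = 1.0, 0.0        # start from a healthy-actuator guess
gamma = 2.0                         # adaptation gain (assumed)

for k in range(int(t_end / dt)):
    t = k * dt
    u = np.sin(t) + 0.5                  # persistently exciting input
    x_dot = rho_true * u + bias_true     # measured faulty response
    pred = rho_hat * u + bias_hat        # estimator's prediction
    err = x_dot - pred                   # prediction error
    rho_hat += gamma * err * u * dt      # gradient update: efficiency factor
    bias_hat += gamma * err * dt         # gradient update: bias signal

print(f"rho_hat={rho_hat:.3f}, bias_hat={bias_hat:.3f}")
```

Because the input is persistently exciting and the data are noiseless, the estimates converge to the true fault parameters; in the paper's setting, such estimates would feed the fault‐tolerant compensation inside the RL‐based formation protocol.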