Continuous-Time Distributed Policy Iteration for Multicontroller Nonlinear Systems
Document Type
Article
Date of Original Version
5-1-2021
Abstract
In this article, a novel distributed policy iteration algorithm is established for infinite-horizon optimal control problems of continuous-time nonlinear systems. In each iteration of the developed algorithm, only one controller's control law is updated while the other controllers' control laws remain unchanged. The main contribution of the present algorithm is that it improves the iterative control laws one by one, instead of updating all the control laws in each iteration as traditional policy iteration algorithms do, which effectively reduces the computational burden of each iteration. The properties of the distributed policy iteration algorithm for continuous-time nonlinear systems are analyzed. The admissibility of the iterative control laws is established, and monotonicity, convergence, and optimality are discussed, showing that the iterative value function converges nonincreasingly to the solution of the Hamilton-Jacobi-Bellman equation. Finally, numerical simulations are conducted to illustrate the effectiveness of the proposed method.
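The one-controller-per-iteration structure described in the abstract can be illustrated on a toy problem. The sketch below is not the paper's implementation; it applies the same round-robin update idea to a hypothetical scalar continuous-time linear-quadratic system with two controllers, where policy evaluation and improvement have closed forms, so that the nonincreasing convergence of the iterative value function can be observed directly. All system and cost parameters are invented for illustration.

```python
import math

# Hypothetical scalar two-controller LQ problem (illustrative only):
#   dx/dt = a*x + b1*u1 + b2*u2,  cost = integral of q*x^2 + r1*u1^2 + r2*u2^2
a, b1, b2, q, r1, r2 = 1.0, 1.0, 1.0, 1.0, 1.0, 1.0

def evaluate(k1, k2):
    """Policy evaluation: for linear laws u_i = -k_i*x, V(x) = p*x^2 solves
    the Lyapunov equation 2*p*(a - b1*k1 - b2*k2) + q + r1*k1**2 + r2*k2**2 = 0."""
    ac = a - b1 * k1 - b2 * k2          # closed-loop dynamics coefficient
    assert ac < 0, "control laws must be admissible (stabilizing)"
    return -(q + r1 * k1**2 + r2 * k2**2) / (2.0 * ac)

# Admissible (stabilizing) initial control laws, chosen arbitrarily.
k = [2.0, 2.0]
values = [evaluate(*k)]

for it in range(40):
    i = it % 2                          # update only ONE controller per iteration
    p = values[-1]
    b, r = (b1, r1) if i == 0 else (b2, r2)
    k[i] = b * p / r                    # policy improvement for controller i only
    values.append(evaluate(*k))         # re-evaluate the joint value function

# Optimal value from the scalar Riccati equation q + 2*a*p - p^2*(b1^2/r1 + b2^2/r2) = 0
p_star = (1.0 + math.sqrt(3.0)) / 2.0
print(values[0], values[-1], p_star)
```

In this toy run the sequence of value-function coefficients decreases monotonically from the initial evaluation toward the Riccati solution, mirroring the nonincreasing convergence property stated in the abstract.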
Publication Title
IEEE Transactions on Cybernetics
Volume
51
Issue
5
Citation/Publisher Attribution
Wei, Qinglai, Hongyang Li, Xiong Yang, and Haibo He. "Continuous-Time Distributed Policy Iteration for Multicontroller Nonlinear Systems." IEEE Transactions on Cybernetics 51, 5 (2021): 2372-2383. doi: 10.1109/TCYB.2020.2979614.