Adaptive Critic Nonlinear Robust Control: A Survey
Document Type
Article
Date of Original Version
10-1-2017
Abstract
Adaptive dynamic programming (ADP) and reinforcement learning are closely related approaches to intelligent optimization. Both are regarded as promising methods built on the key components of evaluation and improvement, against the backdrop of modern information technology such as artificial intelligence, big data, and deep learning. Although great progress has been achieved and surveyed in addressing nonlinear optimal control problems, research on the robustness of ADP-based control strategies in uncertain environments has not been fully summarized. Hence, this survey reviews the recent main results on adaptive-critic-based robust control design for continuous-time nonlinear systems. ADP-based nonlinear optimal regulation is reviewed first, followed by robust stabilization of nonlinear systems with matched uncertainties, guaranteed cost control design for unmatched plants, and decentralized stabilization of interconnected systems. Further comprehensive discussions are then presented, covering event-based robust control design, improvements to the critic learning rule, nonlinear H∞ control design, and several notes on future perspectives. By applying the ADP-based optimal and robust control methods to a practical power system and an overhead crane plant, two typical examples are provided to verify the effectiveness of the theoretical results. Overall, this survey should promote the development of adaptive critic control methods with robustness guarantees and the construction of higher-level intelligent systems.
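The core scheme the abstract summarizes, a critic tuned to satisfy the Hamilton-Jacobi-Bellman (HJB) equation, can be sketched on a toy problem. Everything below (the scalar plant dx/dt = -x + u, the quadratic cost, the critic structure, and the learning rate) is a hypothetical illustration chosen for checkability, not material taken from the survey itself:

```python
import random

# Hypothetical illustration (not from the survey): adaptive-critic tuning for
# the scalar linear-quadratic problem dx/dt = -x + u with cost ∫(x² + u²) dt.
# The analytic optimal value function is V(x) = (√2 − 1)·x², so a critic of the
# form V̂(x) = w·x² should learn w → √2 − 1 ≈ 0.4142.

def hjb_residual(w, x):
    """HJB residual for critic V̂(x) = w·x², with policy u = −(1/2)·V̂_x = −w·x."""
    u = -w * x                               # greedy policy from the current critic
    v_x = 2.0 * w * x                        # critic gradient dV̂/dx
    return x * x + u * u + v_x * (-x + u)    # Q·x² + R·u² + V̂_x·(f(x) + g·u)

random.seed(0)
w, lr = 0.0, 0.05
for _ in range(2000):
    x = random.uniform(-1.0, 1.0)            # persistently exciting sample state
    e = hjb_residual(w, x)
    de_dw = x * x * (-2.0 - 2.0 * w)         # ∂e/∂w for this sample
    w -= lr * e * de_dw                      # gradient step on ½·e²

print(round(w, 4))  # → 0.4142, i.e. √2 − 1
```

The "evaluation and improvement" loop the abstract mentions appears here in miniature: the critic weight is evaluated through the HJB residual, and the policy improves automatically because it is derived from the current critic.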
Publication Title
IEEE Transactions on Cybernetics
Volume
47
Issue
10
Citation/Publisher Attribution
Wang, Ding, Haibo He, and Derong Liu. "Adaptive Critic Nonlinear Robust Control: A Survey." IEEE Transactions on Cybernetics, vol. 47, no. 10 (2017): 3429–3451. doi: 10.1109/TCYB.2017.2712188.