Reinforcement learning for robust adaptive control of partially unknown nonlinear systems subject to unmatched uncertainties
This paper proposes a novel robust adaptive control strategy for partially unknown continuous-time nonlinear systems subject to unmatched uncertainties. First, the robust nonlinear control problem is converted into a nonlinear optimal control problem by constructing an appropriate value function for an auxiliary system. Then, within the framework of reinforcement learning, an identifier-critic architecture is developed. The architecture uses two neural networks: an identifier neural network (INN), which estimates the unknown internal dynamics, and a critic neural network (CNN), which approximates the solution of the Hamilton-Jacobi-Bellman (HJB) equation arising in the resulting optimal control problem. The INN is updated using the back-propagation algorithm together with the e-modification technique, while the CNN is updated via a modified gradient descent method that uses historical and current state data simultaneously. Using classic Lyapunov analysis, all signals in the closed-loop auxiliary system are proved to be uniformly ultimately bounded, and the original system remains asymptotically stable under the obtained approximate optimal control. Finally, two illustrative examples, including an F-16 aircraft plant, demonstrate the effectiveness of the developed method.
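The critic update described above can be sketched in code. The snippet below is a minimal, hypothetical illustration of a normalized gradient-descent critic step that sums the Bellman-residual gradient over stored (historical) samples as well as the current one, in the spirit of the paper's modified update; the features `phi`, the cost samples, and all numerical choices are illustrative assumptions, not the paper's actual system or tuning.

```python
import numpy as np

def phi(x):
    # Illustrative polynomial critic features (assumption): V(x) ~ W^T phi(x)
    return np.array([x[0]**2, x[0]*x[1], x[1]**2])

def dphi(x):
    # Jacobian of phi with respect to x
    return np.array([[2.0*x[0], 0.0],
                     [x[1],     x[0]],
                     [0.0,      2.0*x[1]]])

def bellman_residual(W, x, xdot, r):
    # Residual of the HJB/Bellman relation along a trajectory sample:
    # e = W^T dphi(x) xdot + r, where r is the instantaneous cost
    return W @ (dphi(x) @ xdot) + r

def critic_step(W, current, history, lr=0.5):
    # One gradient step on the summed squared residual over the
    # current sample plus all stored samples ("historical and
    # current data simultaneously"), with a standard normalization
    # term (1 + sigma^T sigma)^2 on each regressor sigma.
    grad = np.zeros_like(W)
    for (x, xdot, r) in history + [current]:
        sigma = dphi(x) @ xdot                      # regressor
        e = W @ sigma + r                           # Bellman residual
        grad += e * sigma / (1.0 + sigma @ sigma)**2
    return W - lr * grad
```

As a sanity check, one can generate samples consistent with a known weight vector and verify that repeated `critic_step` calls drive the summed residual toward zero; the stored history provides the excitation that a single current sample alone would lack.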
Yang, Xiong, Haibo He, Qinglai Wei, and Biao Luo. "Reinforcement learning for robust adaptive control of partially unknown nonlinear systems subject to unmatched uncertainties." Information Sciences 463-464 (2018): 307-322. doi:10.1016/j.ins.2018.06.022.