Optimal control for unknown discrete-time nonlinear Markov jump systems using adaptive dynamic programming
Document Type
Article
Date of Original Version
12-1-2014
Abstract
In this paper, we develop and analyze an optimal control method for a class of discrete-time nonlinear Markov jump systems (MJSs) with unknown system dynamics. Specifically, an identifier is established for the unknown systems to approximate the system states, and an optimal control approach for nonlinear MJSs is developed to solve the Hamilton-Jacobi-Bellman equation based on the adaptive dynamic programming technique. We also provide a detailed stability analysis of the control approach, including the convergence of the performance index function for nonlinear MJSs and the existence of the corresponding admissible control. Neural network techniques are used to approximate the performance index function and the control law. To demonstrate the effectiveness of our approach, three simulation studies (a linear case, a nonlinear case, and a single-link robot arm case) are used to validate the performance of the proposed optimal control method.
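For the linear simulation case mentioned in the abstract, the adaptive-dynamic-programming value iteration has a well-known closed form: with a quadratic performance index for each Markov mode, the iteration reduces to a mode-coupled Riccati recursion. The sketch below is a minimal illustration of that recursion, not the paper's algorithm or example; the system matrices `A`, `B`, the transition matrix `P`, and the weights `Q`, `R` are all hypothetical assumptions chosen for demonstration.

```python
import numpy as np

# Hedged sketch: ADP-style value iteration for a discrete-time Markov jump
# *linear* system x_{k+1} = A[m] x_k + B[m] u_k, where the mode m evolves by a
# Markov chain with transition matrix P, and the per-step cost is x'Qx + u'Ru.
# Assuming a quadratic performance index V_i(x, m) = x' K_i[m] x, the value
# iteration becomes a coupled Riccati recursion over the modes.

def adp_iteration(A, B, P, Q, R, iters=300):
    """Iterate the mode-coupled Riccati recursion, starting from V_0 = 0."""
    n_modes = len(A)
    n = A[0].shape[0]
    K = [np.zeros((n, n)) for _ in range(n_modes)]
    for _ in range(iters):
        # Expected cost-to-go matrix under the Markov chain, per current mode
        E = [sum(P[m][j] * K[j] for j in range(n_modes)) for m in range(n_modes)]
        K_new = []
        for m in range(n_modes):
            # Optimal feedback gain for mode m at this iteration
            gain = np.linalg.solve(R + B[m].T @ E[m] @ B[m],
                                   B[m].T @ E[m] @ A[m])
            K_new.append(Q + A[m].T @ E[m] @ A[m]
                           - A[m].T @ E[m] @ B[m] @ gain)
        K = K_new
    return K

# Illustrative two-mode example (all values hypothetical)
A = [np.array([[0.9, 0.2], [0.0, 0.8]]), np.array([[0.8, -0.1], [0.1, 0.9]])]
B = [np.array([[0.0], [1.0]]), np.array([[1.0], [0.0]])]
P = [[0.7, 0.3], [0.4, 0.6]]
Q = np.eye(2)
R = np.array([[1.0]])

K = adp_iteration(A, B, P, Q, R)
```

Starting from a zero value function, the iterates increase monotonically and converge to the mode-dependent solution of the coupled Riccati equations whenever the jump system is mean-square stabilizable, which mirrors the convergence property of the performance index function analyzed in the paper.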
Publication Title
IEEE Transactions on Neural Networks and Learning Systems
Volume
25
Issue
12
Citation/Publisher Attribution
Zhong, Xiangnan, Haibo He, Huaguang Zhang, and Zhanshan Wang. "Optimal control for unknown discrete-time nonlinear Markov jump systems using adaptive dynamic programming." IEEE Transactions on Neural Networks and Learning Systems 25, no. 12 (2014): 2141-2155. doi: 10.1109/TNNLS.2014.2305841.