Event-Triggered Optimal Neuro-Controller Design with Reinforcement Learning for Unknown Nonlinear Systems
This paper develops an optimal control scheme for continuous-time unknown nonlinear systems using an event-triggering mechanism. Unlike controllers designed with a time-triggering mechanism, the event-triggered controller is updated only when the system state deviates from a prescribed value by more than a certain threshold. To obtain the event-triggered optimal controller, we develop an identifier-critic architecture within the framework of reinforcement learning. The identifier network, a feedforward neural network (FNN), learns the unknown system dynamics, and the critic network, also an FNN, derives the event-triggered optimal controller. The identifier network is tuned by combining a standard back-propagation algorithm with an e-modification method, and the critic network is updated using a modified gradient descent method. By introducing an additional stability term into the critic update, an initial admissible control is no longer required. Meanwhile, by using historical and instantaneous state data together, the persistence-of-excitation condition is relaxed. A stability analysis of the closed-loop system is provided based on the Lyapunov method. The effectiveness of the proposed design is illustrated through simulations of a nonlinear example and a single-link robot arm system.
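To make the triggering rule concrete, the following is a minimal sketch of an event-triggered control loop: the control input is recomputed only when the gap between the current state and the state at the last triggering instant exceeds a threshold. The scalar system `f`, the feedback law `u_law`, and all numerical values are hypothetical illustrations, not the paper's identifier-critic design, in which the controller would come from the trained critic network.

```python
import numpy as np

def event_triggered_sim(f, u_law, x0, dt=0.01, T=5.0, threshold=0.05):
    """Simulate x' = f(x, u) under an event-triggered controller.

    The control is held constant between events and recomputed only
    when ||x - x_s|| > threshold, where x_s is the state sampled at
    the last triggering instant.
    """
    x = np.asarray(x0, dtype=float)
    x_s = x.copy()            # state at the last triggering instant
    u = u_law(x_s)            # control held constant between events
    events = 0
    for _ in range(int(T / dt)):
        if np.linalg.norm(x - x_s) > threshold:  # triggering condition
            x_s = x.copy()
            u = u_law(x_s)
            events += 1
        x = x + dt * f(x, u)                     # forward-Euler step
    return x, events

# Hypothetical scalar nonlinear system and a simple stabilizing law
f = lambda x, u: -x + 0.5 * np.sin(x) + u
u_law = lambda x: -2.0 * x

x_final, n_events = event_triggered_sim(f, u_law, x0=[1.0])
```

Because updates occur only at triggering instants, the number of control computations (`n_events`) is far smaller than the number of simulation steps, which is the communication/computation saving that motivates event-triggered over time-triggered design.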
IEEE Transactions on Systems, Man, and Cybernetics: Systems
Yang, Xiong, Haibo He, and Derong Liu. "Event-Triggered Optimal Neuro-Controller Design with Reinforcement Learning for Unknown Nonlinear Systems." IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, no. 9 (2019): 1866–1878. doi:10.1109/TSMC.2017.2774602.