A boundedness theoretical analysis for GrADP design: A case study on maze navigation
Document Type
Conference Proceeding
Date of Original Version
9-28-2015
Abstract
A new theoretical analysis of the goal representation adaptive dynamic programming (GrADP) design proposed in [1], [2] is presented in this paper. Unlike the convergence proofs for adaptive dynamic programming (ADP) in the literature, we provide new insight into the error bound between the estimated value function and the expected value function. We then employ the critic network in the GrADP approach to approximate the Q value function, and use the action network to provide the control policy. The goal network is adopted to provide the internal reinforcement signal for the critic network over time. Finally, we illustrate on a maze navigation example that the estimated Q value function stays within an arbitrarily small bound of the expected value function.
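The abstract describes a three-network structure: a goal network that supplies an internal reinforcement signal, a critic network that approximates the Q value function from the state, action, and internal signal, and an action network that supplies the control policy. The following is a minimal sketch of that structure, assuming simple one-hidden-layer approximators and plain temporal-difference-style output-layer updates; the network sizes, learning rates, and update rules here are illustrative assumptions, not the paper's equations.

import numpy as np

rng = np.random.default_rng(0)

def net_init(n_in, n_hidden, n_out):
    # Small random-weight network: input -> tanh hidden layer -> linear output.
    return {"W1": rng.normal(scale=0.1, size=(n_hidden, n_in)),
            "W2": rng.normal(scale=0.1, size=(n_out, n_hidden))}

def net_forward(net, x):
    h = np.tanh(net["W1"] @ x)
    return net["W2"] @ h, h

n_state, n_action = 4, 2            # e.g. maze position features and move choice (assumed sizes)
gamma, lr = 0.95, 0.01              # discount factor and learning rate (assumed values)

goal_net   = net_init(n_state + n_action, 8, 1)       # internal reinforcement signal s
critic_net = net_init(n_state + n_action + 1, 8, 1)   # Q(state, action, s)
action_net = net_init(n_state, 8, n_action)           # control policy u(state)

def grad_step(x, u, r_external, x_next):
    # One illustrative adaptation step: the goal network is adapted toward the
    # external reward, and the critic is adapted toward the goal network's
    # internal signal plus the discounted next-step Q value.
    u = np.asarray(u, dtype=float)
    s, h_goal = net_forward(goal_net, np.concatenate([x, u]))
    q, h_critic = net_forward(critic_net, np.concatenate([x, u, s]))

    # Next-step action from the action network, next-step signals from goal and critic.
    u_next, _ = net_forward(action_net, x_next)
    s_next, _ = net_forward(goal_net, np.concatenate([x_next, u_next]))
    q_next, _ = net_forward(critic_net, np.concatenate([x_next, u_next, s_next]))

    # Goal network: temporal-difference error on the external reinforcement.
    goal_td = (r_external + gamma * s_next[0]) - s[0]
    goal_net["W2"] += lr * goal_td * h_goal.reshape(1, -1)

    # Critic network: temporal-difference error on the internal reinforcement.
    critic_td = (s[0] + gamma * q_next[0]) - q[0]
    critic_net["W2"] += lr * critic_td * h_critic.reshape(1, -1)
    return critic_td

# Usage: one adaptation step on a single illustrative transition.
x, x_next = rng.normal(size=n_state), rng.normal(size=n_state)
print(grad_step(x, [1.0, 0.0], r_external=-0.1, x_next=x_next))

Only the output-layer weights are adapted in this sketch; the paper's boundedness analysis concerns the gap between the estimated and expected value functions under the full GrADP learning rules, which this toy loop does not reproduce.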
Publication Title
Proceedings of the International Joint Conference on Neural Networks
Volume
2015-September
Citation/Publisher Attribution
Ni, Zhen, Xiangnan Zhong, and Haibo He. "A boundedness theoretical analysis for GrADP design: A case study on maze navigation." Proceedings of the International Joint Conference on Neural Networks, vol. 2015-September, 2015. doi: 10.1109/IJCNN.2015.7280475.