Manifold-based reinforcement learning via locally linear reconstruction
Document Type
Article
Date of Original Version
April 1, 2017
Abstract
Feature representation is critical not only for pattern recognition tasks but also for reinforcement learning (RL) methods that solve learning control problems under uncertainty. In this paper, a manifold-based RL approach using the principle of locally linear reconstruction (LLR) is proposed for Markov decision processes with large or continuous state spaces. In the proposed approach, an LLR-based feature learning scheme is developed for value function approximation in RL, where a set of smooth feature vectors is generated by preserving the local approximation properties of neighboring points in the original state space. Using this feature learning scheme, an LLR-based approximate policy iteration (API) algorithm is designed for learning control problems with large or continuous state spaces. The relationship between the value approximation error at a new data point and the estimated values of its nearest neighbors is analyzed. To compare different feature representation and learning approaches for RL, a comprehensive simulation and experimental study was conducted on three benchmark learning control problems. The results show that, under a wide range of parameter settings, the LLR-based API algorithm achieves better learning control performance than previous API methods with other feature representation schemes.
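To make the LLR idea concrete, below is a minimal sketch (in Python with NumPy, not from the paper) of how a query state's value can be approximated by locally linear reconstruction from its nearest sampled neighbors: reconstruction weights are obtained from the standard LLE-style local least-squares solve under a sum-to-one constraint, and the value at the query state is the same weighted combination of the neighbors' estimated values. All names (`llr_weights`, `estimate_value`) and parameters (`k`, `reg`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def llr_weights(x, neighbors, reg=1e-3):
    """Reconstruction weights w minimizing ||x - sum_i w_i * n_i||^2
    subject to sum_i w_i = 1 (the standard LLE-style local solve)."""
    Z = neighbors - x                       # shift neighbors to the query point
    G = Z @ Z.T                             # k x k local Gram matrix
    trace = np.trace(G)
    G = G + reg * (trace if trace > 0 else 1.0) * np.eye(len(G))  # condition G
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                      # enforce the sum-to-one constraint

def estimate_value(x, states, values, k=5, reg=1e-3):
    """Approximate V(x) as the LLR-weighted combination of the estimated
    values of the k nearest sampled states."""
    dists = np.linalg.norm(states - x, axis=1)
    idx = np.argsort(dists)[:k]             # k nearest neighbors of x
    w = llr_weights(x, states[idx], reg)
    return w @ values[idx]                  # V(x) ~= sum_i w_i * V(x_i)

# Toy usage: query a new point against random 2-D sample states.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(200, 2))
values = np.sin(3 * states[:, 0]) + states[:, 1] ** 2   # stand-in value estimates
print(estimate_value(np.array([0.1, -0.2]), states, values))
```

The sum-to-one constraint is what makes the reconstruction locally linear rather than a mere weighted average, and the regularization term keeps the local Gram matrix well conditioned when the neighbors are nearly collinear.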
Publication Title
IEEE Transactions on Neural Networks and Learning Systems
Volume
28
Issue
4
Citation/Publisher Attribution
Xu, Xin, Zhenhua Huang, Lei Zuo, and Haibo He. "Manifold-based reinforcement learning via locally linear reconstruction." IEEE Transactions on Neural Networks and Learning Systems 28, 4 (2017): 934-947. doi: 10.1109/TNNLS.2015.2505084.