A highly efficient online reinforcement learning algorithm for continuous-state systems

Document Type

Conference Proceeding

Date of Original Version

March 2, 2015

Abstract

In this paper, we consider continuous-state systems and pursue a near-optimal policy through online learning. A new online reinforcement learning algorithm, MSEC (Multi-Samples in Each Cell), is proposed. The algorithm combines a state-aggregation technique with an efficient exploration principle, making high utilization of the samples observed online. More concretely, we apply a grid over the continuous state space to partition it into cells. A near-upper Q iteration operator is then defined that uses the samples in each cell to produce a near-upper Q function, whose corresponding greedy policy is efficient for exploration. MSEC is entirely model-free: no model of the system dynamics is required during implementation, and knowledge of the system is collected during online learning. Based on the PAC (Probably Approximately Correct) principle, MSEC finds a near-optimal policy online within a finite time bound. To test its performance, an inverted pendulum is simulated, and the results show that the new algorithm is well suited to solving online optimal control problems.
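The abstract does not specify the near-upper Q iteration operator itself, so the Python sketch below only illustrates the two ingredients it names: a uniform grid that aggregates continuous states into cells, and an optimistically initialized tabular Q-function over those cells whose greedy policy drives exploration, in the spirit of PAC-style model-free methods. All identifiers here (GridAggregator, OptimisticQ, the bin counts, discount, and learning rate) are illustrative assumptions, not taken from the paper.

import numpy as np

class GridAggregator:
    """Partition a box-shaped continuous state space into uniform grid cells."""
    def __init__(self, low, high, bins_per_dim):
        self.low = np.asarray(low, dtype=float)
        self.high = np.asarray(high, dtype=float)
        self.bins = np.asarray(bins_per_dim, dtype=int)
        self.n_cells = int(np.prod(self.bins))

    def cell(self, state):
        # Normalize each coordinate to [0, 1), then bucket it into a grid cell.
        frac = (np.asarray(state, dtype=float) - self.low) / (self.high - self.low)
        idx = np.clip((frac * self.bins).astype(int), 0, self.bins - 1)
        return int(np.ravel_multi_index(idx, self.bins))

class OptimisticQ:
    """Tabular Q over grid cells, initialized to the upper bound r_max / (1 - gamma)
    so that rarely sampled cells stay attractive and get explored."""
    def __init__(self, n_cells, n_actions, gamma=0.95, r_max=1.0, lr=0.1):
        self.gamma, self.lr = gamma, lr
        self.q = np.full((n_cells, n_actions), r_max / (1.0 - gamma))

    def act(self, c):
        # Greedy action w.r.t. the optimistic (near-upper) Q values.
        return int(np.argmax(self.q[c]))

    def update(self, c, a, r, c_next):
        # Model-free one-sample backup toward r + gamma * max_a' Q(c', a').
        target = r + self.gamma * np.max(self.q[c_next])
        self.q[c, a] += self.lr * (target - self.q[c, a])

# Example setup: 2-D pendulum state (angle, angular velocity), 3 torque actions.
agg = GridAggregator(low=[-np.pi, -8.0], high=[np.pi, 8.0], bins_per_dim=[20, 20])
agent = OptimisticQ(agg.n_cells, n_actions=3)

MSEC proper additionally uses multiple samples per cell in its near-upper iteration and carries a PAC-style finite-time guarantee; neither property is reproduced by this single-sample sketch.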

Publication Title

Proceedings of the World Congress on Intelligent Control and Automation (WCICA)

Volume

2015-March

Issue

March
