Real-Time Residential Demand Response
Document Type
Article
Date of Original Version
9-1-2020
Abstract
This paper presents a real-time demand response (DR) strategy for the optimal scheduling of home appliances. The uncertainties in the resident's behavior, the real-time electricity price, and the outdoor temperature are considered. An efficient DR scheduling algorithm based on deep reinforcement learning (DRL) is proposed. Unlike traditional model-based approaches, the proposed approach is model-free and does not require knowledge of the uncertainty distributions. Moreover, unlike conventional RL-based methods, it can handle both discrete and continuous actions to jointly optimize the schedules of different types of appliances. In the proposed approach, an approximate optimal policy based on a neural network is designed to learn the optimal DR scheduling strategy. The neural-network-based policy can learn directly from high-dimensional sensory data comprising the appliance states, the real-time electricity price, and the outdoor temperature. A policy search algorithm based on trust region policy optimization (TRPO) is used to train the neural network. The effectiveness of the proposed approach is validated by simulation studies using real-world electricity price and outdoor temperature data.
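To make the hybrid discrete/continuous action space concrete, the sketch below shows one way such a policy network could be structured. It is not the authors' implementation: PyTorch is assumed, and all names (HybridDRPolicy), layer sizes, and appliance counts are illustrative. A discrete head produces on/off decisions for switchable appliances while a continuous head produces power levels (e.g., an HVAC setting) from a shared trunk that reads the state vector of appliance states, price, and temperature; the TRPO training loop itself is omitted.

```python
# Hypothetical sketch of a mixed discrete/continuous DR policy (not the paper's code).
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

class HybridDRPolicy(nn.Module):
    def __init__(self, state_dim, n_switchable, n_continuous):
        super().__init__()
        # Shared trunk over the state: appliance states, real-time price, outdoor temperature.
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
        )
        # Discrete head: on/off logits per switchable appliance.
        self.discrete_head = nn.Linear(64, 2 * n_switchable)
        # Continuous head: mean power level per adjustable appliance, with learned log-std.
        self.mu_head = nn.Linear(64, n_continuous)
        self.log_std = nn.Parameter(torch.zeros(n_continuous))
        self.n_switchable = n_switchable

    def forward(self, state):
        h = self.trunk(state)
        logits = self.discrete_head(h).view(-1, self.n_switchable, 2)
        return Categorical(logits=logits), Normal(self.mu_head(h), self.log_std.exp())

# Example rollout step (all dimensions illustrative).
policy = HybridDRPolicy(state_dim=10, n_switchable=3, n_continuous=1)
state = torch.randn(1, 10)
d_dist, c_dist = policy(state)
on_off = d_dist.sample()   # shape (1, 3): binary schedules
power = c_dist.sample()    # shape (1, 1): continuous power level
# Joint log-probability of the sampled action, as a TRPO-style
# surrogate objective would use it.
log_prob = d_dist.log_prob(on_off).sum(-1) + c_dist.log_prob(power).sum(-1)
```

Summing the log-probabilities of the two heads treats the discrete and continuous decisions as conditionally independent given the state, which is one simple way a single policy-gradient update could cover both action types.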
Publication Title
IEEE Transactions on Smart Grid
Volume
11
Issue
5
Citation/Publisher Attribution
Li, Hepeng, Zhiqiang Wan, and Haibo He. "Real-Time Residential Demand Response." IEEE Transactions on Smart Grid 11, no. 5 (2020): 4144-4154. doi: 10.1109/TSG.2020.2978061.