Optimal Operation of Networked Microgrids With Distributed Multi-Agent Reinforcement Learning
Document Type
Conference Proceeding
Date of Original Version
1-1-2024
Abstract
This paper presents a distributed multi-agent deep reinforcement learning (MADRL) approach for optimizing power flow management in networked microgrids (MGs) within distribution systems. In contrast to centralized training methods, our proposed approach leverages the multi-agent trust region policy optimization (MATRPO) algorithm to learn distributed policies that minimize operational costs while respecting power flow constraints in distribution networks. We model the cooperation among networked MGs as a partially observable Markov game and learn the distributed policies via peer-to-peer communication, addressing the challenges associated with centralized training and offering scalability and efficiency benefits. The proposed approach is evaluated on a 24.9 kV distribution network with interconnected 4.16 kV MGs based on modified IEEE-34 and IEEE-13 bus systems. By comparing our approach with state-of-the-art MADRL methods, we demonstrate its effectiveness in enabling cooperative optimization of networked MGs, showcasing its applicability for managing distributed energy systems.
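The partially observable Markov game structure described above can be illustrated with a minimal sketch: each microgrid agent observes only its local state and exchanges summaries with peers rather than a central coordinator. This is not the authors' MATRPO implementation; the agent class, the message format, and the proportional dispatch rule are illustrative placeholders for a learned policy.

```python
class MGAgent:
    """Hypothetical microgrid agent with a local (partial) observation."""
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # peer-to-peer links, no central node
        self.inbox = []

    def observe(self, local_state):
        # Each agent sees only its own bus measurements (partial observability).
        self.local_state = local_state

    def send(self):
        # Share a local summary with neighbors instead of a central coordinator.
        for peer in self.neighbors:
            peer.inbox.append((self.name, self.local_state))

    def act(self):
        # Placeholder policy: dispatch relative to the average state reported
        # by peers (a stand-in for a learned distributed policy).
        peer_avg = (sum(s for _, s in self.inbox) / len(self.inbox)
                    if self.inbox else 0.0)
        self.inbox.clear()
        return self.local_state - peer_avg

# Two interconnected MGs (e.g., on the modified IEEE-34 and IEEE-13 feeders).
a, b = MGAgent("MG-34"), MGAgent("MG-13")
a.neighbors, b.neighbors = [b], [a]
a.observe(1.2); b.observe(0.8)
a.send(); b.send()
print(a.act(), b.act())   # each decision uses only local + peer information
```

In the actual MATRPO setting, the placeholder rule in `act` would be a neural-network policy updated with trust-region-constrained gradient steps, but the communication pattern, local observation and peer messages only, is the point of the sketch.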
Publication Title, e.g., Journal
IEEE Power and Energy Society General Meeting
Citation/Publisher Attribution
Li, Hepeng, and Haibo He. "Optimal Operation of Networked Microgrids With Distributed Multi-Agent Reinforcement Learning." IEEE Power and Energy Society General Meeting (2024). doi: 10.1109/PESGM51994.2024.10688871.