Abstract
This article proposes a distributed uniform control approach for a dc solid state transformer (DCSST) feeding constant power loads. The approach coordinates multiple control objectives using a multiagent deep reinforcement learning (MADRL) technique. During the offline training stage, each DRL agent supervises one submodule (SM) of the DCSST and outputs real-time actions based on the received states. Optimal phase-shift ratio combinations are learned under triple phase-shift modulation, and soft actor-critic (SAC) agents optimize the neural network parameters to enhance controller performance. The well-trained agents then serve as fast surrogate models that provide online control decisions for the DCSST, adapting to varying environmental conditions using only local SM information. The distributed configuration improves redundancy and modularity, facilitating hot-swap experiments. Experimental results demonstrate the strong performance of the proposed multiagent SAC algorithm.
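To make the control structure described in the abstract concrete, the following is a minimal, hypothetical sketch of one per-submodule agent at inference time: a squashed-Gaussian policy (the action form used by SAC) maps a local SM state vector to three phase-shift ratios, matching triple phase-shift modulation. All network sizes, the state definition, the ratio range, and the training loop are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class SACPhaseShiftAgent:
    """Sketch of one per-SM agent: a tanh-squashed Gaussian policy mapping a
    local state to three phase-shift ratios (triple phase-shift modulation).
    Dimensions and initialization are illustrative assumptions."""

    def __init__(self, state_dim=4, action_dim=3, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        # One hidden layer; heads output the mean and log-std of the policy.
        self.W1 = rng.normal(0.0, 0.1, (hidden, state_dim))
        self.b1 = np.zeros(hidden)
        self.W_mu = rng.normal(0.0, 0.1, (action_dim, hidden))
        self.W_ls = rng.normal(0.0, 0.1, (action_dim, hidden))

    def act(self, state, deterministic=True, rng=None):
        h = np.tanh(self.W1 @ state + self.b1)
        mu = self.W_mu @ h
        if deterministic:
            a = np.tanh(mu)                      # online inference: mean action
        else:
            log_std = np.clip(self.W_ls @ h, -5.0, 2.0)
            noise = rng.normal(size=mu.shape)    # training: sample, then squash
            a = np.tanh(mu + np.exp(log_std) * noise)
        # Map the tanh output in (-1, 1) to phase-shift ratios in (0, 1).
        return 0.5 * (a + 1.0)

# Each SM runs its own agent on local measurements only (distributed control).
agents = [SACPhaseShiftAgent(seed=i) for i in range(3)]
local_states = np.random.default_rng(1).normal(size=(3, 4))
ratios = np.array([ag.act(s) for ag, s in zip(agents, local_states)])
print(ratios.shape)  # (3, 3): three SMs, three phase-shift ratios each
```

After offline training, only the cheap forward pass above runs online, which is what lets the trained agents act as fast surrogate models per SM.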
Original language | English
---|---
Pages (from-to) | 5818-5829
Number of pages | 12
Journal | IEEE Transactions on Industrial Electronics
Volume | 71
Issue number | 6
DOIs |
Publication status | Published - Jun 1 2024
Externally published | Yes
Bibliographical note
Publisher Copyright: © 1982-2012 IEEE.
ASJC Scopus Subject Areas
- Control and Systems Engineering
- Electrical and Electronic Engineering
Keywords
- DC solid state transformer (DCSST)
- distributed uniform control
- multiagent deep reinforcement learning (MADRL)
- redundancy
Press/Media
- Findings on Technology Reported by Investigators at Nanyang Technological University (Deep Reinforcement Learning-enabled Distributed Uniform Control for a Dc Solid State Transformer In Dc Microgrid)
  12/7/23 · 1 item of Media coverage · Press/Media: Research