Deep Reinforcement Learning-Enabled Distributed Uniform Control for a DC Solid State Transformer in DC Microgrid

Yu Zeng*, Josep Pou, Changjiang Sun, Xinze Li, Gaowen Liang, Yang Xia, Suvajit Mukherjee, Amit Kumar Gupta

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

13 Citations (Scopus)

Abstract

This article proposes a distributed uniform control approach for a dc solid state transformer (DCSST) that feeds constant-power loads. The proposed approach uses a multiagent deep reinforcement learning (MADRL) technique to coordinate multiple control objectives. During the offline training stage, each DRL agent supervises a submodule (SM) of the DCSST and outputs real-time actions based on the received states. Optimal phase-shift ratio combinations are learned under triple phase-shift modulation, and soft actor-critic (SAC) agents optimize the neural network parameters to enhance controller performance. The well-trained agents act as fast surrogate models that provide online control decisions for the DCSST, adapting to varying environmental conditions using only local SM information. The proposed distributed configuration improves redundancy and modularity, facilitating hot-swap experiments. Experimental results demonstrate the excellent performance of the proposed multiagent SAC algorithm.
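The online control loop the abstract describes — each agent observing only its local SM states and emitting a triple-phase-shift action through a trained actor network — can be sketched minimally as below. This is an illustrative sketch only: the state vector, network size, and weights are hypothetical placeholders, not the authors' trained model, and a single tiny linear layer stands in for the full SAC actor.

```python
import math

def policy(state, weights, biases):
    """Map a local SM state vector to three phase-shift ratios in (-1, 1).

    state   : list of floats, e.g. [v_error, i_load, v_out] (assumed names)
    weights : 3 x len(state) matrix standing in for a trained actor
    biases  : 3 output biases
    Returns [D1, D2, D3], squashed by tanh as in a SAC actor's output layer.
    """
    return [
        math.tanh(sum(w * s for w, s in zip(row, state)) + b)
        for row, b in zip(weights, biases)
    ]

# Example inference with placeholder parameters (not from the paper):
W = [[0.4, -0.2, 0.1],
     [0.1, 0.3, -0.5],
     [-0.3, 0.2, 0.2]]
b = [0.0, 0.1, -0.1]
action = policy([0.5, 1.2, 0.9], W, b)  # three phase-shift ratios
```

Because each agent consumes only its own SM's measurements, the same inference code runs unchanged on every submodule, which is what gives the scheme its redundancy and hot-swap modularity.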

Original language: English
Pages (from-to): 5818-5829
Number of pages: 12
Journal: IEEE Transactions on Industrial Electronics
Volume: 71
Issue number: 6
DOIs
Publication status: Published - Jun 1 2024
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 1982-2012 IEEE.

ASJC Scopus Subject Areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering

Keywords

  • DC solid state transformer (DCSST)
  • distributed uniform control
  • multiagent deep reinforcement learning (MADRL)
  • redundancy
