Configuration of the Actor and Critic Network of the Deep Reinforcement Learning controller for Multi-Energy Storage System

Paula Páramo-Balsa, Francisco Gonzalez-Longatt, Martha N. Acosta, José Luis Rueda Torres, Peter Palensky, Francisco Sanchez, Juan Manuel Roldan-Fernandez, Manuel Burgos-Payán

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

The computational burden and the time required to train a deep reinforcement learning (DRL) controller can be considerable, especially in the particular case of a DRL controller used for frequency control of a multi-electrical energy storage system (MEESS). This paper presents an assessment of four training configurations of the actor and critic networks to determine the training configuration that yields the lowest computational time, considering the specific case of frequency control of a MEESS. The training configuration cases are defined considering two processing units, CPU and GPU, and are evaluated considering serial and parallel computing using the MATLAB® 2020b Parallel Computing Toolbox. The agent used for this assessment is the Deep Deterministic Policy Gradient (DDPG) agent. The environment represents the dynamic model that provides enhanced frequency response to the power system by controlling the state of charge of the energy storage systems. Simulation results demonstrated that the best configuration to reduce the computational time is training both the actor and critic networks on the CPU using parallel computing.
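The four configurations compared in the paper span two axes: the processing unit hosting the actor and critic networks (CPU or GPU) and the execution mode (serial or parallel). The sketch below is an illustrative stdlib-only Python analogue, not the authors' MATLAB code: it mimics the serial-versus-parallel axis by dispatching dummy training "episodes" either sequentially or through a worker pool, while the device axis is reduced to a plain label. All function names and the fake episode workload are hypothetical.

```python
# Illustrative analogue (assumption: not the paper's MATLAB/DDPG code).
# Shows the serial-vs-parallel dimension of the four training
# configurations: the same episodes run sequentially or via a pool.
from concurrent.futures import ThreadPoolExecutor


def run_episode(seed: int) -> float:
    """Stand-in for one DDPG training episode; returns a dummy reward."""
    reward = 0.0
    x = seed
    for _ in range(1000):
        x = (1103515245 * x + 12345) % 2**31  # LCG as a fake workload
        reward += (x % 100) / 100.0
    return reward


def train(episodes: int, parallel: bool) -> list:
    """Run all episodes either serially or with a 4-worker pool."""
    seeds = range(episodes)
    if parallel:
        with ThreadPoolExecutor(max_workers=4) as pool:
            return list(pool.map(run_episode, seeds))
    return [run_episode(s) for s in seeds]


serial_rewards = train(8, parallel=False)
parallel_rewards = train(8, parallel=True)
# Both schedules compute identical episode results; only wall time differs.
assert serial_rewards == parallel_rewards
```

In the paper's actual setting, the analogous switches are the training options of the MATLAB Reinforcement Learning and Parallel Computing Toolboxes, which select the device for each network and enable parallel workers.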
Original language: English
Title of host publication: Proceedings of the 2022 4th Global Power, Energy and Communication Conference (GPECOM)
Place of publication: Piscataway
Publisher: IEEE
Pages: 564-568
Number of pages: 5
ISBN (Electronic): 978-1-6654-6925-8
ISBN (Print): 978-1-6654-6926-5
DOIs
Publication status: Published - 2022
Event: 2022 4th Global Power, Energy and Communication Conference (GPECOM) - Nevsehir, Turkey
Duration: 14 Jun 2022 - 17 Jun 2022
Conference number: 4th

Conference

Conference: 2022 4th Global Power, Energy and Communication Conference (GPECOM)
Country/Territory: Turkey
City: Nevsehir
Period: 14/06/22 - 17/06/22

Bibliographical note

Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care

Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

Keywords

  • actor-network
  • critic network
  • deep reinforcement learning
  • energy storage systems
  • enhanced frequency response
  • parallel computing

