Abstract
The computational burden and the time required to train a deep reinforcement learning (DRL) agent can be appreciable, especially in the particular case of DRL-based frequency control of a multi-electrical energy storage system (MEESS). This paper presents an assessment of four training configurations of the actor and critic networks to determine which configuration produces the lowest computational time, considering the specific case of frequency control of a MEESS. The training configuration cases are defined over two processing units, CPU and GPU, and are evaluated under serial and parallel computing using the MATLAB® 2020b Parallel Computing Toolbox. The agent used for this assessment is the Deep Deterministic Policy Gradient (DDPG) agent. The environment represents the dynamic model that provides enhanced frequency response to the power system by controlling the state of charge of the energy storage systems. Simulation results demonstrate that the configuration that most reduces computational time is training both the actor and critic networks on the CPU using parallel computing.
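The four training configurations described above can be sketched with the MATLAB Reinforcement Learning Toolbox: the actor and critic device is selected per representation, and serial versus parallel training is toggled in the training options. This is a minimal sketch, not the paper's code; the network variables (`actorNet`, `criticNet`), the environment `env`, the info objects, and the layer names `'state'`/`'action'` are assumptions for illustration.

```matlab
% Assumed setup: actorNet, criticNet, obsInfo, actInfo, and env already exist.
% Each of the four cases pairs a device choice ('cpu' or 'gpu') for the
% actor and critic with serial or parallel training.
actorOpts  = rlRepresentationOptions('UseDevice','cpu');   % or 'gpu'
criticOpts = rlRepresentationOptions('UseDevice','cpu');   % or 'gpu'

actor  = rlDeterministicActorRepresentation(actorNet, obsInfo, actInfo, ...
             'Observation', {'state'}, 'Action', {'action'}, actorOpts);
critic = rlQValueRepresentation(criticNet, obsInfo, actInfo, ...
             'Observation', {'state'}, 'Action', {'action'}, criticOpts);

agent = rlDDPGAgent(actor, critic);

% UseParallel = true requires the Parallel Computing Toolbox; set it to
% false for the serial-computing cases.
trainOpts = rlTrainingOptions('UseParallel', true);
trainingStats = train(agent, env, trainOpts);
```

Varying `UseDevice` for the two representations and `UseParallel` in `rlTrainingOptions` reproduces the kind of CPU/GPU, serial/parallel comparison the abstract describes.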
Original language | English |
---|---|
Title of host publication | Proceedings of the 2022 4th Global Power, Energy and Communication Conference (GPECOM) |
Place of Publication | Piscataway |
Publisher | IEEE |
Pages | 564-568 |
Number of pages | 5 |
ISBN (Electronic) | 978-1-6654-6925-8 |
ISBN (Print) | 978-1-6654-6926-5 |
DOIs | |
Publication status | Published - 2022 |
Event | 2022 4th Global Power, Energy and Communication Conference (GPECOM), Nevsehir, Turkey. Duration: 14 Jun 2022 → 17 Jun 2022. Conference number: 4th |
Conference
Conference | 2022 4th Global Power, Energy and Communication Conference (GPECOM) |
---|---|
Country/Territory | Turkey |
City | Nevsehir |
Period | 14/06/22 → 17/06/22 |
Bibliographical note
Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care. Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
Keywords
- actor network
- critic network
- deep reinforcement learning
- energy storage systems
- enhanced frequency response
- parallel computing