Abstract
Deep reinforcement learning-based energy management strategies (EMS) have become a promising solution for hybrid electric vehicles (HEVs). However, when the driving cycle changes, the neural network must be retrained, which is a time-consuming and laborious task. A more efficient approach is to combine deep reinforcement learning (DRL) with transfer learning, which transfers knowledge from one domain to a new domain, allowing the network in the new domain to converge quickly. In this work, different DRL exploration methods, including adding action space noise and parameter space noise, are compared against each other in the transfer learning process. Results indicate that the network with parameter space noise is more stable and converges faster than the others. In conclusion, the best exploration method for transferable EMS is to add noise in the parameter space, while the combination of action space noise and parameter space noise generally performs poorly. Our code is available at https://github.com/BIT-XJY/RL-based-Transferable-EMS.git.
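The two exploration methods compared in the abstract can be illustrated with a minimal sketch (not the paper's implementation): a hypothetical toy linear actor stands in for the EMS policy network. Action space noise perturbs the network's output, while parameter space noise perturbs the network's weights and then acts deterministically, which keeps exploration consistent across states within a rollout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the actor network of a DRL-based EMS:
# a single linear layer mapping a 4-dim state to a 2-dim bounded action.
W = rng.normal(size=(2, 4))

def policy(weights, state):
    # tanh keeps actions bounded, e.g. a normalized torque-split command.
    return np.tanh(weights @ state)

state = rng.normal(size=4)

# Action space noise: run the unchanged network, then add Gaussian
# noise to the resulting action.
action_space_noisy = policy(W, state) + rng.normal(scale=0.1, size=2)

# Parameter space noise: perturb the weights once, then act
# deterministically with the perturbed network.
W_perturbed = W + rng.normal(scale=0.05, size=W.shape)
param_space_action = policy(W_perturbed, state)

print(action_space_noisy, param_space_action)
```

Note that the parameter-space action stays inside the tanh bounds, whereas adding noise after the network can push actions outside the valid range; the noise scales here are illustrative, not taken from the paper.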
Original language | English |
---|---|
Title of host publication | Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV) |
Publisher | IEEE |
Pages | 470-477 |
Number of pages | 8 |
ISBN (Electronic) | 9781665488211 |
ISBN (Print) | 978-1-6654-8821-1 |
DOIs | |
Publication status | Published - 2022 |
Event | 2022 IEEE Intelligent Vehicles Symposium (IV) - Aachen, Germany. Duration: 5 Jun 2022 → 9 Jun 2022 |
Conference
Conference | 2022 IEEE Intelligent Vehicles Symposium (IV) |
---|---|
Country/Territory | Germany |
City | Aachen |
Period | 5/06/22 → 9/06/22 |
Bibliographical note
Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project, https://www.openaccess.nl/en/you-share-we-take-care. Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.