TY - JOUR
T1 - Active Flow Control for Drag Reduction Through Multi-agent Reinforcement Learning on a Turbulent Cylinder at ReD=3900
AU - Suárez, Pol
AU - Alcántara-Ávila, Francisco
AU - Miró, Arnau
AU - Rabault, Jean
AU - Font, Bernat
AU - Lehmkuhl, Oriol
AU - Vinuesa, Ricardo
PY - 2025
Y1 - 2025
N2 - This study presents novel drag-reduction active-flow-control (AFC) strategies for a three-dimensional cylinder immersed in a flow at a Reynolds number, based on freestream velocity and cylinder diameter, of ReD=3900. The cylinder in this subcritical flow regime has been extensively studied in the literature and is considered a classic case of turbulent flow arising from a bluff body. The strategies presented are explored through the use of deep reinforcement learning. The cylinder is equipped with 10 independent zero-net-mass-flux jet pairs, distributed on the top and bottom surfaces, which define the AFC setup. The method is based on the coupling between a computational-fluid-dynamics solver and a multi-agent reinforcement-learning (MARL) framework using the proximal-policy-optimization algorithm. This work introduces a multi-stage training approach to expand the exploration space and enhance the stabilization of the drag reduction. By accelerating training through the exploitation of local invariants with MARL, a drag reduction of approximately 9% is achieved. The cooperative closed-loop strategy developed by the agents is sophisticated, as it utilizes a wide bandwidth of mass-flow-rate frequencies, which classical control methods are unable to match. Notably, the mass cost efficiency is demonstrated to be two orders of magnitude lower than that of classical control methods reported in the literature. These developments represent a significant advancement in active flow control in turbulent regimes, which is critical for industrial applications.
KW - Active flow control
KW - Deep learning
KW - Drag reduction
KW - Fluid mechanics
KW - Multi-agent reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=86000319598&partnerID=8YFLogxK
U2 - 10.1007/s10494-025-00642-x
DO - 10.1007/s10494-025-00642-x
M3 - Article
AN - SCOPUS:86000319598
SN - 1386-6184
JO - Flow, Turbulence and Combustion
JF - Flow, Turbulence and Combustion
ER -