Deep reinforcement learning for active flow control in a turbulent separation bubble

Bernat Font, Francisco Alcántara-Ávila, Jean Rabault, Ricardo Vinuesa, Oriol Lehmkuhl

Research output: Contribution to journal › Article › Scientific › Peer-reviewed


Abstract

The control efficacy of deep reinforcement learning (DRL) compared with classical periodic forcing is numerically assessed for a turbulent separation bubble (TSB). We show that a control strategy learned on a coarse grid works on a fine grid as long as the coarse grid captures the main flow features. This enables a significant reduction of the computational cost of DRL training in a turbulent-flow environment. On the fine grid, the periodic control is able to reduce the TSB area by 6.8%, while the DRL-based control achieves a 9.0% reduction. Furthermore, the DRL agent provides a smoother control strategy while conserving momentum instantaneously. The physical analysis of the DRL control strategy reveals the production of large-scale counter-rotating vortices by adjacent actuator pairs. It is shown that the DRL agent acts on a wide range of frequencies to sustain these vortices in time. Lastly, we introduce our open-source computational fluid dynamics and DRL framework, suited for the next generation of exascale computing machines.
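
The abstract notes that the DRL controller conserves momentum instantaneously while driving adjacent actuator pairs. A common way to impose this constraint in DRL-based flow control is to shift the agent's raw actions so they sum to zero across all jets at every step. The sketch below illustrates that idea only; the function name and setup are hypothetical and not the paper's actual implementation, and it assumes equal-area actuators.

    # Hypothetical sketch (not the paper's code): enforce instantaneous
    # zero-net-mass-flux by removing the mean from the raw DRL actions.
    import numpy as np

    def momentum_conserving_actions(raw_actions: np.ndarray) -> np.ndarray:
        # Shift raw agent outputs so they sum to zero across actuators.
        # What one jet blows, the others suck, so no net momentum is
        # injected at any instant (assumes equal actuator outlet areas).
        return raw_actions - raw_actions.mean()

    # Example: four adjacent actuators; opposite-sign action pairs of this
    # kind can drive the counter-rotating vortex pairs the abstract describes.
    raw = np.array([0.8, -0.2, 0.5, 0.1])
    actions = momentum_conserving_actions(raw)
    print(actions, actions.sum())  # sum is ~0.0

Because the zero-sum shift is applied at every control step rather than on average, the constraint holds instantaneously, consistent with the abstract's claim.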
Original language: English
Article number: 1422
Number of pages: 13
Journal: Nature Communications
Volume: 16
Issue number: 1
DOIs
Publication status: Published - 2025
