Soft Actor-Critic Deep Reinforcement Learning for Fault Tolerant Flight Control

Killian Dally, E. van Kampen

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-reviewed



Fault-tolerant flight control is challenging: developing a model-based controller for every unexpected failure is unrealistic, and online learning methods can handle only limited system complexity due to their low sample efficiency. In this research, a model-free coupled-dynamics flight controller for a jet aircraft that can withstand multiple failure types is proposed. An offline-trained cascaded Soft Actor-Critic Deep Reinforcement Learning controller succeeds on highly coupled maneuvers, including a coordinated 40 deg bank climbing turn, with a normalized Mean Absolute Error of 2.64%. The controller is robust to six failure cases, including the rudder jammed at -15 deg, aileron effectiveness reduced by 70%, a structural failure, icing, and a backward c.g. shift: the response remains stable and the climbing turn is completed successfully. Robustness to biased sensor noise, atmospheric disturbances, and varying initial flight conditions and reference-signal shapes is also demonstrated.
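The tracking metric quoted above, normalized Mean Absolute Error (nMAE), can be sketched as follows. Note the normalization choice here (dividing the mean absolute tracking error by the peak-to-peak range of the reference signal) is an assumption for illustration; the paper may normalize differently.

```python
import numpy as np

def nmae(reference, actual):
    """Normalized Mean Absolute Error of a tracked signal.

    Normalizing by the reference's peak-to-peak range is an
    illustrative assumption, not necessarily the paper's convention.
    """
    reference = np.asarray(reference, dtype=float)
    actual = np.asarray(actual, dtype=float)
    mae = np.mean(np.abs(reference - actual))  # mean absolute tracking error
    span = np.ptp(reference)                   # peak-to-peak range of the reference
    return mae / span

# Hypothetical example: tracking a ramp up to a 40 deg bank-angle reference
ref = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
act = np.array([0.0, 9.0, 19.5, 30.5, 39.0])
print(f"nMAE = {nmae(ref, act):.2%}")  # prints "nMAE = 1.50%"
```

A lower nMAE means the controller's response stays closer to the commanded attitude trajectory, expressed as a fraction of the maneuver's amplitude.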
Original language: English
Title of host publication: AIAA SCITECH 2022 Forum
Number of pages: 20
ISBN (Electronic): 978-1-62410-631-6
Publication status: Published - 2022
Event: AIAA SCITECH 2022 Forum - virtual event
Duration: 3 Jan 2022 - 7 Jan 2022




