Risk-sensitive Distributional Reinforcement Learning for Flight Control

Peter Seres*, Cheng Liu*, Erik Jan van Kampen*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Scientific › peer-review

2 Citations (Scopus)
32 Downloads (Pure)

Abstract

Recent aerospace systems increasingly demand model-free controller synthesis, and autonomous operations require adaptability to uncertainties in partially observable environments. This paper applies distributional reinforcement learning to synthesize risk-sensitive, robust, model-free policies for aerospace control. We investigate the use of distributional soft actor-critic (DSAC) agents for flight control and compare their learning characteristics and tracking performance with the soft actor-critic (SAC) algorithm. The results show that (1) the addition of distributional critics significantly improves learning consistency, and (2) risk-averse agents increase flight safety by avoiding uncertainties in the environment.
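To make the abstract's two ideas concrete, the sketch below illustrates (not from the paper itself) how a distributional critic and a risk-averse criterion typically fit together: a quantile-based critic outputs an estimate of the return distribution rather than its mean, and a risk-averse agent scores actions by a lower-tail measure such as CVaR. All names, layer sizes, and the quantile count here are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of a quantile-based distributional critic with CVaR-based,
# risk-averse action scoring. Hypothetical example; not the paper's code.
import torch
import torch.nn as nn

class QuantileCritic(nn.Module):
    """Maps (state, action) to N quantile estimates of the return distribution."""
    def __init__(self, state_dim: int, action_dim: int, n_quantiles: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_quantiles),  # one output per quantile fraction
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))

def cvar(quantiles: torch.Tensor, alpha: float = 0.25) -> torch.Tensor:
    """Conditional value-at-risk: mean of the worst alpha-fraction of quantiles.

    A risk-averse agent maximizes this instead of the mean return, which
    penalizes actions whose return distribution has a heavy lower tail.
    """
    sorted_q, _ = torch.sort(quantiles, dim=-1)
    k = max(1, int(alpha * quantiles.shape[-1]))
    return sorted_q[..., :k].mean(dim=-1)

# Illustrative usage: compare two candidate actions by CVaR rather than mean.
critic = QuantileCritic(state_dim=4, action_dim=1)
state = torch.zeros(1, 4)
for a in (torch.tensor([[-1.0]]), torch.tensor([[1.0]])):
    q = critic(state, a)
    print(f"action {a.item():+.0f}: "
          f"mean={q.mean().item():.3f}, cvar={cvar(q).item():.3f}")
```

A risk-neutral agent would rank actions by the mean of the quantiles; replacing the mean with CVaR is one common way to obtain the risk-averse behavior the abstract describes.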

Original language: English
Pages (from-to): 2013-2018
Number of pages: 6
Journal: IFAC-PapersOnLine
Volume: 56
Issue number: 2
DOIs
Publication status: Published - 2023
Event: 22nd IFAC World Congress - Yokohama, Japan
Duration: 9 Jul 2023 - 14 Jul 2023

Keywords

  • Distributional reinforcement learning
  • Guidance, navigation and control of vehicles
  • Reinforcement learning control
  • Risk-sensitive learning

