Abstract
Recent aerospace systems increasingly demand model-free controller synthesis, and autonomous operations require adaptability to uncertainties in partially observable environments. This paper applies distributional reinforcement learning to synthesize risk-sensitive, robust, model-free policies for aerospace control. We investigate the use of distributional soft actor-critic (DSAC) agents for flight control and compare their learning characteristics and tracking performance with those of the soft actor-critic (SAC) algorithm. The results show that (1) the addition of distributional critics significantly improves learning consistency, and (2) risk-averse agents increase flight safety by avoiding uncertainties in the environment.
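To make the risk-sensitivity idea concrete: a distributional critic estimates a set of return quantiles rather than a single expected value, and a risk-averse agent can evaluate actions by the Conditional Value-at-Risk (CVaR) of those quantiles, i.e. the mean of the worst-alpha fraction of outcomes. The sketch below is a minimal, hypothetical illustration of that evaluation step; it is not the paper's implementation, and the function name and quantile values are assumptions.

```python
import numpy as np

def cvar_from_quantiles(quantiles, alpha):
    """Risk-averse value: mean of the lowest alpha-fraction of quantile
    estimates produced by a distributional critic (hypothetical sketch)."""
    q = np.sort(np.asarray(quantiles, dtype=float))
    k = max(1, int(np.ceil(alpha * len(q))))  # keep at least one quantile
    return q[:k].mean()

# Example: 8 quantile estimates of the return for one state-action pair.
z = [-2.0, -1.0, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
risk_neutral = float(np.mean(z))             # ordinary expected return: 0.5625
risk_averse = cvar_from_quantiles(z, 0.25)   # mean of the worst 25%: -1.5
```

A risk-neutral agent would rank actions by the plain mean; lowering alpha makes the agent weight bad outcomes more heavily, which is the mechanism behind the safety behaviour reported in the abstract.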
| Original language | English |
|---|---|
| Pages (from-to) | 2013-2018 |
| Number of pages | 6 |
| Journal | IFAC-PapersOnLine |
| Volume | 56 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 2023 |
| Event | 22nd IFAC World Congress - Yokohama, Japan Duration: 9 Jul 2023 → 14 Jul 2023 |
Keywords
- Distributional reinforcement learning
- Guidance, navigation and control of vehicles
- Reinforcement learning control
- Risk-sensitive learning
Title: Risk-sensitive Distributional Reinforcement Learning for Flight Control