Abstract
While reinforcement learning (RL) and supervised learning provide powerful approaches for finding optimal controllers for complex systems, ensuring safety remains a critical challenge. In control problems, safety is typically defined as maintaining state and input constraint satisfaction throughout the system’s evolution. The key issue lies in balancing constraint satisfaction with computational efficiency in the presence of inevitable learning errors. This PhD thesis addresses this challenge across linear, piecewise affine (PWA), and nonlinear systems with various constraint structures.
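The notion of safety described above — keeping states and inputs inside constraint sets along the closed-loop trajectory — can be illustrated with a minimal sketch. This is a hypothetical example (not the thesis's method): a learned input for a discrete-time linear system is clipped to the input constraint, and the resulting next state is checked against a box state constraint. The matrices `A`, `B` and the bounds are illustrative assumptions.

```python
import numpy as np

# Discrete-time linear system x_{k+1} = A x_k + B u_k (illustrative values)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
x_max = np.array([1.0, 0.5])  # state box constraint: |x_i| <= x_max[i]
u_max = 2.0                   # input box constraint: |u| <= u_max

def safety_filter(x, u_learned):
    """Clip a learned (e.g. RL-proposed) input to the input constraint,
    then check whether the resulting next state satisfies the state
    constraint. Returns the filtered input, next state, and a flag."""
    u_safe = float(np.clip(u_learned, -u_max, u_max))
    x_next = A @ x + B.flatten() * u_safe
    feasible = bool(np.all(np.abs(x_next) <= x_max))
    return u_safe, x_next, feasible

x0 = np.array([0.2, 0.0])
# A learning error: the learned policy proposes an input outside the bounds.
u_safe, x_next, ok = safety_filter(x0, u_learned=5.0)
```

Here `u_learned = 5.0` is saturated to `u_safe = 2.0`, and the one-step state check passes; real safety-filter formulations additionally certify constraint satisfaction over the full system evolution, e.g. via invariant sets, which this sketch omits.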
| Original language | English |
|---|---|
| Qualification | Doctor of Philosophy |
| Awarding Institution | |
| Supervisors/Advisors | |
| Award date | 20 Jan 2026 |
| Print ISBNs | 978-90-361-0834-8 |
| Electronic ISBNs | 978-94-6518-184-4 |
| DOIs | |
| Publication status | Published - 2026 |
Keywords
- learning-based control
- optimization-based control
- reinforcement learning (RL)
- safety-critical systems