This paper proposes a novel fault-tolerant control algorithm based on model reference reinforcement learning for autonomous surface vehicles subject to sensor faults and model uncertainties. The proposed control scheme combines a model-based control approach with a data-driven method, so it leverages the advantages of both. The design comprises a baseline controller that ensures stable tracking performance under healthy conditions, a fault observer that estimates sensor faults, and a reinforcement learning module that learns to accommodate the sensor faults, using the fault estimates, and to compensate for model uncertainties. This composite design effectively mitigates the impact of sensor faults and model uncertainties, and stable tracking performance is ensured during both the offline training and online implementation stages of the learning-based fault-tolerant control. A numerical simulation with gyro sensor faults demonstrates the effectiveness of the proposed algorithm.
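The composite structure described above (baseline controller + fault observer + learned correction term) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual design: the low-pass fault observer, the PD baseline law, and the fixed-weight stand-in for the learned policy are all assumptions made for illustration.

```python
import numpy as np

class FaultObserver:
    """Illustrative observer: estimates an additive sensor bias by
    low-pass filtering the measurement residual (placeholder, not the
    paper's observer design)."""
    def __init__(self, alpha=0.1):
        self.alpha = alpha
        self.f_hat = 0.0

    def update(self, residual):
        # residual: measured output minus model-predicted output
        self.f_hat += self.alpha * (residual - self.f_hat)
        return self.f_hat

def baseline_controller(error, k_p=2.0, k_d=0.5, error_rate=0.0):
    """Model-based PD tracking law standing in for the baseline controller."""
    return -k_p * error - k_d * error_rate

def rl_correction(error, f_hat, w=np.array([0.3, 0.8])):
    """Stand-in for the learned policy that accommodates the estimated
    fault and compensates for model uncertainty (weights are arbitrary)."""
    return -float(w @ np.array([error, f_hat]))

def control_step(y_meas, y_ref, observer, y_model):
    """One control step: estimate the sensor fault, correct the
    measurement, then compose the baseline and learned terms."""
    f_hat = observer.update(y_meas - y_model)   # sensor-fault estimate
    y_corrected = y_meas - f_hat                # fault-compensated measurement
    error = y_corrected - y_ref
    return baseline_controller(error) + rl_correction(error, f_hat)
```

Under a constant sensor bias, the filtered estimate `f_hat` converges to the bias, so the corrected measurement fed to both control terms approaches the true output.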
Title of host publication: Proceedings of the 60th IEEE Conference on Decision and Control (CDC 2021)
Publication status: Published - 2021
Event: 60th IEEE Conference on Decision and Control (CDC 2021) - Austin, United States
Duration: 14 Dec 2021 → 17 Dec 2021
Conference: 60th IEEE Conference on Decision and Control (CDC 2021)
Period: 14/12/21 → 17/12/21
Bibliographical note: Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care
Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.