Abstract
This paper describes an implementation of a reinforcement learning framework for the control of a multicopter rotorcraft. The controller is based on Q-learning with continuous states and actions, with the policy stored in a radial basis function (RBF) neural network. Distance-based neuron activation improves the computational performance of the generalization algorithm. Training proceeds offline, using a reduced-order model of the controlled system; the model is identified and stored in the form of a neural network. The framework also incorporates a dynamics-inversion controller based on the identified model. Simulated flight tests confirm the controller's ability to track a reference state signal and to outperform a conventional proportional-derivative (PD) controller. The contributions of the developed framework are a computationally efficient method for storing a Q-function generalization, continuous action selection based on local Q-function approximation, and a combination of model identification and offline learning for inner-loop control of a UAV system.
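The abstract's core ideas (a Q-function stored in an RBF network, distance-based neuron activation for efficiency, and continuous action selection over a local approximation) can be sketched as follows. This is a minimal illustrative reconstruction, not the paper's implementation: the class name, the Gaussian basis, the shared width, the activation cutoff, and the candidate-search action selection are all assumptions for the sake of the example.

```python
import numpy as np

class RBFQFunction:
    """Illustrative sketch: Q(s, a) stored as a linear combination of
    radial basis functions over the joint state-action space."""

    def __init__(self, centers, width=0.5, activation_radius=1.5):
        self.centers = np.asarray(centers)            # (n_neurons, dim) RBF centers
        self.width = width                            # shared Gaussian width (assumed)
        self.activation_radius = activation_radius    # distance cutoff (assumed)
        self.weights = np.zeros(len(self.centers))    # linear output weights

    def _features(self, x):
        # Distance-based activation: only neurons whose centers lie within
        # the cutoff radius contribute, reducing per-query computation.
        d = np.linalg.norm(self.centers - x, axis=1)
        active = d < self.activation_radius
        phi = np.zeros(len(self.centers))
        phi[active] = np.exp(-(d[active] / self.width) ** 2)
        return phi

    def value(self, state, action):
        x = np.concatenate([state, action])
        return float(self._features(x) @ self.weights)

    def update(self, state, action, target, lr=0.1):
        # Standard linear-function-approximation step toward a TD target;
        # the paper's exact update rule may differ.
        x = np.concatenate([state, action])
        phi = self._features(x)
        td_error = target - phi @ self.weights
        self.weights += lr * td_error * phi
        return td_error

    def greedy_action(self, state, candidates):
        # Continuous action selection sketch: the paper fits a local
        # approximation of Q over the action; here we simply evaluate
        # Q at candidate actions and return the maximizer.
        vals = [self.value(state, np.atleast_1d(a)) for a in candidates]
        return candidates[int(np.argmax(vals))]
```

The distance cutoff in `_features` is what makes the generalization cheap at query time: in a large network, only the handful of neurons near the query point are evaluated, rather than the full basis.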
Original language | English |
---|---|
Title of host publication | AIAA Scitech 2019 Forum |
Subtitle of host publication | 7-11 January 2019, San Diego, California, USA |
Number of pages | 19 |
ISBN (Electronic) | 978-1-62410-578-4 |
Publication status | Published - 2019 |
Event | AIAA Scitech Forum, 2019 - San Diego, United States |
Duration | 7 Jan 2019 → 11 Jan 2019 |
Internet address | https://arc.aiaa.org/doi/book/10.2514/MSCITECH19 |
Conference
Conference | AIAA Scitech Forum, 2019 |
---|---|
Country/Territory | United States |
City | San Diego |
Period | 7/01/19 → 11/01/19 |