Visual Navigation and Optimal Control for Autonomous Drone Racing

S. Li

Research output: Thesis › Dissertation (TU Delft)


Abstract

Drones, especially quadrotors, have shown their great value in applications such as aerial photography, object delivery and warehouse inspection. At the same time, with the development of Artificial Intelligence (AI), computers can replace humans, and even outperform them, in areas where this used to be impossible: the AI program AlphaGo beat the human world champion at Go, and AlphaStar was rated above 99.8% of human players in the real-time strategy game StarCraft II. Concerning drones, the question is whether they can fly races completely by themselves, and whether they can fly even faster than the racing drones of human pilots.

Although many technologies exist for autonomous drone flight in terms of navigation, guidance and control, autonomous drone racing still poses an enormous challenge to the robotics community. For example, the most commonly used camera-based navigation technologies, such as Simultaneous Localization and Mapping (SLAM) and Visual Inertial Odometry (VIO), suffer from motion blur when the drone moves fast and have a high computational demand, while processing power is scarce on board. Moreover, the commonly used PID controller offers no guarantee of optimality and requires much parameter tuning. Challenges like these call for new technologies before drones can take on humans in ever more complex and demanding racing scenarios.

This thesis attempts to answer the question raised above. First of all, it presents two systematic solutions for autonomous drone racing, covering navigation, guidance and control. The solutions are so computationally efficient that they can run on board a Bebop 1 quadrotor (from 2014) without using its GPU, and on a cheap 72-gram quadrotor called the 'Trashcan'. Despite the limited processing power and cheap onboard sensors, the Bebop can fly through 15 gates in a complex scenario at an average speed of 1.5 m/s, and the Trashcan can fly three laps of a 4-gate racing track at an average speed of 2 m/s. Both solutions helped the MAVLab of TU Delft participate in the IROS Autonomous Drone Race competitions of 2017 and 2018.

In terms of visual navigation, a computationally efficient gate detection method, the 'snake gate', is developed to detect the racing gates during flight. Together with a revised version of the Perspective-3-Point (P3P) method, the detection results provide position information to the drone. A Kalman filter is developed to fuse these detections with the onboard IMU readings. Unlike the traditional Kalman filter, this version deduces the velocity from the accelerometer readings through a linear drag model approximation instead of integrating the accelerometer, which gives the filter a faster convergence rate. Another filtering method, Visual Model-predictive Localization (VML), is also developed to fuse the vision detections with the onboard attitude estimates. Simulation and real-world flight results show that VML is more robust to outliers than the commonly used Kalman filter, especially when there are invalid measurements, and that it handles measurement delays more efficiently. Finally, a gradient-descent-based parameter estimation method is developed to estimate the quadrotor's aerodynamic coefficients and the Attitude and Heading Reference System (AHRS) biases from the visual measurements and the onboard state predictions. With the estimated parameters, the quadrotor obtains better state predictions during periods in which no visual measurements are available.
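To illustrate the drag-model idea described above, the following is a minimal sketch (not the thesis code) of a Kalman filter whose prediction step models the horizontal velocity with a linear drag term and whose update step treats the accelerometer reading as a velocity pseudo-measurement. The drag coefficient, state layout and noise matrices are illustrative assumptions.

```python
import numpy as np

# Illustrative linear drag coefficient [1/s]; the real value would be
# identified for the specific vehicle (an assumption, not a thesis value).
K_D = 0.5

def predict(x, P, dt, Q):
    """Prediction step with a linear drag model.

    State x = [px, py, vx, vy] in a horizontal, body-aligned frame.
    The velocity dynamics are approximated as v_dot = -K_D * v, rather
    than integrating raw accelerometer readings.
    """
    F = np.array([[1.0, 0.0, dt, 0.0],
                  [0.0, 1.0, 0.0, dt],
                  [0.0, 0.0, 1.0 - K_D * dt, 0.0],
                  [0.0, 0.0, 0.0, 1.0 - K_D * dt]])
    return F @ x, F @ P @ F.T + Q

def update_with_accelerometer(x, P, a_meas, R):
    """Update step using the accelerometer as a velocity pseudo-measurement.

    Under the drag model the horizontal specific force is approximately
    -K_D * v, so v ≈ -a_meas / K_D can be fused as a direct velocity
    measurement, which speeds up convergence compared with integration.
    """
    z = -np.asarray(a_meas) / K_D          # velocity pseudo-measurement
    H = np.array([[0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```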
In terms of guidance and control, a novel neural-network-based nonlinear optimal controller, G&CNet, is developed to steer the drone to its target in minimum time. G&CNet moves the otherwise time-consuming nonlinear optimal control computation on board: a network trained offboard on optimal control solutions maps the current state to the optimal control command, and it can run at 200 Hz (a minimal sketch of this idea follows at the end of the abstract). Simulation results show that the resulting flight is very close to the theoretical nonlinear optimal control solution. Both simulation and real-world flights show that it is faster than a commonly used polynomial-based trajectory generation and tracking method.

Last but not least, the proposed methods can be generalized to other applications. For outdoor flight, where the Global Positioning System (GPS) is available for navigation, the GPS signals can directly replace the vision measurements in the proposed navigation strategies. The proposed G&CNet should work in any scenario where guidance and control modules are needed to move the drone from one point to another. In this way, the proposed methods allow drones to move faster in a robust way, extending their mission capabilities.
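The sketch below illustrates the G&CNet concept under stated assumptions: a small feedforward network, trained offboard on state/control pairs sampled from numerically solved minimum-time trajectories, is evaluated in a fixed-rate onboard loop. The architecture, the 200 Hz loop structure, and the callbacks `get_state_estimate` and `send_command` are hypothetical, not the thesis implementation.

```python
import time
import numpy as np

class GCNet:
    """Minimal G&CNet-style policy: a small feedforward network mapping the
    current state to a (near) minimum-time control command. The weights are
    assumed to have been trained offboard on optimal-control solutions; the
    layer count and tanh activations are illustrative, not the thesis design.
    """

    def __init__(self, weights, biases):
        self.weights = weights  # list of (n_out, n_in) arrays
        self.biases = biases    # list of (n_out,) arrays

    def __call__(self, state):
        h = np.asarray(state)
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            h = np.tanh(W @ h + b)                      # hidden layers
        return self.weights[-1] @ h + self.biases[-1]   # control command

def control_loop(net, get_state_estimate, send_command, rate_hz=200, steps=1000):
    """Fixed-rate control loop: one cheap forward pass per 5 ms tick.

    `get_state_estimate` and `send_command` are hypothetical hooks into the
    state estimator and the low-level actuator interface.
    """
    dt = 1.0 / rate_hz
    for _ in range(steps):
        t0 = time.perf_counter()
        send_command(net(get_state_estimate()))
        time.sleep(max(0.0, dt - (time.perf_counter() - t0)))
```

Because inference is a handful of small matrix products, the policy fits the compute budget of cheap flight hardware, which is the point of moving the optimal control solution offboard.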
Original language: English
Qualification: Doctor of Philosophy
Awarding Institution
  • Delft University of Technology
Supervisors/Advisors
  • de Croon, G.C.H.E., Supervisor
  • de Visser, C.C., Advisor
Award date: 12 Nov 2020
Print ISBNs: 978-94-6384-175-7
DOIs
Publication status: Published - 2020

Keywords

  • Autonomous drone racing
  • visual navigation
  • nonlinear model-predictive control
