Visual Navigation for Tiny Drones

Research output: Thesis › Dissertation (TU Delft)


Abstract

In recent years, the use of drones in practical applications has increased rapidly, for instance in inspection, agriculture, and environmental research. Most of these drones have a span on the order of tens of centimeters and weigh half a kilogram or more. Smaller drones offer advantages in terms of safety and cost, but their reduced payload capacity makes it difficult to carry the sensors and computers required for autonomous operation.

One of the most essential tasks an autonomous drone needs to perform is navigation. Here, navigation is defined as the ability to move towards a specified location while avoiding obstacles along the way. Ideally, the drone should also remember traveled routes, to make the return journey more efficient. However, on tiny drones (palm-sized or smaller) the on-board processing power is often limited to a single microcontroller, and only a narrow selection of sensors is available. Cameras are popular sensors for tiny drones because they are small, lightweight, and passive, although they do require some processing power to produce useful results. The goal of this dissertation is to find a new visual navigation strategy that fits within the constraints of these tiny drones.

First, existing work on visual perception and obstacle avoidance is reviewed. Multiple options exist for visual perception: stereo vision, optical flow, and monocular vision. These options are discussed and compared, leading to the conclusion that stereo vision performs best at shorter distances, albeit at the cost of an additional camera, while monocular vision performs better at longer distances. Optical flow is ruled out for avoidance, as it has excessively large errors precisely in the direction of movement.
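To illustrate why, consider a pinhole camera under pure forward motion: the flow magnitude at an image point is proportional to its distance from the focus of expansion, so flow-based depth estimates diverge exactly where the drone is heading. A minimal sketch of this first-order error analysis (all numbers are illustrative assumptions, not values from the dissertation):

```python
import numpy as np

# For pure forward motion, the translational flow at image radius r (px)
# from the focus of expansion (FoE) is |u| = r * V / Z, so Z = r * V / |u|.
# Near the FoE the flow vanishes and the relative depth error blows up.
# All numbers below are illustrative assumptions.
V = 1.0          # forward speed [m/s]
Z_true = 5.0     # true obstacle depth [m]
sigma_u = 0.5    # flow measurement noise [px/s]

for r in [200.0, 50.0, 10.0, 2.0]:      # distance from the FoE [px]
    u = r * V / Z_true                  # ideal flow magnitude [px/s]
    rel_err = sigma_u / u               # first-order: dZ/Z = du/u
    print(f"r = {r:5.1f} px -> flow = {u:5.2f} px/s, "
          f"depth error ~ {100 * rel_err:6.1f} %")
```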
For avoidance, the options for motion planning, map types, and odometry are discussed. Perhaps unsurprisingly, the optimal choice is found to depend on the application. For computational efficiency on tiny drones, the most important choice is whether multiple measurements should be fused into a single map, or whether individual percepts are good enough for avoidance; the latter is significantly less computationally demanding. For visual odometry, depth information should be used if available, and the IMU can provide efficiency benefits in feature tracking. Preliminary results are shown for monocular vision, visual odometry, and obstacle avoidance.
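As an illustration of the IMU benefit in feature tracking: gyroscope measurements predict how features shift between frames due to rotation, so the matcher only needs to search a small window around the prediction. A minimal sketch with an assumed pinhole camera (not the dissertation's implementation):

```python
import numpy as np

# IMU-aided feature tracking: predict a feature's next position from the
# gyro rotation between frames (small-angle approximation), so the search
# window can be kept small. Intrinsics and sign conventions are assumed.
K = np.array([[300.0,   0.0, 160.0],
              [  0.0, 300.0, 120.0],
              [  0.0,   0.0,   1.0]])   # assumed pinhole camera matrix

def skew(w):
    """Cross-product matrix of a 3-vector."""
    return np.array([[  0.0, -w[2],  w[1]],
                     [ w[2],   0.0, -w[0]],
                     [-w[1],  w[0],   0.0]])

def predict_feature(px, gyro, dt):
    """Predict the pixel position after a camera rotation of gyro*dt.
    The scene appears to rotate opposite to the camera, hence R.T."""
    R = np.eye(3) + skew(gyro * dt)                 # incremental rotation
    ray = np.linalg.inv(K) @ np.array([px[0], px[1], 1.0])
    uvw = K @ (R.T @ ray)
    return uvw[:2] / uvw[2]

# A 1 rad/s pan over a 20 ms frame interval already shifts the image
# center by about 6 pixels, more than a typical search radius:
print(predict_feature((160.0, 120.0), np.array([0.0, 1.0, 0.0]), 0.02))
```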

Second, the dissertation takes a deeper dive into monocular depth estimation. Monocular depth estimation has the advantage that it needs only a single camera, which saves valuable weight on tiny drones, but its processing is more complex. The goal of this chapter is to analyze the learned behavior of neural networks for monocular depth perception, to see if it can be distilled into simple, lightweight algorithms. Using experiments based on data augmentation, it is shown that all four of the analyzed networks rely on the vertical position of objects in the image to estimate their depth. While this cue would be simple to replicate, it does depend on a known pose of the camera. Further investigation shows that the networks have a strong prior assumption about this pose, which may make transfer to drones more difficult. Finally, the networks need to have some sense of an 'object'. It is shown that various shapes are recognized as an object, provided that they have contrasting outlines and a dark shadow at the bottom. While this last feature is clearly present in the car-based KITTI dataset, it may not transfer directly to other environments. The vertical position cue, however, can likely be used to provide monocular depth estimates on resource-limited systems such as tiny drones.
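To indicate how cheap this cue can be: under a flat-ground assumption with a level camera at a known height, the depth of a point touching the ground follows from its image row by similar triangles. A minimal sketch with assumed, KITTI-like parameters (not the networks' actual computation):

```python
import numpy as np

# Depth from the vertical image position of a ground contact point.
# Flat ground, level camera at known height, pinhole model; all
# parameter values are assumptions for illustration.
f = 300.0       # focal length [px]
v0 = 120.0      # horizon row (principal point for a level camera) [px]
h_cam = 1.65    # camera height above the ground [m], KITTI-like

def depth_from_row(v):
    """Ground-point depth from image row v (v grows downward)."""
    if v <= v0:
        return np.inf               # at or above the horizon: unbounded
    return f * h_cam / (v - v0)     # similar triangles: Z = f * h / (v - v0)

for v in [130, 160, 220]:
    print(f"row {v}: depth ~ {depth_from_row(v):5.1f} m")
```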

Third, the ability to remember traveled routes is investigated. Traditional mapping strategies from robotics would quickly run out of memory on microcontrollers, especially over longer trajectories. Instead, inspiration for a memory-efficient route-following strategy is found in nature, where insects remember and follow remarkably long routes despite their tiny brains. Their strategy is commonly decomposed into a few components, most notably path integration (odometry, in robotics terms) and visual homing. We implement a novel strategy based on these components on a 56-gram drone. The focus lies on traveling long distances using odometry, while periodically using visual homing to return to known locations and counteract odometric drift. The proposed strategy is demonstrated in multiple experiments, where the most efficient run required only 0.65 kilobytes to remember a route of 56 meters. This shows that tiny drones can retrace known paths by combining odometry with periodic homing maneuvers.
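A back-of-the-envelope budget shows why such a route memory fits on a microcontroller. The layout below is hypothetical (the dissertation's on-board format may differ): odometry legs stored as two 16-bit coordinates each, plus a compact binary snapshot descriptor per homing waypoint.

```python
# Rough memory budget for an insect-inspired route memory.
# Illustrative numbers only; the actual on-board format may differ.
route_length_m = 56                      # route length from the experiments
leg_length_m = 4                         # odometry leg between waypoints (assumed)
n_waypoints = route_length_m // leg_length_m

odometry_bytes = n_waypoints * 2 * 2     # two int16 coordinates per leg
snapshot_bytes = n_waypoints * 32        # 256-bit snapshot descriptor (assumed)

total = odometry_bytes + snapshot_bytes
print(f"{n_waypoints} waypoints -> {total} bytes ({total / 1024:.2f} kB)")
# 14 waypoints -> 504 bytes (0.49 kB): the same order of magnitude as the
# 0.65 kB reported for the 56-meter route.
```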

Finally, the avoidance of obstacles is discussed in the conclusion of this dissertation. This research was performed by MSc students under my supervision. They found and demonstrated that bug algorithms are effective in three-dimensional, limited-field-of-view settings and provide a lightweight, goal-oriented avoidance strategy suitable for tiny drones.
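For reference, the core of a bug algorithm is a two-state controller: head straight for the goal, and follow the obstacle boundary while the path is blocked, resuming goal-seeking once progress can be made. A minimal 2D sketch of this idea (the students' three-dimensional, limited-field-of-view variants are considerably more involved):

```python
import numpy as np

GO_TO_GOAL, FOLLOW_WALL = 0, 1

def bug_step(pos, goal, blocked, state, hit_dist):
    """One step of a Bug-style planner (2D toy version).
    blocked:  perception reports an obstacle ahead.
    hit_dist: distance to goal when boundary following began.
    Returns (unit heading, new state, hit_dist)."""
    to_goal = goal - pos
    dist = np.linalg.norm(to_goal)
    d = to_goal / dist
    if state == GO_TO_GOAL:
        if not blocked:
            return d, GO_TO_GOAL, hit_dist      # clear: head for the goal
        state, hit_dist = FOLLOW_WALL, dist     # blocked: start following
    # FOLLOW_WALL: leave only when the way is clear *and* we are closer
    # to the goal than where we first hit the obstacle (Bug-like progress).
    if not blocked and dist < hit_dist:
        return d, GO_TO_GOAL, hit_dist
    return np.array([-d[1], d[0]]), FOLLOW_WALL, hit_dist  # keep obstacle right

# Example: blocked while heading to (10, 0) -> sidestep along (0, 1).
print(bug_step(np.array([0.0, 0.0]), np.array([10.0, 0.0]),
               True, GO_TO_GOAL, np.inf))
```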

Combining all of the above results yields a full navigation strategy for tiny drones: they can visually navigate by using lightweight monocular vision algorithms to perceive obstacles, three-dimensional bug algorithms to avoid them while moving to new locations, and odometry with visual homing to retrace known paths.
Original language: English
Qualification: Doctor of Philosophy
Awarding Institution
  • Delft University of Technology
Supervisors/Advisors
  • de Croon, G.C.H.E., Supervisor
  • de Wagter, C., Advisor
Award date: 15 Nov 2024
Print ISBNs: 978-94-6384-675-2
Electronic ISBNs: 978-94-6384-675-2
Publication status: Published - 2024

Keywords

  • Micro Aerial Vehicles
  • MAV
  • Visual navigation
  • Obstacle avoidance
  • Route following
  • Depth perception
  • Computer vision
