CNN-based Ego-Motion Estimation for Fast MAV Maneuvers

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

Abstract

In the field of visual ego-motion estimation for Micro Air Vehicles (MAVs), fast maneuvers remain challenging, mainly because of large visual disparity and motion blur. In pursuit of higher robustness, we study convolutional neural networks (CNNs) that predict the relative pose between subsequent images from a fast-moving monocular camera facing a planar scene. Aided by the Inertial Measurement Unit (IMU), we focus mainly on translational motion. The networks we study have similarly small model sizes (around 1.35 MB) and high inference speeds (around 10 milliseconds on a mobile GPU). Images for training and testing have realistic motion blur. Starting from a network framework that iteratively warps the first image to match the second with cascaded network blocks, we study different network architectures and training strategies. Simulated datasets and a self-collected MAV flight dataset are used for evaluation. The proposed setup shows better accuracy than existing networks and traditional feature-point-based methods during fast maneuvers. Moreover, self-supervised learning outperforms supervised learning. Videos and open-source code are available at https://github.com/tudelft/PoseNet_Planar
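The cascaded warp-and-refine idea in the abstract can be illustrated outside the paper's own networks. In this minimal sketch (an assumption for illustration, not the authors' implementation), phase correlation stands in for a CNN block, a circular pixel shift stands in for the full planar homography warp, and the function names `warp_planar`, `predict_shift`, and `cascaded_estimate` are all hypothetical: each stage estimates a residual translation between the warped first image and the second image, and the residuals are accumulated.

```python
import numpy as np

def warp_planar(img, shift):
    """Placeholder warp: circularly shift the image by integer pixels
    (a real system would apply a homography for the planar scene)."""
    dy, dx = shift
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def predict_shift(img_a, img_b):
    """Stand-in for one network block: recover the dominant integer
    translation from img_a to img_b via phase correlation."""
    f = np.conj(np.fft.fft2(img_a)) * np.fft.fft2(img_b)
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Indices past the midpoint correspond to negative shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return np.array([dy, dx])

def cascaded_estimate(img1, img2, n_blocks=3):
    """Iteratively warp img1 toward img2, accumulating the estimate
    across cascaded refinement stages."""
    total = np.zeros(2, dtype=int)
    for _ in range(n_blocks):
        warped = warp_planar(img1, tuple(total))
        total += predict_shift(warped, img2)
    return total
```

Each later stage sees a smaller residual disparity than the first, which is the same motivation the abstract gives for cascading blocks under large inter-frame motion.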
Original language: English
Title of host publication: 2021 IEEE International Conference on Robotics and Automation (ICRA)
Subtitle of host publication: Proceedings
Publisher: IEEE
Pages: 7606-7612
Number of pages: 7
ISBN (Electronic): 978-1-7281-9077-8
ISBN (Print): 978-1-7281-9078-5
DOIs
Publication status: Published - 2021
Event: ICRA 2021: IEEE International Conference on Robotics and Automation - Hybrid at Xi'an, China
Duration: 30 May 2021 - 5 Jun 2021

Conference

Conference: ICRA 2021
Country: China
City: Hybrid at Xi'an
Period: 30/05/21 - 5/06/21
