A Lightweight Learning-based Visual-Inertial Odometry

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

In this paper, we propose a lightweight learning-based visual-inertial odometry (VIO) system built on an uncertainty-aware pose network and an extended Kalman filter (EKF). The pose network, serving as the VIO vision front-end, predicts the relative motion of the camera between consecutive image frames and estimates the uncertainty of each prediction. The pose network can be trained without ground-truth labels. The resulting distributions over visual measurements are fused with inertial measurements by an EKF, which forms the VIO back-end. Evaluations show that the proposed VIO does not outperform a state-of-the-art feature-point-based VIO solution in accuracy, but it offers high time efficiency, metric-scale translational motion estimation, gravity-direction estimation, and generalization to new environments. Consequently, unlike most work on learning-based visual ego-motion estimation in the literature, the proposed VIO can be deployed directly on an MAV. Comparative studies of supervision signals and of forms of translational motion prediction provide insights for future research.
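
The paper itself does not include code; the following is a minimal, self-contained sketch of the fusion idea described in the abstract, assuming a simplified planar state and treating the network's relative-translation prediction as a velocity-scaled measurement. All names (ekf_propagate, ekf_visual_update) and numerical values are illustrative, not the authors' implementation.

```python
import numpy as np

def ekf_propagate(x, P, accel, dt, Q):
    """IMU propagation for a toy planar state x = [px, py, vx, vy]."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt            # position integrates velocity
    x = F @ x
    x[2:] += accel * dt               # velocity integrates acceleration
    P = F @ P @ F.T + Q               # grow uncertainty by process noise
    return x, P

def ekf_visual_update(x, P, z, R, dt):
    """Fuse a relative-translation measurement from the pose network.
    The translation over one frame interval approximates velocity * dt,
    so H selects the velocity block; R is the covariance the network
    predicts for this particular measurement (illustrative model)."""
    H = np.zeros((2, 4))
    H[0, 2] = H[1, 3] = dt
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Usage: several IMU steps between two camera frames, then one
# visual update weighted by the network's uncertainty estimate.
x, P = np.zeros(4), np.eye(4)
Q = 1e-3 * np.eye(4)
for _ in range(3):
    x, P = ekf_propagate(x, P, accel=np.array([0.5, 0.0]), dt=0.01, Q=Q)
z = np.array([0.015, 0.0])            # network's predicted translation
R = np.diag([1e-4, 1e-4])             # network's predicted uncertainty
x, P = ekf_visual_update(x, P, z, R, dt=0.03)
```

The point of the uncertainty-aware design is visible in the update step: a prediction with large R contributes little to the state, while a confident prediction (small R) dominates the correction.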
Original language: English
Title of host publication: 14th Annual International Micro Air Vehicle Conference and Competition
Editors: D. Moormann
Pages: 65-72
Publication status: Published - 2023
Event: 14th Annual International Micro Air Vehicle Conference and Competition - Aachen, Germany
Duration: 11 Sept 2023 - 15 Sept 2023
Conference number: 14
Internet address: https://2023.imavs.org/

Conference

Conference: 14th Annual International Micro Air Vehicle Conference and Competition
Abbreviated title: IMAV 2023
Country/Territory: Germany
City: Aachen
Period: 11/09/23 - 15/09/23
Internet address: https://2023.imavs.org/
