Cross-sensor deep domain adaptation for LiDAR detection and segmentation

Christoph Rist, Markus Enzweiler, Dariu Gavrila

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › Peer-reviewed

28 Citations (Scopus)
554 Downloads (Pure)

Abstract

A considerable amount of annotated training data is necessary to achieve state-of-the-art performance in perception tasks using point clouds. Unlike RGB images, LiDAR point clouds captured with different sensors or varied mounting positions exhibit a significant shift in their input data distribution. This can impede the transfer of trained feature extractors between datasets, since it severely degrades performance. We analyze the transferability of point cloud features between two different LiDAR sensor setups (32 and 64 vertical scanning planes with different geometry). We propose a supervised training methodology to learn transferable features in a pre-training step on LiDAR datasets that are heterogeneous in their data and label domains. In extensive experiments on object detection and semantic segmentation in a multi-task setup, we analyze the performance of our network architecture under the impact of a change in the input data domain. We show that our pre-training approach effectively increases performance for both target tasks at once, without an actual multi-task dataset being available for pre-training.
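The cross-sensor gap described above (32 vs. 64 vertical scanning planes) can be illustrated with a minimal sketch: decimating a 64-plane scan to 32 planes by dropping every other ring. This is only a toy illustration of the input-distribution shift, not the paper's method; the data layout (`ring_index, x, y, z` tuples) and the `subsample_scan` helper are assumptions for the example.

```python
# Toy illustration (not the paper's method): simulate a coarser LiDAR
# sensor by keeping every second vertical scanning plane of a 64-plane scan.

def subsample_scan(points, keep_every=2):
    """Keep points whose ring index is a multiple of `keep_every`.

    `points` is an iterable of (ring_index, x, y, z) tuples; a 64-plane
    sensor has ring_index in 0..63. Returns the decimated point list.
    """
    return [p for p in points if p[0] % keep_every == 0]

# Toy 64-plane scan: one point per scanning plane.
scan64 = [(ring, 1.0, 0.0, 0.1 * ring) for ring in range(64)]
scan32 = subsample_scan(scan64)

print(len({p[0] for p in scan32}))  # 32 distinct scanning planes remain
```

Note that real cross-sensor shift also involves different beam geometry and mounting position, which simple decimation does not capture; this is exactly why the paper studies learned transferable features rather than resampling alone.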
Original language: English
Title of host publication: Proceedings IEEE Symposium Intelligent Vehicles (IV 2019)
Place of Publication: Piscataway, NJ, USA
Publisher: IEEE
Pages: 1535-1542
ISBN (Electronic): 978-1-7281-0560-4
DOIs
Publication status: Published - 2019
Event: IEEE Intelligent Vehicles Symposium 2019 - Paris, France
Duration: 9 Jun 2019 – 12 Jun 2019

Conference

Conference: IEEE Intelligent Vehicles Symposium 2019
Abbreviated title: IV 2019
Country/Territory: France
City: Paris
Period: 9/06/19 – 12/06/19

Bibliographical note

Accepted Author Manuscript
