Generating 3D person trajectories from sparse image annotations in an intelligent vehicles setting

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › Peer-reviewed

1 Citation (Scopus)
125 Downloads (Pure)

Abstract

This paper presents an approach to generate dense person 3D trajectories from sparse image annotations on-board a moving platform. Our approach leverages the additional information that is typically available in an intelligent vehicle setting, such as LiDAR sensor measurements (to obtain 3D positions from detected 2D image bounding boxes) and inertial sensing (to perform ego-motion compensation). The sparse manual 2D person annotations that are available at regular time intervals (key-frames) are augmented with the output of a state-of-the-art 2D person detector, to obtain frame-wise data. A graph-based batch optimization approach is subsequently performed to find the best 3D trajectories, accounting for erroneous person detector output (false positives, false negatives, imprecise localization) and unknown temporal correspondences. Experiments on the EuroCity Persons dataset show promising results.
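The pipeline described above combines LiDAR depth (to lift 2D detections to 3D) with inertial ego-motion compensation (to express positions in a fixed frame). As a rough illustration of these two steps only, here is a minimal sketch; the function names, the (u, v, depth) point format, and the planar (x, y, yaw) ego pose are assumptions for the example, not the paper's actual implementation:

```python
import math

def lidar_points_in_box(points_uvz, box):
    """Keep projected LiDAR points (u, v, depth) that fall inside a
    detected 2D bounding box (u_min, v_min, u_max, v_max)."""
    u0, v0, u1, v1 = box
    return [p for p in points_uvz if u0 <= p[0] <= u1 and v0 <= p[1] <= v1]

def median_depth(points_uvz):
    """Robust depth estimate for the person: median of the box's LiDAR depths."""
    depths = sorted(p[2] for p in points_uvz)
    n = len(depths)
    mid = n // 2
    return depths[mid] if n % 2 else 0.5 * (depths[mid - 1] + depths[mid])

def ego_compensate(xy_vehicle, ego_pose):
    """Transform a ground-plane point from the vehicle frame into a fixed
    world frame, given an ego pose (x, y, yaw) from inertial sensing."""
    ex, ey, yaw = ego_pose
    x, y = xy_vehicle
    return (ex + x * math.cos(yaw) - y * math.sin(yaw),
            ey + x * math.sin(yaw) + y * math.cos(yaw))

# Example: a point 5 m ahead, 2 m left of a vehicle at world position (10, 0)
# driving along the world x-axis (yaw = 0) lands at world (15, 2).
print(ego_compensate((5.0, 2.0), (10.0, 0.0, 0.0)))
```

In the paper these per-frame 3D positions are then fed into the graph-based batch optimization, which resolves temporal correspondences and detector errors; that stage is not sketched here.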
Original language: English
Title of host publication: Proceedings 2019 IEEE Intelligent Transportation Systems Conference (ITSC 2019)
Place of publication: Piscataway, NJ, USA
Publisher: IEEE
Pages: 783-788
ISBN (Print): 978-1-5386-7024-8
DOIs
Publication status: Published - 2019
Event: IEEE Intelligent Transportation Systems Conference - Auckland, New Zealand
Duration: 27 Oct 2019 - 30 Oct 2019

Conference

Conference: IEEE Intelligent Transportation Systems Conference
Abbreviated title: ITSC 2019
Country/Territory: New Zealand
City: Auckland
Period: 27/10/19 - 30/10/19

Bibliographical note

Accepted Author Manuscript

Keywords

  • Multi-Object Tracking
  • Intelligent Vehicles

