Abstract
This paper presents an approach to generate dense 3D person trajectories from sparse image annotations captured on board a moving platform. Our approach leverages the additional information typically available in an intelligent-vehicle setting, such as LiDAR sensor measurements (to obtain 3D positions from detected 2D image bounding boxes) and inertial sensing (to perform ego-motion compensation). The sparse manual 2D person annotations, available at regular time intervals (key-frames), are augmented with the output of a state-of-the-art 2D person detector to obtain frame-wise data. A graph-based batch optimization is subsequently performed to find the best 3D trajectories, accounting for erroneous person detector output (false positives, false negatives, imprecise localization) and unknown temporal correspondences. Experiments on the EuroCity Persons dataset show promising results.
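The abstract mentions lifting detected 2D bounding boxes to 3D positions using LiDAR. A common way to do this (a minimal sketch, not the paper's exact method; function name, median aggregation, and the assumption that LiDAR points are already transformed into the camera frame are all illustrative) is to project the LiDAR points into the image, keep those falling inside the box, and take a robust statistic of their 3D coordinates:

```python
import numpy as np

def bbox_to_3d_position(bbox, lidar_points_cam, K):
    """Estimate a 3D position for a 2D bounding box from LiDAR points.

    bbox: (x_min, y_min, x_max, y_max) in pixels.
    lidar_points_cam: (N, 3) LiDAR points already transformed into the
        camera frame (x right, y down, z forward) -- assumed given here.
    K: (3, 3) camera intrinsic matrix.
    Returns a (3,) 3D position, or None if no points fall inside the box.
    """
    # Keep only points in front of the camera.
    pts = lidar_points_cam[lidar_points_cam[:, 2] > 0]
    # Project to the image plane: uv = (K p) / z.
    proj = (K @ pts.T).T
    uv = proj[:, :2] / proj[:, 2:3]
    x0, y0, x1, y1 = bbox
    inside = ((uv[:, 0] >= x0) & (uv[:, 0] <= x1) &
              (uv[:, 1] >= y0) & (uv[:, 1] <= y1))
    if not inside.any():
        return None
    # Median is robust against background points caught in the box.
    return np.median(pts[inside], axis=0)
```

The median over the in-box points is one simple choice for robustness to background clutter; the paper's graph-based optimization would then consume such per-frame 3D measurements together with ego-motion compensation from inertial sensing.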
Original language | English |
---|---|
Title of host publication | Proceedings 2019 IEEE Intelligent Transportation Systems Conference (ITSC 2019) |
Place of Publication | Piscataway, NJ, USA |
Publisher | IEEE |
Pages | 783-788 |
ISBN (Print) | 978-1-5386-7024-8 |
DOIs | |
Publication status | Published - 2019 |
Event | IEEE Intelligent Transportation Systems Conference, Auckland, New Zealand, 27 Oct 2019 → 30 Oct 2019 |
Conference
Conference | IEEE Intelligent Transportation Systems Conference |
---|---|
Abbreviated title | ITSC 2019 |
Country/Territory | New Zealand |
City | Auckland |
Period | 27/10/19 → 30/10/19 |
Bibliographical note
Accepted Author Manuscript
Keywords
- Multi-Object Tracking
- Intelligent Vehicles