Simple Pair Pose - Pairwise Human Pose Estimation in Dense Urban Traffic Scenes

Markus Braun, Fabian B. Flohr, Sebastian Krebs, Ulrich Kresse, Dariu M. Gavrila

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

4 Citations (Scopus)

Abstract

Despite the success of deep learning, human pose estimation remains a challenging problem, in particular in dense urban traffic scenarios. Its robustness is important for follow-up tasks such as trajectory prediction and gesture recognition. We are interested in human pose estimation in crowded scenes with overlapping pedestrians, in particular in pairwise constellations. We propose a new top-down method that takes pairwise detections as input and jointly estimates the two poses of such a pair in a single forward pass of a deep convolutional neural network. As the availability of automotive datasets that provide poses and a fair amount of crowded scenes is limited, we extend the EuroCity Persons dataset with additional images and pose annotations. With 46,975 images and poses of 279,329 persons, our new EuroCity Persons Dense Pose dataset is the largest pose dataset recorded from a moving vehicle. In experiments on this dataset we show improved performance for poses of pedestrian pairs compared with a state-of-the-art method for human pose estimation in crowds.
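To illustrate the joint-estimation idea described in the abstract, the sketch below shows a minimal top-down network that takes an image crop around a detected pedestrian pair and predicts two stacked sets of keypoint heatmaps in a single forward pass. This is not the authors' implementation: the PyTorch framework, ResNet-50 backbone, COCO-style 17-keypoint set, head layout, and tensor shapes are all illustrative assumptions.

# Illustrative sketch (not the authors' code): one forward pass over a pair crop
# yields keypoint heatmaps for both persons of the pair.
import torch
import torch.nn as nn
import torchvision

NUM_KEYPOINTS = 17  # assumption: COCO-style keypoint set

class PairPoseNet(nn.Module):
    def __init__(self, num_keypoints: int = NUM_KEYPOINTS):
        super().__init__()
        # Shared backbone over the pair crop (assumption: ResNet-50 trunk).
        resnet = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # B x 2048 x H/32 x W/32
        # Upsampling head with 2 * num_keypoints output channels:
        # first half for person A, second half for person B.
        self.head = nn.Sequential(
            nn.ConvTranspose2d(2048, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 2 * num_keypoints, kernel_size=1),
        )

    def forward(self, pair_crop: torch.Tensor):
        feats = self.backbone(pair_crop)
        heatmaps = self.head(feats)               # B x (2*K) x h x w
        k = heatmaps.shape[1] // 2
        return heatmaps[:, :k], heatmaps[:, k:]   # one heatmap stack per person in the pair

if __name__ == "__main__":
    model = PairPoseNet()
    crop = torch.randn(1, 3, 256, 192)            # crop around a pedestrian-pair detection
    hm_a, hm_b = model(crop)
    print(hm_a.shape, hm_b.shape)                 # both: 1 x 17 x 32 x 24

The key design point conveyed by the abstract is that both poses come from one shared forward pass over the pair detection, rather than running a single-person pose network twice on overlapping boxes.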

Original language: English
Title of host publication: Proceedings of the 32nd IEEE Intelligent Vehicles Symposium, IV 2021
Place of publication: Piscataway, NJ, USA
Publisher: IEEE
Pages: 1545-1552
ISBN (Electronic): 978-1-7281-5394-0
DOIs
Publication status: Published - 2021
Event: 32nd IEEE Intelligent Vehicles Symposium, IV 2021 - Nagoya, Japan
Duration: 11 Jul 2021 - 17 Jul 2021

Conference

Conference: 32nd IEEE Intelligent Vehicles Symposium, IV 2021
Country/Territory: Japan
City: Nagoya
Period: 11/07/21 - 17/07/21

