EuroCity persons: A novel benchmark for person detection in traffic scenes

Research output: Contribution to journal › Article › Scientific › peer-review

176 Citations (Scopus)
631 Downloads (Pure)

Abstract

Big data has played a major role in the success of deep learning in computer vision. Recent work suggests that there is significant further potential to increase object detection performance by utilizing even bigger datasets. In this paper, we introduce the EuroCity Persons dataset, which provides a large number of highly diverse, accurate, and detailed annotations of pedestrians, cyclists, and other riders in urban traffic scenes. The images for this dataset were collected on board a moving vehicle in 31 cities of 12 European countries. With over 238,200 person instances manually labeled in over 47,300 images, EuroCity Persons is nearly one order of magnitude larger than datasets previously used for person detection in traffic scenes. The dataset furthermore contains a large number of person orientation annotations (over 211,200). We optimize four state-of-the-art deep learning approaches (Faster R-CNN, R-FCN, SSD, and YOLOv3) to serve as baselines for the new object detection benchmark. In experiments with previous datasets, we analyze the generalization capabilities of these detectors when trained with the new dataset. We furthermore study the effect of training set size, dataset diversity (day- versus night-time, geographical region), dataset detail (i.e., availability of object orientation information), and annotation quality on detector performance. Finally, we analyze error sources and discuss the road ahead.
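To make the kind of annotation described above concrete, the sketch below shows one way a single labeled person instance (class label, bounding box, optional orientation angle) might be represented and tallied per class. It is a minimal illustration only: the names PersonAnnotation, bbox, orientation_deg, and count_per_class are assumptions for this example, not the dataset's actual schema or official tooling.

```python
# Minimal sketch of an annotation record in the spirit of EuroCity Persons.
# Field and helper names are illustrative assumptions, not the dataset's schema.
from dataclasses import dataclass
from collections import Counter
from typing import Iterable, Optional, Tuple


@dataclass
class PersonAnnotation:
    label: str                                # e.g. "pedestrian", "cyclist", "other-rider"
    bbox: Tuple[float, float, float, float]   # (x0, y0, x1, y1) in image pixels
    orientation_deg: Optional[float] = None   # body orientation, if annotated


def count_per_class(annotations: Iterable[PersonAnnotation]) -> Counter:
    """Count labeled person instances per class, e.g. to inspect class balance."""
    return Counter(a.label for a in annotations)


if __name__ == "__main__":
    sample = [
        PersonAnnotation("pedestrian", (120.0, 200.0, 160.0, 320.0), orientation_deg=90.0),
        PersonAnnotation("cyclist", (400.0, 180.0, 470.0, 330.0), orientation_deg=270.0),
        PersonAnnotation("pedestrian", (600.0, 210.0, 630.0, 300.0)),  # no orientation label
    ]
    print(count_per_class(sample))  # Counter({'pedestrian': 2, 'cyclist': 1})
```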

Original language: English
Pages (from-to): 1844-1861
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 41
Issue number: 8
DOIs
Publication status: Published - 2019

Bibliographical note

Accepted Author Manuscript

Keywords

  • Object detection
  • benchmarking
