Identifying multiple objects from their appearance in inaccurate detections

Julian F.P. Kooij, Gwenn Englebienne, Dariu M. Gavrila*

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › Peer-reviewed

5 Citations (Scopus)


We propose a novel method for keeping track of multiple objects in provided regions of interest, i.e. object detections, specifically in cases where a single object results in multiple co-occurring detections (e.g. when objects exhibit unusual size or pose) or a single detection spans multiple objects (e.g. during occlusion). Our method identifies a minimal set of objects to explain the observed features, which are extracted from the regions of interest in a set of frames. Focusing on appearance rather than temporal cues, we treat video as an unordered collection of frames, and "unmix" object appearances from inaccurate detections within a Latent Dirichlet Allocation (LDA) framework, for which we propose an efficient Variational Bayes inference method. After the objects have been localized and their appearances have been learned, we can use the posterior distributions to "back-project" the assigned object features to the image and obtain segmentation at pixel level. In experiments on challenging datasets, we show that our batch method outperforms state-of-the-art batch and on-line multi-view trackers in terms of number of identity switches and proportion of correctly identified objects. We make our software and new dataset publicly available for non-commercial, benchmarking purposes.
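The core idea — treating each detection as a "document" whose "words" are appearance features, and unmixing a minimal set of object appearance models from them — can be illustrated with a toy sketch. The code below uses a simplified pLSA-style EM as a stand-in for the paper's Variational Bayes LDA inference; the color vocabulary, detection counts, and function names are illustrative assumptions, not taken from the paper.

```python
import math
import random

def _norm(row):
    s = sum(row)
    return [p / s for p in row]

def plsa_em(detections, vocab, K, iters=200, restarts=5):
    """'Unmix' K object appearance models (topics) from detections given as
    bags of appearance features (words), via pLSA-style EM (a simplified
    stand-in for the paper's Variational Bayes LDA inference).
    Returns (pwz, pzd): per-object feature distributions P(w|z) and
    per-detection object mixtures P(z|d), from the best of several restarts."""
    D, W = len(detections), len(vocab)
    idx = {w: i for i, w in enumerate(vocab)}
    best, best_ll = None, float("-inf")
    for seed in range(restarts):
        rng = random.Random(seed)
        pwz = [_norm([rng.random() + 1e-3 for _ in range(W)]) for _ in range(K)]
        pzd = [_norm([rng.random() + 1e-3 for _ in range(K)]) for _ in range(D)]
        for _ in range(iters):
            acc_wz = [[1e-12] * W for _ in range(K)]
            acc_zd = [[1e-12] * K for _ in range(D)]
            for d, det in enumerate(detections):
                for w, n in det.items():
                    wi = idx[w]
                    # E-step: responsibility of each object for this feature
                    post = [pzd[d][z] * pwz[z][wi] for z in range(K)]
                    s = sum(post)
                    for z in range(K):
                        r = n * post[z] / s
                        acc_wz[z][wi] += r  # M-step accumulators
                        acc_zd[d][z] += r
            pwz = [_norm(row) for row in acc_wz]
            pzd = [_norm(row) for row in acc_zd]
        # keep the restart with the highest log-likelihood
        ll = sum(n * math.log(sum(pzd[d][z] * pwz[z][idx[w]] for z in range(K)))
                 for d, det in enumerate(detections) for w, n in det.items())
        if ll > best_ll:
            best, best_ll = (pwz, pzd), ll
    return best

# Hypothetical example: two objects with distinct color features; the third
# detection spans both objects, as during an occlusion.
vocab = ["red", "green", "blue", "yellow"]
detections = [
    {"red": 10, "green": 10},                        # clean view of object A
    {"blue": 10, "yellow": 10},                      # clean view of object B
    {"red": 5, "green": 5, "blue": 5, "yellow": 5},  # occlusion: A and B merged
]
pwz, pzd = plsa_em(detections, vocab, K=2)
```

On this toy input the two clean detections each concentrate on one recovered object, while the occluded detection receives appreciable weight on both — the "unmixing" that, in the paper, further enables back-projecting assigned features for pixel-level segmentation.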

Original language: English
Pages (from-to): 103-116
Number of pages: 14
Journal: Computer Vision and Image Understanding
Publication status: Published - 1 Jan 2015
Externally published: Yes


  • Generative model
  • Latent Dirichlet Allocation
  • Object recognition
  • Segmentation
  • Unsupervised learning
  • Video surveillance


