Abstract
We propose a novel method for tracking multiple objects within provided regions of interest, i.e. object detections, specifically in cases where a single object gives rise to multiple co-occurring detections (e.g. when objects exhibit unusual size or pose) or a single detection spans multiple objects (e.g. during occlusion). Our method identifies a minimal set of objects that explains the observed features, which are extracted from the regions of interest over a set of frames. Focusing on appearance rather than temporal cues, we treat the video as an unordered collection of frames and "unmix" object appearances from inaccurate detections within a Latent Dirichlet Allocation (LDA) framework, for which we propose an efficient Variational Bayes inference method. Once the objects have been localized and their appearances learned, the posterior distributions can be used to "back-project" the assigned object features onto the image and obtain a pixel-level segmentation. In experiments on challenging datasets, we show that our batch method outperforms state-of-the-art batch and on-line multi-view trackers in terms of the number of identity switches and the proportion of correctly identified objects. We make our software and a new dataset publicly available for non-commercial benchmarking purposes.
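To make the "unmixing" idea concrete, the following is a minimal sketch, not the authors' implementation: each detection's bag-of-features histogram is treated as a "document" and decomposed into per-object appearance "topics" using an off-the-shelf variational-Bayes LDA. The feature extraction, vocabulary size, and number of objects below are placeholder assumptions for illustration only; the paper proposes its own, more efficient Variational Bayes inference.

```python
# Illustrative sketch only (assumed setup, not the paper's method): unmix detection
# feature histograms into per-object appearance components with variational-Bayes LDA.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

n_detections = 200   # regions of interest pooled from all frames (order ignored)
vocab_size = 64      # e.g. quantized colour/texture descriptors (assumed)
n_objects = 5        # hypothesised number of objects, i.e. LDA "topics" (assumed)

# Each row is a bag-of-features count vector for one detection (synthetic here).
X = rng.poisson(lam=2.0, size=(n_detections, vocab_size))

lda = LatentDirichletAllocation(n_components=n_objects,
                                learning_method="batch",  # batch variational Bayes
                                random_state=0)
lda.fit(X)

# Posterior object mixture per detection: a detection spanning two occluding objects
# should spread mass over both components, while duplicate detections of one object
# should share a dominant component.
detection_object_posterior = lda.transform(X)   # shape: (n_detections, n_objects)
object_appearance = lda.components_             # per-object feature distributions
print(detection_object_posterior[:3].round(2))
```

In the paper, these posterior responsibilities over features are what is "back-projected" to the image to obtain a pixel-level segmentation; the sketch above stops at the detection level.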
| Original language | English |
| --- | --- |
| Pages (from-to) | 103-116 |
| Number of pages | 14 |
| Journal | Computer Vision and Image Understanding |
| Volume | 136 |
| DOIs | |
| Publication status | Published - 1 Jan 2015 |
| Externally published | Yes |
Keywords
- Generative model
- Latent Dirichlet Allocation
- Object recognition
- Segmentation
- Unsupervised learning
- Video surveillance