State-of-the-art stixel methods fuse dense stereo and semantic class information, e.g. from a Convolutional Neural Network (CNN), into a compact representation of drivable space, obstacles, and background. However, they do not explicitly differentiate instances within the same class. We investigate several ways to augment single-frame stixels with instance information, which can similarly be extracted by a CNN from the color input. As a result, our novel Instance Stixels method efficiently computes stixels that do account for the boundaries of individual objects, and represents individual instances as grouped stixels that express connectivity. Experiments on Cityscapes demonstrate that including instance information in the stixel computation itself, rather than as a post-processing step, increases instance average precision (AP) with approximately the same number of stixels. Qualitative results confirm that segmentation improves, especially for overlapping objects of the same class. Additional tests with ground truth instead of CNN output show that the approach has potential for even larger gains. Our Instance Stixels software is made freely available for non-commercial research purposes.
|Title of host publication||Proceedings IEEE Symposium Intelligent Vehicles (IV 2019, Paris)|
|Place of Publication||Piscataway, NJ, USA|
|Publication status||Published - 2019|
|Event||IEEE Intelligent Vehicles Symposium 2019 - Paris, France|
|Duration||9 Jun 2019 → 12 Jun 2019|
|Conference||IEEE Intelligent Vehicles Symposium 2019|
|Abbreviated title||IV 2019|
|Period||9/06/19 → 12/06/19|
|Bibliographical note||Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care|
Unless indicated otherwise in the copyright section, the publisher is the copyright holder of this work and the author uses Dutch legislation to make this work publicly available.