Description
Feedforward fully convolutional neural networks currently dominate semantic segmentation of 3D point clouds. Despite their great success, they suffer from the loss of local information at low-level layers, posing significant challenges to accurate scene segmentation and precise object boundary delineation. Prior works either address this issue by post-processing or jointly learn object boundaries to implicitly improve the networks' feature encoding. These approaches often require additional modules that are difficult to integrate into the original architecture. To improve segmentation near object boundaries, we propose a boundary-aware feature propagation mechanism. This mechanism is realized within a multitask learning framework that explicitly guides predicted boundaries toward their true locations. With one shared encoder, our network outputs, in three parallel streams, (i) boundary localization, (ii) prediction of directions pointing to the object's interior, and (iii) semantic segmentation. The predicted boundaries and directions are fused to propagate the learned features and refine the segmentation. We conduct extensive experiments on the S3DIS and SensatUrban datasets against various baseline methods, demonstrating that our proposed approach yields consistent improvements by reducing boundary errors.
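The fusion step described above can be illustrated with a minimal sketch: given a predicted boundary mask and per-point directions toward the object interior, each boundary point borrows the feature of the nearest interior point found by stepping along its predicted direction. This is a hypothetical simplification for illustration only; the function name, the `step` parameter, and the brute-force nearest-neighbor search are assumptions, not the paper's actual implementation.

```python
def propagate_features(points, feats, boundary_mask, directions, step=0.1):
    """Hypothetical sketch of boundary-aware feature propagation.

    points        : list of (x, y, z) coordinates
    feats         : list of per-point features (any type)
    boundary_mask : list of bools, True where a boundary is predicted
    directions    : list of (dx, dy, dz) unit vectors toward the interior
    step          : assumed step size along the predicted direction
    """
    out = list(feats)
    interior = [i for i, b in enumerate(boundary_mask) if not b]
    if not interior:
        return out  # nothing to propagate from
    for i, is_boundary in enumerate(boundary_mask):
        if not is_boundary:
            continue
        # Shift the boundary point along its predicted interior direction.
        target = [p + step * d for p, d in zip(points[i], directions[i])]
        # Copy the feature of the nearest interior point to that location.
        nearest = min(
            interior,
            key=lambda j: sum((a - b) ** 2 for a, b in zip(points[j], target)),
        )
        out[i] = feats[nearest]
    return out
```

For example, a boundary point at the edge of a chair whose direction vector points into the chair would inherit a chair-like feature from its interior neighbor, sharpening the label transition at the object boundary.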
Bibliographical note
Funded by Delft AI Initiative.
| Date made available | 7 Dec 2023 |
|---|---|
| Publisher | TU Delft - 4TU.ResearchData |
Research output
- 1 Conference contribution
Push-the-Boundary: Boundary-Aware Feature Propagation for Semantic Segmentation of 3D Point Clouds
Du, S., İbrahimli, N., Stoter, J., Kooij, J. & Nan, L., 2022, Proceedings - 2022 International Conference on 3D Vision, 3DV 2022. Ceballos, C. (ed.). Prague: IEEE, p. 124-133, 10 p. Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific
Open Access · 8 Citations (Scopus) · 120 Downloads (Pure)