Abstract
Panoramic images are widely used in many scenarios, especially virtual reality and street-view capture. However, they have rarely been applied to street furniture identification, which is usually based on mobile laser scanning point cloud data or conventional 2D images. This study performs semantic segmentation on panoramic images and their transformed versions to separate light poles and traffic signs from the background, using pre-trained Fully Convolutional Networks (FCN). FCN is a foundational deep learning model for semantic segmentation because of its end-to-end training and pixel-wise prediction. In this study, we use an FCN-8s model pre-trained on the Cityscapes dataset and fine-tune it on our own data. The results show that, for both the pre-trained and fine-tuned models, the transformed images yield better predictions than the original panoramic images.
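The fine-tuning workflow described in the abstract (start from a segmentation network pre-trained on a large street-scene dataset, then retrain it on a small set of labelled images for the target classes) can be sketched as follows. This is a minimal illustration only, not the authors' implementation: it uses torchvision's `fcn_resnet50` with its publicly available weights as a stand-in for the paper's FCN-8s/Cityscapes setup, and `street_furniture_loader` is a hypothetical data loader yielding image tensors and per-pixel class masks.

```python
# Illustrative fine-tuning sketch (assumptions: torchvision's fcn_resnet50 stands in
# for the paper's FCN-8s; the data loader and class list are hypothetical).
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

NUM_CLASSES = 3  # background, light pole, traffic sign

# Load a pre-trained model and replace the classification heads so they
# predict our target classes instead of the original label set.
model = fcn_resnet50(weights="COCO_WITH_VOC_LABELS_V1", aux_loss=True)
model.classifier[4] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)
model.aux_classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

criterion = nn.CrossEntropyLoss(ignore_index=255)  # 255 marks unlabelled pixels
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def finetune(model, street_furniture_loader, epochs=20, device="cuda"):
    """Retrain the pre-trained network on a small labelled street-scene set."""
    model.to(device).train()
    for _ in range(epochs):
        for images, masks in street_furniture_loader:
            images, masks = images.to(device), masks.to(device)
            logits = model(images)["out"]        # [B, NUM_CLASSES, H, W]
            loss = criterion(logits, masks)      # pixel-wise cross-entropy
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

The same loop applies to either input variant; only the images fed through the loader change, which is how the panoramic and transformed images can be compared under identical training conditions.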
| Original language | English |
| --- | --- |
| Pages (from-to) | 13-20 |
| Number of pages | 8 |
| Journal | International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives |
| Volume | XLII |
| Issue number | 2/W13 |
| DOIs | |
| Publication status | Published - 2019 |
| Event | 4th ISPRS Geospatial Week 2019, Enschede, Netherlands, 10 Jun 2019 → 14 Jun 2019, https://www.gsw2019.org |
Keywords
- Fully Convolutional Networks
- Object Identification
- Panoramic Images
- Semantic Segmentation
- Street Furniture