面向城市场景异源多时相点云的自动配准

Translated title of the contribution: Automated Registration of Cross-Source and Multi-Temporal Point Clouds in Urban Areas

Zexin Yang, Qin Ye*, Xufei Wang, Ravi Peters

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

2 Citations (Scopus)
28 Downloads (Pure)

Abstract

Objective
Recent advancements in laser scanning and photogrammetry have significantly reduced the cost of acquiring 3D point clouds. Consequently, various types of point clouds have become popular data sources for urban applications. Accurate registration of cross-source and multi-temporal point clouds must be ensured before applications based on 3D point clouds can be developed. However, this is a challenging task owing to (1) the large amount of data to be considered, (2) the wide discrepancy in characteristics between cross-source point clouds, and (3) the significant changes in a scene represented by multi-temporal point clouds. These data characteristics hamper the extraction and matching of registration primitives, resulting in poor performance of marker-free registration techniques. In this paper, we propose an automated, efficient, and marker-free method for registering cross-source and multi-temporal point clouds in urban areas.

Methods
The proposed registration method comprises three stages: keypoint generation, correspondence matching, and transformation estimation (a minimal code sketch of the three stages follows this section).
(1) Keypoint generation. We generate object-level virtual keypoints as registration primitives rather than directly extracting local features from the point clouds, which are redundant and sensitive to outliers and missing data. Specifically, the ground points are first filtered out via the cloth simulation filtering algorithm. The remaining points are decomposed into planar segments by fitting planes in a region-growing manner. Finally, the virtual keypoints are determined as the endpoints of the intersection line segments of adjacent plane pairs.
(2) Correspondence matching. First, local triangles are constructed using the generated virtual keypoints as vertices to encode the relative spatial relationships among keypoints within a point cloud. Second, the triangle sets of both point clouds are mapped to a feature space in which each triangle becomes a 3D feature point. For each feature point in the source point cloud, we determine its closest point in the target point cloud, forming triangle pairs between the two point clouds. Finally, we propose an improved global matching approach with linear time complexity to extract the correspondences encoded in the triangle pairs.
(3) Transformation estimation. As cross-source and multi-temporal point clouds are typically well leveled, registration can be achieved by aligning the two point clouds horizontally and translating them vertically. We use the horizontal coordinates of the correspondences to estimate the 2D horizontal transformation and their vertical coordinates to calculate the vertical translation.
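The abstract itself contains no code; the following is a minimal Python sketch of the three stages under explicit assumptions. The plane-intersection helper covers only the geometric core of stage (1) (clipping the line to a finite segment is omitted); the sorted-side-length triangle feature, the k-nearest-neighbor triangle construction, and the greedy vote that stands in for the paper's linear-time global matching are illustrative choices, not the authors' exact formulation, and all function names are hypothetical.

    # Hedged sketch of the three-stage pipeline; illustrative, not the
    # authors' implementation.
    import numpy as np
    from itertools import combinations
    from scipy.spatial import cKDTree

    def plane_intersection_line(n1, d1, n2, d2):
        """Intersection line of planes n·x = d (stage 1 core; planes not parallel)."""
        direction = np.cross(n1, n2)
        # Solve for the one point on both planes with no component along the line.
        A = np.vstack([n1, n2, direction])
        point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
        return point, direction / np.linalg.norm(direction)

    def local_triangles(kp, k=5):
        """Form triangles from each keypoint and pairs of its k nearest neighbors."""
        _, idx = cKDTree(kp).query(kp, k=k + 1)   # column 0 is the point itself
        tris = {tuple(sorted((row[0], a, b)))
                for row in idx for a, b in combinations(row[1:], 2)}
        return np.array(sorted(tris))

    def triangle_features(kp, tris):
        """Map each triangle to a 3D feature point: its sorted side lengths."""
        p = kp[tris]                              # (m, 3, 3)
        sides = np.stack([np.linalg.norm(p[:, 0] - p[:, 1], axis=1),
                          np.linalg.norm(p[:, 1] - p[:, 2], axis=1),
                          np.linalg.norm(p[:, 2] - p[:, 0], axis=1)], axis=1)
        return np.sort(sides, axis=1)

    def _order_vertices(kp, tri):
        """Order a triangle's vertices by the length of the opposite side."""
        p = kp[list(tri)]
        opposite = [np.linalg.norm(p[1] - p[2]),
                    np.linalg.norm(p[0] - p[2]),
                    np.linalg.norm(p[0] - p[1])]
        return [tri[i] for i in np.argsort(opposite)]

    def match_keypoints(src_kp, tgt_kp, k=5):
        """Stage 2 stand-in: nearest-feature triangle pairs vote for keypoint pairs."""
        src_tris, tgt_tris = local_triangles(src_kp, k), local_triangles(tgt_kp, k)
        feat_tree = cKDTree(triangle_features(tgt_kp, tgt_tris))
        _, nn = feat_tree.query(triangle_features(src_kp, src_tris))
        votes = {}
        for s_tri, t_tri in zip(src_tris, tgt_tris[nn]):
            for s, t in zip(_order_vertices(src_kp, s_tri),
                            _order_vertices(tgt_kp, t_tri)):
                votes[(s, t)] = votes.get((s, t), 0) + 1
        # Greedily keep the highest-voted one-to-one correspondences.
        pairs, used_s, used_t = [], set(), set()
        for (s, t), _ in sorted(votes.items(), key=lambda kv: -kv[1]):
            if s not in used_s and t not in used_t:
                pairs.append((s, t)); used_s.add(s); used_t.add(t)
        return np.array(pairs)

    def estimate_leveled_transform(src, tgt):
        """Stage 3: 2D rigid transform (rotation about z, xy shift) plus z shift."""
        s_xy, t_xy = src[:, :2], tgt[:, :2]
        s_c, t_c = s_xy.mean(axis=0), t_xy.mean(axis=0)
        U, _, Vt = np.linalg.svd((s_xy - s_c).T @ (t_xy - t_c))
        R2 = Vt.T @ U.T
        if np.linalg.det(R2) < 0:                 # enforce a proper rotation
            Vt[1] *= -1
            R2 = Vt.T @ U.T
        R = np.eye(3)
        R[:2, :2] = R2
        t = np.r_[t_c - R2 @ s_c, np.median(tgt[:, 2] - src[:, 2])]
        return R, t                               # aligned = (R @ src.T).T + t

Given matched pairs, R, t = estimate_leveled_transform(src_kp[pairs[:, 0]], tgt_kp[pairs[:, 1]]) aligns the source to the target; restricting the rotation to the vertical axis reflects the well-leveled assumption stated in the abstract.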
Results and Discussions
We evaluated the effectiveness of the proposed method using large-scale real-world urban point clouds. The experimental data consist of six cross-source and multi-temporal point clouds, including three airborne light detection and ranging (LiDAR) point clouds and three photogrammetric point clouds, which cover an urban area of 1.8 km² in Rotterdam, the Netherlands. Each point cloud comprises a large number of points (approximately 20‒60 million points per point cloud; refer to Table 1 for details). Additionally, as the point clouds were collected over a long period of time, many of the objects in the scene have changed considerably. These two characteristics make the data suitable for a comprehensive evaluation of automatic marker-free registration methods.
To evaluate the registration results qualitatively, we visualized a randomly selected region (Fig. 7) and three manually selected buildings with varying architectural styles (Fig. 8). Despite the differing characteristics of the cross-source point clouds and the significant changes in the scenes, the proposed method accurately aligned all five registration pairs formed by the six experimental point clouds. To evaluate the registration results quantitatively, we calculated both matrix-based errors (i.e., rotation and translation errors) and pointwise errors; a sketch of these metrics follows the abstract. The evaluation is summarized in Table 4. Our automatic registration results have an average pointwise error of 6.4 cm, whereas the average matrix-based errors are 0.2′ for rotation and 7.4 cm for translation. Furthermore, despite the massive size of the experimental point clouds, the proposed approach required only 105.7 s on average to achieve pairwise registration. Both the qualitative and quantitative results demonstrate the effectiveness of the proposed method for registering cross-source and multi-temporal urban point clouds.

Conclusions
A fully automated marker-free registration approach is presented for cross-source and multi-temporal point clouds in urban environments. Object-level virtual keypoints are generated from urban point clouds as registration primitives, thereby overcoming the challenge of identifying valid corresponding features. By encoding rigid-body spatial relations among the generated virtual keypoints, we establish correspondences between the source and target point clouds, enabling efficient matching in large-scale urban scenes. Experiments on real-world data demonstrate that the proposed method can automatically, accurately, and efficiently register cross-source and multi-temporal point clouds in urban areas, indicating its practical utility. In the future, we would like to collect more data to test the robustness of the proposed method. Moreover, we intend to study the potential of the proposed matching algorithm for the fusion of general multi-source data, e.g., aligning 3D building point clouds with 2D building footprints.
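The reported metrics are standard; as a hedged reference, and assuming the usual definitions (the paper's exact formulas are given in the full text), the rotation error can be computed as the geodesic angle between rotation matrices (matching the 0.2′ arcminute figure), and the pointwise error as the mean residual after applying the estimated transform. All names below are illustrative.

    import numpy as np

    def rotation_error_arcmin(R_est, R_ref):
        """Geodesic angle between estimated and reference rotations, in arcminutes."""
        cos_theta = (np.trace(R_est.T @ R_ref) - 1.0) / 2.0
        return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))) * 60.0

    def translation_error(t_est, t_ref):
        """Euclidean distance between estimated and reference translations."""
        return np.linalg.norm(t_est - t_ref)

    def pointwise_error(src, tgt, R, t):
        """Mean distance between corresponding points after registration."""
        return np.linalg.norm((R @ src.T).T + t - tgt, axis=1).mean()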

Translated title of the contribution: Automated Registration of Cross-Source and Multi-Temporal Point Clouds in Urban Areas
Original language: Chinese
Article number: 1010004
Number of pages: 11
Journal: Zhongguo Jiguang/Chinese Journal of Lasers
Volume: 50
Issue number: 10
DOIs
Publication status: Published - 2023

Bibliographical note

Green Open Access added to TU Delft Institutional Repository as part of the Taverne project 'You share, we take care!' (https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses the Dutch legislation to make this work public.

Keywords

  • cross-source and multitemporal point clouds
  • kinematics of rigid bodies
  • light detection and ranging
  • photogrammetric point clouds
  • point cloud registration
  • remote sensing
