Journal of Spatial Science, pages 1-19

Registration method based on physical scale matching for 3D laser point cloud and high-resolution images

Publication type: Journal Article
Publication date: 2025-02-05
Scimago: Q2
SJR: 0.460
CiteScore: 5.0
Impact factor: 1
ISSN: 1449-8596, 1836-5655
Li X., Wang C., Zeng Z.
2024-09-01 citations by CoLab: 6
Due to the limited computational resources of the onboard computing devices of autonomous vehicles, the development of lightweight 3D object detectors is essential. Point-based detectors that progressively sample raw point clouds avoid numerous redundant computations and facilitate high-speed 3D object detection. Farthest point sampling based on Euclidean distance (D-FPS) is frequently used in point-based 3D detectors to reduce computational overhead. D-FPS ensures uniform sampling that covers the point cloud space as completely as possible, but the ratio of foreground to background points does not change significantly in the sampled result, so foreground objects do not receive sufficient attention. In addition, road reflection points are numerous and lie close to foreground objects, which reduces the sampling accuracy for faint objects. We propose weighted farthest point sampling based on Euclidean distance (W-DFPS), which selectively discards some road reflection points during sampling, thereby increasing the weight of foreground points in the sampled result. This reduces the likelihood that faint objects are lost, so that even a small number of sampled points can cover most foreground objects. W-DFPS replaces D-FPS in the instance-aware single-stage detector (IA-SSD), and the network structure is slightly modified, yielding the weighted-sampling single-stage detector (WS-SSD). We evaluate WS-SSD on the KITTI dataset with a single A100 GPU. When the number of sampling points in the first module of WS-SSD matches that of IA-SSD, pedestrian detection accuracy improves by an average of 4.06 over IA-SSD. Even when the number of sampling points in the first module of WS-SSD is reduced to 25% of IA-SSD's, object detection accuracy remains competitive, and the detector achieves a state-of-the-art inference speed of 150.76 frames per second (FPS), a 46% improvement over IA-SSD.
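As an illustration of the sampling idea, here is a minimal NumPy sketch of distance-based farthest point sampling in which a per-point weight down-scales suspected road reflection points; the weighting rule, the ground-height threshold, and all parameter values are illustrative assumptions, not the authors' exact W-DFPS formulation.

```python
import numpy as np

def weighted_fps(points, weights, n_samples):
    """Farthest point sampling where each candidate's distance to the
    already-selected set is scaled by a per-point weight (hypothetical
    weighting for illustration; the paper's rule may differ)."""
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=np.int64)
    dist = np.full(n, np.inf)            # distance to nearest selected point
    selected[0] = np.random.randint(n)   # arbitrary seed point
    for i in range(1, n_samples):
        d = np.linalg.norm(points - points[selected[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        # weights < 1 (e.g. suspected road reflections) shrink the effective
        # distance, so those points are picked less often
        selected[i] = np.argmax(dist * weights)
    return selected

# toy usage: down-weight points below an assumed ground height of 0.2 m
pts = np.random.rand(10000, 3) * np.array([50.0, 50.0, 3.0])
w = np.where(pts[:, 2] < 0.2, 0.3, 1.0)
idx = weighted_fps(pts, w, 512)
```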
Yuan Y., Wu Y., Fan X., Gong M., Ma W., Miao Q.
2024-09-01 citations by CoLab: 43
Guo W., Huang X., Qi B., Ren X., Chen H., Chen X.
2024-08-01 citations by CoLab: 13
The aerospace industry faces critical demands for automated and intelligent grinding of welds on curved surfaces. Combining 3D vision technology with a robotic grinding system offers a feasible and promising solution, but local grinding path estimation remains a significant challenge due to the absence of a robust and fast method. To address this challenge, this article proposes a vision-guided method for robotic grinding of spatially curved weld beads. Initially, a robust local point cloud descriptor is defined to identify and segment weld beads, generating regions of interest (ROI). Subsequently, Intrinsic Shape Signatures (ISS) key point detection is employed to extract points representing the trend of the ROI, followed by Non-Uniform Rational B-Spline (NURBS) curve fitting for grinding path planning. Finally, an optimization objective function based on robot manipulability and pose difference is developed to enhance machining stability. Curved weld grinding experiments on rocket skin demonstrate that the method is highly accurate and robust, with a removal error within 0.2 mm. The method is suitable for semi-precision machining or as a pre-stage in the high-precision machining of large workpieces.
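For readers who want to prototype the keypoint-and-curve stage, the sketch below uses Open3D's ISS keypoint detector and a SciPy smoothing B-spline as a stand-in for the NURBS fit; the input file name, the keypoint-ordering heuristic, and the smoothing parameter are assumptions, and the paper's descriptor-based ROI segmentation and manipulability-based optimization are not reproduced.

```python
import numpy as np
import open3d as o3d
from scipy.interpolate import splprep, splev

# Assumed input: "roi.ply" holds an already-segmented weld-bead region of interest
roi = o3d.io.read_point_cloud("roi.ply")

# ISS keypoints capture the trend of the weld bead (Open3D built-in)
kp = np.asarray(o3d.geometry.keypoint.compute_iss_keypoints(roi).points)

# Order keypoints along the dominant direction before curve fitting
# (a simple PCA heuristic; the paper does not specify this step)
axis = np.linalg.svd(kp - kp.mean(0), full_matrices=False)[2][0]
kp = kp[np.argsort(kp @ axis)]

# Fit a smoothing cubic B-spline as a stand-in for the NURBS grinding path
tck, _ = splprep(kp.T, s=1e-3)
u = np.linspace(0.0, 1.0, 200)
path = np.stack(splev(u, tck), axis=1)   # 200 waypoints along the bead
```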
Zhang J., Gu J., Hu T., Wang B., Xia Z.
2024-06-01 citations by CoLab: 5
Automated robots are emerging as a solution for labor-intensive fruit orchard management. Three-dimensional (3D) reconstruction of tree branches is a fundamental requirement for robots to perform tasks such as pruning and fruit harvesting. Current branch sensing methods often rely on planar segmentation with limited 3D information or on computationally expensive point cloud segmentation, which may not be suitable for natural orchards with occluded tree branches. This study proposes a novel scheme that reconstructs occluded branches from RGB-D (Red-Green-Blue-Depth) images by integrating point clouds converted from planar segmentation masks and depth images, extending existing 2D branch sensing techniques to 3D by leveraging multi-view information. The deep learning models DeepLabV3+ and Pix2pix are employed separately to generate the segmentation masks, and Fast Global Registration (FGR) is used to register the multi-view point clouds. The results show that the output point clouds have at least a 24% increase in the number of corresponding points after FGR. Furthermore, the time cost per hundred corresponding points is reduced by 85% and 69% with the DeepLabV3+-based and Pix2pix-based schemes, respectively, compared to the PointNet++ approach. These findings indicate that the proposed scheme significantly improves the sensing of occluded branches in terms of output richness and computational efficiency, making it applicable to natural orchard working spaces.
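A minimal Open3D sketch of the FGR registration step is shown below, assuming two branch point clouds have already been converted from the segmentation masks and depth images; the voxel size, normal and FPFH radii, and correspondence distance are illustrative defaults, not the study's settings.

```python
import open3d as o3d

def fgr_register(source, target, voxel=0.01):
    """Fast Global Registration of two multi-view branch point clouds."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    src_down, src_fpfh = preprocess(source)
    tgt_down, tgt_fpfh = preprocess(target)
    result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh,
        o3d.pipelines.registration.FastGlobalRegistrationOption(
            maximum_correspondence_distance=voxel * 1.5))
    return result.transformation  # 4x4 rigid transform mapping source onto target
```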
Yuan Y., Wu Y., Lei J., Hu C., Gong M., Fan X., Ma W., Miao Q.
2024-01-05 citations by CoLab: 9
Chen C., Wu H., Yang Z., Li Y.
2023-06-01 citations by CoLab: 6
Light detection and ranging (LiDAR)-derived point clouds have become the standard spatial data for digital terrain model (DTM) construction; however, oversampling produces huge volumes of data with substantial redundancy, which causes considerable inconvenience in downstream processing. To this end, this paper proposes an adaptive coarse-to-fine clustering and terrain-feature-aware method to reduce data points in the context of terrain modeling. First, a coarse-to-fine clustering method that accounts for terrain complexity is developed to adaptively cluster LiDAR terrain points. Then, according to the geometric properties of terrain breaklines, a terrain-feature-aware multi-strategy method is presented to pick representative points in the clusters. Finally, important boundary points, including inflection points on the boundary curve and critical points on terrain features, are further selected. The proposed method is compared with seven state-of-the-art point cloud simplification methods under six data reduction ratios on six plots with different terrain characteristics. Results indicate that the proposed method achieves a good balance between terrain-feature preservation and uniform distribution of data points. Compared to the state-of-the-art methods, it reduces the average root mean square errors (absolute errors) of the DTMs by 12.2%-51.7% (7.69%-83.8%) on the six plots. Moreover, it yields mean terrain slope and terrain roughness values that are closer to the references. In short, the newly developed method can serve as an alternative tool for selecting representative points from huge remote-sensing-derived point clouds in the context of DTM production.
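As a deliberately crude stand-in for the clustering-and-representative-point idea, the sketch below keeps one point per planar grid cell, chosen as the point nearest the cell centroid; the paper's adaptive coarse-to-fine clustering, terrain-feature awareness, and boundary-point selection are not reproduced, and the cell size is an assumption.

```python
import numpy as np

def simplify_terrain(points, cell=1.0):
    """Keep one representative LiDAR point per XY grid cell: the point
    closest to the cell centroid (illustrative simplification only)."""
    keys = np.floor(points[:, :2] / cell).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    kept = []
    for c in np.unique(inverse):
        idx = np.where(inverse == c)[0]
        centroid = points[idx].mean(axis=0)
        kept.append(idx[np.argmin(np.linalg.norm(points[idx] - centroid, axis=1))])
    return points[np.array(kept)]
```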
Wang Y., Hu X., Zhou T., Ma Y., Li Z.
Transactions in GIS scimago Q2 wos Q2
2023-05-19 citations by CoLab: 3
Facade structures derived from three-dimensional (3D) point cloud data (PCD) and two-dimensional (2D) optical images provide significant information for 3D building modeling. However, current methods lack a unified data model for integrating 2D imagery pixels and 3D PCD, leading to complex implementation, heavy computation, and inefficiency. This study proposes an efficient facade structure extraction method for building facades. First, based on a conversion matrix, 2D image and 3D PCD information are merged to build an image-based laser point cloud (ILPC) data model. Second, both the line segment detection and random sample consensus algorithms are improved according to the structure and characteristics of the ILPC data model. Finally, building facade structures are extracted and optimized. The proposed method extracts facade structures accurately and efficiently, supported by the rich information in the ILPC data model. It extracts fine building facade structures with accuracy above 0.68 in all experiments and recall up to 0.81, outperforming the Wang method. The extracted structures provide valuable support for numerous fields, such as 3D building modeling and building information modeling (BIM) construction.
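Conceptually, building an image-based laser point cloud amounts to projecting each laser point into the image with the conversion (projection) matrix and attaching the pixel attributes. The NumPy sketch below illustrates that step, assuming a 3x4 projection matrix; the richer structure of the actual ILPC model and the improved line segment detection and RANSAC steps are not covered.

```python
import numpy as np

def build_ilpc(points, image, P):
    """Attach image RGB to each 3D laser point via a 3x4 projection matrix P
    (assumed known); returns an N x 6 array of XYZRGB 'ILPC' records."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])   # N x 4
    uvw = homo @ P.T                                            # N x 3
    uv = uvw[:, :2] / uvw[:, 2:3]                               # pixel coordinates
    h, w = image.shape[:2]
    valid = (uvw[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
            & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    rgb = np.zeros((points.shape[0], 3), dtype=float)
    cols = uv[valid].astype(int)
    rgb[valid] = image[cols[:, 1], cols[:, 0]]                  # row = v, col = u
    return np.hstack([points, rgb])
```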
Arora M., Wiesmann L., Chen X., Stachniss C.
Robotics and Autonomous Systems scimago Q1 wos Q2
2023-01-01 citations by CoLab: 17
A clean and reliable map of the environment is key for a variety of robotic tasks, including localization, path planning, and navigation. Dynamic objects are an inherent part of our world, but their presence often deteriorates the performance of various mapping algorithms. This makes it not only important but necessary to remove dynamic points from the map before it can be used for tasks such as path planning. In this paper, we address the problem of building maps of the static aspects of the world by detecting and removing dynamic points from the source point clouds. We target a map cleaning approach that removes the dynamic points and maintains a high-quality map of the static part of the world. To this end, we propose a novel offline ground segmentation method and integrate it into OctoMap to better distinguish between moving objects and the static road background. We evaluate our approach on SemanticKITTI for both dynamic object removal and ground segmentation, as well as on the Apollo dataset. The evaluation results show that our method outperforms the baseline methods in both tasks and generates clean maps across different datasets without any change in parameters.
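As a baseline illustration of the ground/non-ground split that precedes dynamic-point handling, the sketch below fits a single ground plane with Open3D's RANSAC plane segmentation; the paper's offline ground segmentation and its integration into OctoMap are considerably more involved, and the file name and thresholds here are placeholders.

```python
import open3d as o3d

# Placeholder input scan; thresholds are illustrative, not the paper's values
scan = o3d.io.read_point_cloud("scan.pcd")
plane_model, inliers = scan.segment_plane(distance_threshold=0.15,
                                          ransac_n=3,
                                          num_iterations=1000)
ground = scan.select_by_index(inliers)                   # static road surface
non_ground = scan.select_by_index(inliers, invert=True)  # candidates for dynamic check
```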
Pexman K., Robson S.
Aircraft wing manufacture is becoming increasingly digitalised. For example, it is becoming possible to produce on-line digital representations of individual structural elements, components, and tools as they are deployed during assembly. When monitoring a manufacturing environment, imaging systems can be used to track objects as they move about the workspace, comparing actual positions, alignments, and spatial relationships with the digital representation of the manufacturing process. Active imaging systems such as laser scanners and laser trackers can capture measurements within the manufacturing environment, from which information about both the overall stage of manufacture and the progress of individual tasks can be deduced. This paper is concerned with the in-line extraction of spatial information such as the location and orientation of drilling templates, which are used with hand drilling tools to ensure drilled holes are accurately located. In this work, a construction-grade terrestrial laser scanner, the Leica RTC360, is used to capture an example aircraft wing section in mid-assembly from several scan locations. Point cloud registration uses 1.5” white matte spherical targets that are interchangeable with the SMR targets used by the Leica AT960 MR laser tracker, ensuring that scans are connected to an established metrology control network used to define the coordinate space. Point cloud registration was achieved to sub-millimetre accuracy when compared to the laser tracker network. The locations of drilling templates on the surface of the wing skin are automatically extracted from the captured and registered point clouds. When compared to laser-tracker-referenced hole centres, the laser scanner drilling template holes agree to within 0.2 mm.
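At its core, the target-based registration described here reduces to estimating a rigid transform from corresponding sphere-target centres; a generic least-squares (Kabsch) solution is sketched below. This is a textbook formulation rather than the Leica workflow used in the paper, and the inputs are assumed to be matched centre coordinates from the scan and the tracker network.

```python
import numpy as np

def rigid_from_targets(scan_centers, tracker_centers):
    """Least-squares rigid transform (R, t) mapping target centres measured
    in the scanner frame onto the laser-tracker control network."""
    a = np.asarray(scan_centers, dtype=float)     # N x 3, scanner frame
    b = np.asarray(tracker_centers, dtype=float)  # N x 3, tracker frame
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    H = (a - ca).T @ (b - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t
```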
Hui Z., Yong-Jian Z., Lei Z., Xiao-Xue J., Li-Ying L.
Frontiers in Physics scimago Q2 wos Q2 Open Access
2022-11-03 citations by CoLab: 4
With the increase of point cloud scale, the time required by traditional ICP-based point cloud registration methods grows dramatically and cannot meet the registration requirements of large-scale point clouds. This paper studies a fast registration technique for large-scale point clouds based on virtual viewpoint image generation. First, a projection image of the color point cloud is generated from a virtual viewpoint. Then, ORB features are extracted and the rotation and translation matrices are calculated. The experimental results show that the registration time of the proposed method is about 1 s for point clouds of 300,000 to 2 million points, a 17–258 times improvement over traditional ICP registration, while the registration error is reduced by 80%, from 5.0 with ICP to 1.0. This paper provides a new idea and method for large-scale color point cloud registration.
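The feature-matching stage can be prototyped with OpenCV as sketched below: ORB features are matched between two virtual-viewpoint projections and a relative pose is recovered from the essential matrix. The image names and camera intrinsics are placeholders, and the paper's step of mapping the image-space result back to a 3D point cloud transform is omitted.

```python
import cv2
import numpy as np

img1 = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)  # virtual-viewpoint renders
img2 = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)  # (placeholder file names)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)    # relative rotation/translation
```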
Luo R., Zhou Z., Chu X., Ma W., Meng J.
2022-08-01 citations by CoLab: 30
Aiming at visual deformation monitoring of temporary construction scaffold structures, a 3D deformation measurement method based on multi-threaded LiDAR point clouds is proposed. The method consists of two parts: point cloud alignment and scaffold tube axis modeling. Point cloud alignment is performed based on the spatial geometry of the normal vectors and intersection points of homologous feature planes in the scene. The scaffold tube point clouds are extracted using random sample consensus (RANSAC), and the scaffold tube axis model is further obtained by segmental noise reduction and least-squares fitting. Finally, 3D deformation monitoring of the scaffold is realized by comparing the tube axis models at different times. The maximum relative error of scaffold deformation is 9.09%. The method provides a new technique for the daily monitoring of construction scaffold groups and can be extended to vehicle-mounted LiDAR applications. • A point cloud alignment algorithm based on planar features is investigated. • A 3D deformation virtualization monitoring model of scaffolds was developed. • The maximum error in the measurement experiment was 9.09%.
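After RANSAC extraction and noise reduction, each tube axis can be obtained by a least-squares 3D line fit; a minimal PCA-based version is sketched below, assuming the input is the point cloud of a single segmented tube.

```python
import numpy as np

def fit_tube_axis(tube_points):
    """Least-squares 3D line fit for one scaffold tube's point cloud.
    Returns a point on the axis (the centroid) and a unit direction."""
    pts = np.asarray(tube_points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid, full_matrices=False)
    direction = Vt[0] / np.linalg.norm(Vt[0])
    return centroid, direction
```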
Zhang Y., Cui Z.
IEEE Sensors Journal scimago Q1 wos Q2
2022-07-01 citations by CoLab: 6
Registration between terrestrial LiDAR and optical imagery plays a crucial role in information fusion. However, it is difficult to find reliable correlations between the different feature information of optical imagery and LiDAR point clouds. Therefore, to achieve high-precision registration of heterogeneous sensors, a method based on the spherical epipolar line and spherical absolute orientation is proposed in this paper. The method first projects the LiDAR point clouds into spherical images based on the spherical imaging model and derives the spherical epipolar line equation. Then the relative and absolute orientations of the spherical LiDAR images and the optical images are computed from manually selected control points. Finally, based on Harris corner extraction, combined with the geometric constraints of the spherical epipolar line and absolute orientation, dense matching between optical and LiDAR images is achieved, and all matched points are used as control points for registration to improve the accuracy of manual point-selection registration. Multiple sets of test data were acquired outdoors using a FARO Focus S laser scanner, a Z+F IMAGER 5010C laser scanner, and a Ladybug5+ panoramic camera. The experimental results show that the method is practical and improves the accuracy of manual point-selection registration, with the degree of improvement related to the number of successfully matched corner points.
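The first step, projecting the LiDAR point cloud into a spherical image, can be sketched as an equirectangular range-image projection as below; the image resolution is an assumed parameter, and the epipolar-line derivation, orientation, and dense matching stages of the paper are not covered.

```python
import numpy as np

def spherical_project(points, width=2048, height=1024):
    """Project LiDAR points onto an equirectangular spherical range image
    (resolution is an assumed parameter; attributes other than range are ignored)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                                   # [-pi, pi]
    elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    u = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)
    image = np.zeros((height, width), dtype=float)
    image[v, u] = r                     # range image; the last point per pixel wins
    return image
```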
