Neighbor-Aware Information Fusion for Point Cloud Classification and Segmentation
In recent years, with the development of computer vision, machine learning, and deep learning, together with the widespread availability of large-scale 3D data acquisition devices, 3D point cloud processing has become increasingly important. It is widely applied in fields such as object recognition, robot navigation, building information modeling (BIM), and urban planning. As the volume of acquired 3D point cloud data grows, processing it accurately and efficiently has become a challenge for existing point cloud processing models. To improve the accuracy of point cloud classification and segmentation, this study proposes an improved classification and segmentation model based on neighbor-aware information fusion. The model includes a Fusion Neighbor Information Feature Enhancement (FNIFE) module, which connects the points in a local neighborhood and derives the feature of the current point from the feature relationships among its neighbors. By enriching each point's feature representation, the module reduces the feature loss introduced by feature extraction and improves classification accuracy. The model also includes a Reverse Transmission of Point Features (RToPF) module, in which the interpolation parameters are adjusted so that the enhanced feature information is transmitted effectively, improving both the accuracy and the computation speed of the model. Finally, to further improve classification accuracy, a module built on the X-Conv operator replaces the max-pooling in the original network, reducing the feature loss generated during feature extraction. Comparative experiments are conducted on the ModelNet40, ShapeNet, S3DIS, and ScanNet datasets. The results show that the proposed model reaches an overall accuracy of 92.4% and a mean accuracy of 90.2% on the point cloud classification task, and a mean intersection over union (mIoU) of 84.5% on the point cloud segmentation task, outperforming state-of-the-art models on both tasks.
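To illustrate the neighbor-aware fusion idea at a high level, the sketch below shows an EdgeConv-style local aggregation: for each point, the k nearest neighbors are gathered, relative (neighbor minus center) features are concatenated with the center feature, and a shared MLP fuses them back onto the point. This is only a minimal illustrative sketch, not the paper's actual FNIFE module; the class name `NeighborFeatureFusion`, the value of k, the tensor shapes, and the MLP layout are all assumptions made for the example.

```python
# Hypothetical sketch of neighbor-aware feature fusion (not the paper's exact FNIFE module).
import torch
import torch.nn as nn


class NeighborFeatureFusion(nn.Module):
    """Enrich each point's feature with relations to its k nearest neighbors."""

    def __init__(self, in_dim: int, out_dim: int, k: int = 16):
        super().__init__()
        self.k = k
        # Shared MLP applied to [center feature, neighbor feature - center feature].
        self.mlp = nn.Sequential(
            nn.Conv2d(2 * in_dim, out_dim, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_dim),
            nn.ReLU(inplace=True),
        )

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz:   (B, N, 3)  point coordinates
        # feats: (B, N, C)  per-point features
        B, N, C = feats.shape
        # k-NN indices in coordinate space: (B, N, k)
        dists = torch.cdist(xyz, xyz)                                # (B, N, N)
        knn_idx = dists.topk(self.k, dim=-1, largest=False).indices
        # Gather neighbor features: (B, N, k, C)
        idx = knn_idx.unsqueeze(-1).expand(-1, -1, -1, C)
        neighbors = torch.gather(feats.unsqueeze(1).expand(-1, N, -1, -1), 2, idx)
        center = feats.unsqueeze(2).expand(-1, -1, self.k, -1)
        # Relative (edge) features concatenated with the center feature.
        edge = torch.cat([center, neighbors - center], dim=-1)       # (B, N, k, 2C)
        edge = edge.permute(0, 3, 1, 2)                              # (B, 2C, N, k)
        fused = self.mlp(edge).max(dim=-1).values                    # (B, out_dim, N)
        return fused.transpose(1, 2)                                 # (B, N, out_dim)


if __name__ == "__main__":
    pts = torch.randn(2, 1024, 3)
    f = torch.randn(2, 1024, 64)
    print(NeighborFeatureFusion(64, 128)(pts, f).shape)  # torch.Size([2, 1024, 128])
```

In this sketch the per-neighbor responses are reduced with a max over the neighborhood dimension; the paper instead argues for replacing max-pooling in the backbone with an X-Conv-based module to limit feature loss, so the reduction step here should be read only as a placeholder.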