DeMatchNet: A Unified Framework for Joint Dehazing and Feature Matching in Adverse Weather Conditions
Recent advances in image processing have brought significant progress; however, adverse weather conditions such as haze, snow, and rain degrade image quality and, in turn, the performance of deep learning-based image matching algorithms. Most existing methods restore degraded images as a separate step before downstream tasks such as target detection, which increases network complexity and may discard potentially crucial information. To better integrate image restoration and image matching, this paper presents DeMatchNet, an end-to-end framework that seamlessly combines the feature fusion attention network for single image dehazing (FFA-Net) as the dehazing module with the detector-free local feature matching with transformers (LoFTR) as the feature matching module. The framework first introduces an attention-based feature fusion module (FFM) that merges the original hazy features with the dehazed features, so that the resulting features not only offer improved visual quality but also provide higher-quality input for subsequent feature matching. A feature alignment module (FA) then adjusts the scale and semantics of the fused features, enabling them to be shared efficiently with the LoFTR module. This deep collaboration between dehazing and feature matching significantly reduces computational redundancy and improves overall performance. Experimental results on synthetic hazy datasets (built from MegaDepth and ETH3D) and on real-world hazy data show that DeMatchNet outperforms existing methods in matching accuracy and robustness, demonstrating its superiority under challenging weather conditions.
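To make the pipeline concrete, the sketch below illustrates how an attention-based feature fusion module and a feature alignment module of this kind could be wired up in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: the channel-attention gating, the 1x1 projection with bilinear resizing, the channel counts, and the module names (FeatureFusionModule, FeatureAlignmentModule) are assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn


class FeatureFusionModule(nn.Module):
    """Illustrative attention-based fusion of hazy and dehazed feature maps.

    Channel attention over the concatenated features is assumed here for
    illustration; the paper's FFM may use a different attention form.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global context per channel
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),                                  # per-channel fusion weights in [0, 1]
        )

    def forward(self, hazy_feat: torch.Tensor, dehazed_feat: torch.Tensor) -> torch.Tensor:
        w = self.gate(torch.cat([hazy_feat, dehazed_feat], dim=1))
        # Weighted blend: the attention weights decide how much restored vs.
        # original hazy information each channel retains.
        return w * dehazed_feat + (1.0 - w) * hazy_feat


class FeatureAlignmentModule(nn.Module):
    """Illustrative scale/semantic alignment before the matching backbone.

    A 1x1 projection followed by bilinear resizing stands in for the FA
    module; the target resolution and channel count are assumptions.
    """

    def __init__(self, in_channels: int, out_channels: int, out_size: tuple):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.out_size = out_size

    def forward(self, fused_feat: torch.Tensor) -> torch.Tensor:
        x = self.proj(fused_feat)
        return nn.functional.interpolate(
            x, size=self.out_size, mode="bilinear", align_corners=False
        )


if __name__ == "__main__":
    hazy = torch.randn(1, 64, 120, 160)      # hypothetical hazy feature map
    dehazed = torch.randn(1, 64, 120, 160)   # hypothetical dehazed feature map
    fused = FeatureFusionModule(64)(hazy, dehazed)
    aligned = FeatureAlignmentModule(64, 256, (60, 80))(fused)
    print(fused.shape, aligned.shape)        # (1, 64, 120, 160) and (1, 256, 60, 80)
```

In this sketch the aligned tensor would play the role of the shared feature map handed to the matching module, which is where the claimed reduction in computational redundancy would come from: the dehazing and matching branches operate on one fused representation instead of recomputing features from a restored image.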