MFF-Net: A multi-view feature fusion network for generalized forgery image detection
Publication type: Journal Article
Publication date: 2025-08-01
Scimago: Q1
WoS: Q1
SJR: 1.471
CiteScore: 13.6
Impact factor: 6.5
ISSN: 0925-2312, 1872-8286
Abstract
With the rapid advancement of artificial intelligence, generative models now produce images nearly indistinguishable from real-world scenarios, challenging the authenticity of media content. Existing forgery detection methods, however, are often model-specific and lack generalization. To address this, we propose a novel detection framework that, for the first time, integrates spatial, frequency, and texture features, leveraging three distinct perspectives simultaneously. The framework is composed of three modules for feature extraction and an adaptive fusion module for integrating multi-view features. Specifically, the multi-level spatial module processes both the macrostructure and microdetails of images through global and local feature branches, enabling the extraction of multi-level spatial features. The dual-frequency module employs two distinct frequency analysis techniques to capture frequency domain features, while the GramTexture module utilizes an improved ResNet18 architecture to extract shallow texture features. A dynamic adaptive feature fusion method, integrating multi-head attention mechanisms with adaptive weight allocation, is employed to effectively combine multi-view features. Extensive comparative experiments demonstrate the proposed framework’s superior performance, achieving exceptional generalization and robustness on 18 diverse test datasets. Notably, using only 10% of the ProGAN training dataset, our method achieves an average improvement of 13.70% in mean average precision (mAP) over baseline methods. These results highlight the framework’s effectiveness in handling previously unseen datasets. The implementation is publicly available at the following GitHub repository link.
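The abstract describes a dynamic adaptive fusion step that combines the spatial, frequency, and texture views via multi-head attention and adaptive weight allocation, but gives no implementation details. As a rough illustration only (not the authors' code; all names, shapes, and the single-head simplification are assumptions), such a fusion could be sketched as cross-view self-attention followed by softmax-gated weighted pooling:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_views(views, w_q, w_k, w_v, gate_w):
    """Fuse per-view feature vectors: self-attention across views,
    then adaptive (softmax-gated) weighted pooling into one vector.

    views  : (n_views, d) array, one feature vector per view
    w_q, w_k, w_v : (d, d) illustrative projection matrices
    gate_w : (d,) illustrative scoring vector for the adaptive weights
    """
    q, k, v = views @ w_q, views @ w_k, views @ w_v
    d = views.shape[1]
    attn = softmax(q @ k.T / np.sqrt(d), axis=-1)  # (n_views, n_views)
    mixed = attn @ v                               # cross-view interaction
    gates = softmax(mixed @ gate_w)                # (n_views,) adaptive weights
    return gates @ mixed                           # (d,) fused feature

# Toy stand-ins for the three views' feature vectors.
rng = np.random.default_rng(0)
d = 8
spatial, freq, texture = rng.normal(size=(3, d))
views = np.stack([spatial, freq, texture])
params = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
fused = fuse_views(views, *params, gate_w=rng.normal(size=d))
print(fused.shape)
```

In the paper the projections would be learned end-to-end and the attention would use multiple heads; this sketch only shows the shape of the computation, one fused `d`-dimensional vector from three view-specific vectors.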
Metrics
Total citations: 0
Cite this
GOST
Lin Y. et al. MFF-Net: A multi-view feature fusion network for generalized forgery image detection // Neurocomputing. 2025. Vol. 640. p. 130351.
GOST (all authors, up to 50)
Lin Y., Mao T., Chen Z., Lu H., Chen Z., Kang Y. MFF-Net: A multi-view feature fusion network for generalized forgery image detection // Neurocomputing. 2025. Vol. 640. p. 130351.
RIS
TY - JOUR
DO - 10.1016/j.neucom.2025.130351
UR - https://linkinghub.elsevier.com/retrieve/pii/S0925231225010239
TI - MFF-Net: A multi-view feature fusion network for generalized forgery image detection
T2 - Neurocomputing
AU - Lin, Ying
AU - Mao, Tenglong
AU - Chen, Ziyi
AU - Lu, Hong
AU - Chen, Zhaoyu
AU - Kang, Yan
PY - 2025
DA - 2025/08/01
PB - Elsevier
SP - 130351
VL - 640
SN - 0925-2312
SN - 1872-8286
ER -
BibTeX (up to 50 authors)
@article{2025_Lin,
author = {Ying Lin and Tenglong Mao and Ziyi Chen and Hong Lu and Zhaoyu Chen and Yan Kang},
title = {MFF-Net: A multi-view feature fusion network for generalized forgery image detection},
journal = {Neurocomputing},
year = {2025},
volume = {640},
publisher = {Elsevier},
month = {aug},
url = {https://linkinghub.elsevier.com/retrieve/pii/S0925231225010239},
pages = {130351},
doi = {10.1016/j.neucom.2025.130351}
}