Fractal Feature Modeling: Multi-Dimensional Attention and Spatial Adaptive Relationship Learning for Person Re-Identification

Mingfu Xiong
Hanmei Chen
Yeliz Karaca
Abdul Khader Jilani Saudagar
Ik Hyun Lee
Javier Del Ser
Khan Muhammad
Publication type: Journal Article
Publication date: 2025-01-10
Scimago: Q1
WoS: Q1
БС: 1
SJR: 0.636
CiteScore: 8.0
Impact factor: 2.9
ISSN: 0218-348X, 1793-6543
Abstract

Person re-identification (person Re-ID), an intelligent video-surveillance technology that retrieves the same person across different cameras, faces challenges arising from changes in a person's pose, differing camera views, and occlusion. Recently, person Re-ID equipped with attention mechanisms has emerged as one of the most active areas of study in computer vision and fractal feature modeling applications. Despite this upsurge in related research, existing fractal-attention-based methods still face two major challenges when recognizing pedestrians in unpredictable real-world environments: (1) the adaptability of a single local attention feature to hostile scenes cannot be guaranteed, and (2) methods built on attention features usually rely on linear mappings or simple variants thereof, which makes it difficult to uncover the association relationships among pedestrians with similar appearance attributes. To address these issues, this paper proposes a simple yet effective fractal feature modeling method, the multi-dimensional attention and spatial adaptive relationship learning framework (MASARF), to explore the correlations between pedestrian bodies for person Re-ID. The framework comprises a multi-dimensional fractal-attention feature learning model (MDAM) and a dual-branch graph convolutional model (DGCM). The MDAM consists of local and global attention modules that capture multi-dimensional attention features for each person. The DGCM then constructs nonlinear mapping relationships among the body regions of each person via a dual-branch graph-convolutional optimization strategy. Extensive experiments were conducted on public person Re-ID datasets (Market-1501, DukeMTMC-reID, and CUHK03). The results demonstrate that the proposed approach outperforms state-of-the-art methods by 2% to 10% in Rank-1 accuracy (mAP). Our method differs fundamentally from existing methods in feature extraction and relationship transformation, which validates its novelty in the person Re-ID domain.
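To make the described pipeline concrete, below is a minimal PyTorch sketch of how the two components could fit together: channel-wise (global) and spatial (local) attention standing in for the MDAM, and two similarity-driven graph-convolution branches over body-region features standing in for the DGCM. All module names, layer sizes, the stripe-based region split, and the similarity-based adjacency are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of the MASARF pipeline described in the abstract.
# Module names, dimensions, and the region-splitting scheme are
# assumptions for illustration; the paper's architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttention(nn.Module):
    """Channel-wise (global) attention over the whole feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # squeeze -> (B, C)
        return x * w[:, :, None, None]          # re-weight channels

class LocalAttention(nn.Module):
    """Spatial (local) attention emphasizing discriminative regions."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], 1)
        return x * torch.sigmoid(self.conv(s))

class GraphBranch(nn.Module):
    """One graph-convolution branch over R body-region node features."""
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Linear(dim, dim)

    def forward(self, nodes):                   # nodes: (B, R, D)
        # Adjacency from pairwise feature similarity: a nonlinear,
        # input-dependent mapping between regions rather than a fixed
        # linear projection.
        adj = F.softmax(nodes @ nodes.transpose(1, 2), dim=-1)
        return F.relu(self.weight(adj @ nodes))

class MASARF(nn.Module):
    def __init__(self, channels=256, regions=6):
        super().__init__()
        self.global_att = GlobalAttention(channels)
        self.local_att = LocalAttention()
        self.regions = regions
        self.branch_a = GraphBranch(channels)   # dual-branch DGCM
        self.branch_b = GraphBranch(channels)

    def forward(self, feat):                    # feat: backbone map (B, C, H, W)
        # MDAM: fuse global (channel) and local (spatial) attention.
        x = self.local_att(self.global_att(feat))
        # Split the map into horizontal stripes as body-region nodes.
        stripes = x.chunk(self.regions, dim=2)
        nodes = torch.stack([s.mean(dim=(2, 3)) for s in stripes], dim=1)
        # DGCM: two branches model region-to-region relationships.
        out = self.branch_a(nodes) + self.branch_b(nodes)
        return out.flatten(1)                   # final Re-ID embedding

feat = torch.randn(2, 256, 24, 8)               # e.g. a ResNet stage output
print(MASARF()(feat).shape)                     # torch.Size([2, 1536])
```

In this sketch, the similarity-based adjacency is what makes the region-to-region mapping nonlinear and input-dependent, in contrast to the fixed linear mappings the abstract identifies as a limitation of prior attention-based methods.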

Cite
GOST
Xiong M. et al. Fractal Feature Modeling: Multi-Dimensional Attention and Spatial Adaptive Relationship Learning for Person Re-Identification // Fractals. 2025.
GOST with all authors (up to 50)
Xiong M., Chen H., Karaca Y., Saudagar A. K. J., Lee I. H., Ser J. D., Muhammad K. Fractal Feature Modeling: Multi-Dimensional Attention and Spatial Adaptive Relationship Learning for Person Re-Identification // Fractals. 2025.
RIS
TY - JOUR
DO - 10.1142/s0218348x24400589
UR - https://www.worldscientific.com/doi/10.1142/S0218348X24400589
TI - Fractal Feature Modeling: Multi-Dimensional Attention and Spatial Adaptive Relationship Learning for Person Re-Identification
T2 - Fractals
AU - Xiong, Mingfu
AU - Chen, Hanmei
AU - Karaca, Yeliz
AU - Saudagar, Abdul Khader Jilani
AU - Lee, Ik Hyun
AU - Ser, Javier Del
AU - Muhammad, Khan
PY - 2025
DA - 2025/01/10
PB - World Scientific
SN - 0218-348X
SN - 1793-6543
ER -
BibTeX
BibTeX (up to 50 authors)
@article{2025_Xiong,
author = {Mingfu Xiong and Hanmei Chen and Yeliz Karaca and Abdul Khader Jilani Saudagar and Ik Hyun Lee and Javier Del Ser and Khan Muhammad},
title = {Fractal Feature Modeling: Multi-Dimensional Attention and Spatial Adaptive Relationship Learning for Person Re-Identification},
journal = {Fractals},
year = {2025},
publisher = {World Scientific},
month = {jan},
url = {https://www.worldscientific.com/doi/10.1142/S0218348X24400589},
doi = {10.1142/s0218348x24400589}
}