Weakly supervised multi-modal contrastive learning framework for predicting the HER2 scores in breast cancer
Jun Shi 1, Dongdong Sun 2, Zhiguo Jiang 3, Jun Du 4,5, Wei Wang 4,5, Yushan Zheng 6, Hai Bo Wu 4,5,7
Publication type: Journal Article
Publication date: 2025-04-01
scimago Q1
wos Q1
SJR: 1.270
CiteScore: 10.9
Impact factor: 4.9
ISSN: 08956111, 18790771
Abstract
Human epidermal growth factor receptor 2 (HER2) is an important biomarker for prognosis and prediction of treatment response in breast cancer (BC). HER2 scoring is typically performed by pathologists through microscopic observation of immunohistochemistry (IHC) images, which is labor-intensive and introduces observational bias among pathologists. Most existing methods use hand-crafted features or deep learning models on a single modality (hematoxylin and eosin (H&E) or IHC) to predict HER2 scores through supervised or weakly supervised learning. Consequently, the information from different modalities is not effectively integrated into feature learning, even though such integration can help improve HER2 scoring performance. In this paper, we propose a novel weakly supervised multi-modal contrastive learning (WSMCL) framework to predict HER2 scores in BC at the whole slide image (WSI) level. It leverages multi-modal (H&E and IHC) joint learning under the weak supervision of WSI labels to achieve HER2 score prediction. Specifically, patch features are extracted from the H&E and IHC WSIs separately, and multi-head self-attention (MHSA) is used to capture the global dependencies among patches within each modality. The patch features corresponding to the top-k and bottom-k attention scores generated by MHSA in each modality are selected as candidates for multi-modal joint learning. In particular, a multi-modal attentive contrastive learning (MACL) module is designed to guarantee the semantic alignment of the candidate features from different modalities. Extensive experiments demonstrate that the proposed WSMCL achieves better HER2 scoring performance and outperforms state-of-the-art methods. The code is available at https://github.com/HFUT-miaLab/WSMCL.
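The two steps described in the abstract, selecting top-k and bottom-k patch candidates from MHSA attention scores within each modality and then contrastively aligning the H&E and IHC candidates, can be illustrated with a minimal PyTorch sketch. This is not the authors' WSMCL/MACL implementation (see the linked repository for that); the attention-score pooling, the value of k, the temperature, and the one-to-one pairing inside the InfoNCE-style loss are illustrative assumptions.

```python
# Minimal sketch of top-k/bottom-k candidate selection and cross-modal
# contrastive alignment, assuming per-patch embeddings of dimension `dim`.
import torch
import torch.nn as nn
import torch.nn.functional as F


def select_candidates(feats: torch.Tensor, attn: nn.MultiheadAttention, k: int = 8):
    """feats: (num_patches, dim) patch embeddings of one WSI in one modality."""
    x = feats.unsqueeze(0)                             # (1, N, dim), batch_first
    _, attn_w = attn(x, x, x, need_weights=True)       # attn_w: (1, N, N), head-averaged
    scores = attn_w.mean(dim=1).squeeze(0)             # attention each patch receives (N,)
    top_idx = scores.topk(k).indices                   # k most attended patches
    bot_idx = (-scores).topk(k).indices                # k least attended patches
    return feats[torch.cat([top_idx, bot_idx])]        # (2k, dim) candidate features


def contrastive_alignment(he: torch.Tensor, ihc: torch.Tensor, tau: float = 0.07):
    """Symmetric InfoNCE over matched candidate pairs from the two modalities."""
    he, ihc = F.normalize(he, dim=-1), F.normalize(ihc, dim=-1)
    logits = he @ ihc.t() / tau                        # (2k, 2k) scaled cosine similarities
    targets = torch.arange(he.size(0))                 # assume i-th H&E pairs with i-th IHC
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    dim = 512
    attn_he = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)   # per-modality MHSA
    attn_ihc = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
    he_feats, ihc_feats = torch.randn(300, dim), torch.randn(280, dim)    # dummy patch features
    he_cand = select_candidates(he_feats, attn_he)
    ihc_cand = select_candidates(ihc_feats, attn_ihc)
    print(contrastive_alignment(he_cand, ihc_cand))    # scalar alignment loss
```

In practice the candidate features would also feed a bag-level classifier trained with the WSI label, so the contrastive term acts as an auxiliary alignment objective rather than the sole loss.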
Top-30 Publishers
Cold Spring Harbor Laboratory: 1 publication (100%)
- We do not take into account publications without a DOI.
- Statistics recalculated weekly.
Metrics
Total citations: 1
Citations from 2024: 1 (100%)
Cite this
GOST
Shi J. et al. Weakly supervised multi-modal contrastive learning framework for predicting the HER2 scores in breast cancer // Computerized Medical Imaging and Graphics. 2025. Vol. 121. p. 102502.
GOST (all authors, up to 50)
Shi J., Sun D., Jiang Z., Du J., Wang W., Zheng Y., Wu H. B. Weakly supervised multi-modal contrastive learning framework for predicting the HER2 scores in breast cancer // Computerized Medical Imaging and Graphics. 2025. Vol. 121. p. 102502.
RIS
TY - JOUR
DO - 10.1016/j.compmedimag.2025.102502
UR - https://linkinghub.elsevier.com/retrieve/pii/S0895611125000114
TI - Weakly supervised multi-modal contrastive learning framework for predicting the HER2 scores in breast cancer
T2 - Computerized Medical Imaging and Graphics
AU - Shi, Jun
AU - Sun, Dongdong
AU - Jiang, Zhiguo
AU - Du, Jun
AU - Wang, Wei
AU - Zheng, Yushan
AU - Wu, Hai Bo
PY - 2025
DA - 2025/04/01
PB - Elsevier
SP - 102502
VL - 121
SN - 0895-6111
SN - 1879-0771
ER -
BibTeX (up to 50 authors)
@article{2025_Shi,
author = {Jun Shi and Dongdong Sun and Zhiguo Jiang and Jun Du and Wei Wang and Yushan Zheng and Hai Bo Wu},
title = {Weakly supervised multi-modal contrastive learning framework for predicting the HER2 scores in breast cancer},
journal = {Computerized Medical Imaging and Graphics},
year = {2025},
volume = {121},
publisher = {Elsevier},
month = {apr},
url = {https://linkinghub.elsevier.com/retrieve/pii/S0895611125000114},
pages = {102502},
doi = {10.1016/j.compmedimag.2025.102502}
}