Volume 10, Issue 3, Pages 2184–2190

Dual Attention-Aided Cooperative Deep-Spatiotemporal-Feature-Extraction Network for Semi-Supervised Soft Sensing

XuDong Shi 1
Qi Kang 2
MengChu Zhou 3
Hanqiu Bao 2
Jing An 4
Abdullah Abusorrah 5
Kefan Wang 6
Yusuf Al-Turki 7
Publication type: Journal Article
Publication date: 2025-03-01
Scimago: Q1
WoS: Q1
SJR: 1.481
CiteScore: 10.3
Impact factor: 5.3
ISSN: 2377-3766, 2377-3774
Abstract
Soft sensing is a promising approach to predicting key quality variables in various industries. A major obstacle to building an accurate data-driven soft sensor is the scarcity of labeled data and the difficulty of extracting useful information from unlabeled data. To mitigate this issue, this work proposes a semi-supervised soft sensing method called the dual attention-aided cooperative deep spatiotemporal-feature-extraction network. It leverages an encoder-decoder structure to explicitly exploit the spatial and temporal information in both labeled and unlabeled data, enabling efficient use of the latter to improve prediction performance. The encoder captures detailed spatiotemporal dependencies, with a dual attention mechanism developed for feature extraction. In addition, a gated neuron is added between the encoder and decoder to boost model accuracy by quantifying the contributions of the extracted features and adaptively fusing them. To optimize the model on both labeled and unlabeled data, a mixed-form loss is employed in the decoder. Experiments carried out on a real-life industrial process demonstrate that the proposed model achieves state-of-the-art performance.
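The abstract names four ingredients: a dual attention encoder, a gated neuron fusing the two feature streams, a decoder, and a mixed-form loss over labeled and unlabeled samples. The paper's exact layers are not given here, so the PyTorch sketch below only illustrates those ideas under stated assumptions: the class names (DualAttentionSoftSensor, GatedFusion), the use of nn.MultiheadAttention for the two attention streams (one over time steps, one over process variables), the mean pooling, and the weighting factor alpha are all hypothetical choices, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedFusion(nn.Module):
    """Gated neuron (sketch): learns a gate that adaptively blends two feature streams."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, spatial, temporal):
        g = torch.sigmoid(self.gate(torch.cat([spatial, temporal], dim=-1)))
        return g * spatial + (1.0 - g) * temporal

class DualAttentionSoftSensor(nn.Module):
    """Encoder-decoder soft sensor sketch: dual attention encoder, gated fusion,
    a regression head for the quality variable, and a reconstruction head that
    lets unlabeled samples contribute to training."""
    def __init__(self, n_vars, seq_len, d_model=64, n_heads=4):
        super().__init__()
        self.embed_t = nn.Linear(n_vars, d_model)    # one token per time step
        self.embed_s = nn.Linear(seq_len, d_model)   # one token per process variable
        self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.fuse = GatedFusion(d_model)
        self.regressor = nn.Linear(d_model, 1)                      # quality-variable head
        self.reconstructor = nn.Linear(d_model, seq_len * n_vars)   # decoder head
        self.seq_len, self.n_vars = seq_len, n_vars

    def forward(self, x):                        # x: (batch, seq_len, n_vars)
        h_t = self.embed_t(x)                    # (batch, seq_len, d_model)
        t_feat, _ = self.temporal_attn(h_t, h_t, h_t)
        t_feat = t_feat.mean(dim=1)              # pool over time -> (batch, d_model)

        h_s = self.embed_s(x.transpose(1, 2))    # (batch, n_vars, d_model)
        s_feat, _ = self.spatial_attn(h_s, h_s, h_s)
        s_feat = s_feat.mean(dim=1)              # pool over variables -> (batch, d_model)

        z = self.fuse(s_feat, t_feat)            # gated neuron between encoder and decoder
        y_hat = self.regressor(z).squeeze(-1)    # predicted quality variable
        x_hat = self.reconstructor(z).view(-1, self.seq_len, self.n_vars)
        return y_hat, x_hat

def mixed_loss(y_hat, y, x_hat, x, labeled_mask, alpha=0.5):
    """Mixed-form loss sketch: supervised regression error on labeled samples plus
    a reconstruction term over all samples, labeled and unlabeled alike.
    labeled_mask is a float tensor with 1 where a quality label exists, else 0;
    alpha is an assumed trade-off weight."""
    sup = ((y_hat - y) ** 2 * labeled_mask).sum() / labeled_mask.sum().clamp(min=1.0)
    rec = F.mse_loss(x_hat, x)
    return sup + alpha * rec
```

In this sketch, a batch mixes labeled and unlabeled windows: x has shape (batch, seq_len, n_vars), y holds a placeholder (e.g., zero) wherever no label exists, and labeled_mask zeroes those entries out of the supervised term, so unlabeled samples influence training only through the reconstruction term.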

Cite this

GOST
Shi X. et al. Dual Attention-Aided Cooperative Deep-Spatiotemporal-Feature-Extraction Network for Semi-Supervised Soft Sensing // IEEE Robotics and Automation Letters. 2025. Vol. 10. No. 3. pp. 2184-2190.
GOST (all authors)
Shi X., Kang Q., Zhou M., Bao H., An J., Abusorrah A., Wang K., Al-Turki Y. Dual Attention-Aided Cooperative Deep-Spatiotemporal-Feature-Extraction Network for Semi-Supervised Soft Sensing // IEEE Robotics and Automation Letters. 2025. Vol. 10. No. 3. pp. 2184-2190.
RIS
TY - JOUR
DO - 10.1109/lra.2024.3524901
UR - https://ieeexplore.ieee.org/document/10819627/
TI - Dual Attention-Aided Cooperative Deep-Spatiotemporal-Feature-Extraction Network for Semi-Supervised Soft Sensing
T2 - IEEE Robotics and Automation Letters
AU - Shi, XuDong
AU - Kang, Qi
AU - Zhou, MengChu
AU - Bao, Hanqiu
AU - An, Jing
AU - Abusorrah, Abdullah
AU - Wang, Kefan
AU - Al-Turki, Yusuf
PY - 2025
DA - 2025/03/01
PB - Institute of Electrical and Electronics Engineers (IEEE)
SP - 2184-2190
IS - 3
VL - 10
SN - 2377-3766
SN - 2377-3774
ER -
BibTeX
@article{2025_Shi,
author = {XuDong Shi and Qi Kang and MengChu Zhou and Hanqiu Bao and Jing An and Abdullah Abusorrah and Kefan Wang and Yusuf Al-Turki},
title = {Dual Attention-Aided Cooperative Deep-Spatiotemporal-Feature-Extraction Network for Semi-Supervised Soft Sensing},
journal = {IEEE Robotics and Automation Letters},
year = {2025},
volume = {10},
publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
month = {mar},
url = {https://ieeexplore.ieee.org/document/10819627/},
number = {3},
pages = {2184--2190},
doi = {10.1109/lra.2024.3524901}
}
MLA
Shi, XuDong, et al. “Dual Attention-Aided Cooperative Deep-Spatiotemporal-Feature-Extraction Network for Semi-Supervised Soft Sensing.” IEEE Robotics and Automation Letters, vol. 10, no. 3, Mar. 2025, pp. 2184-2190. https://ieeexplore.ieee.org/document/10819627/.