Volume 183, page 109260

Attention Induced Dual Convolutional-Capsule Network (AIDC-CN): A deep learning framework for motor imagery classification

Publication type: Journal Article
Publication date: 2024-12-01
Scimago: Q1
WoS: Q1
White List level: БС1
SJR: 1.447
CiteScore: 13
Impact Factor: 6.3
ISSN: 0010-4825, 1879-0534
Abstract
In recent times, Electroencephalography (EEG)-based motor imagery (MI) decoding has garnered significant attention due to its extensive applicability in healthcare, including areas such as assistive robotics and rehabilitation engineering. Nevertheless, the decoding of EEG signals presents considerable challenges owing to their inherent complexity, non-stationary characteristics, and low signal-to-noise ratio. Notably, deep learning-based classifiers have emerged as a prominent approach to EEG signal decoding. This study introduces a novel deep learning classifier named the Attention Induced Dual Convolutional-Capsule Network (AIDC-CN) with the specific aim of accurately categorizing various motor imagination class labels. To enhance the classifier's performance, a dual feature extraction approach leveraging spectrogram and brain connectivity networks has been employed, diversifying the feature set in the classification task. The main highlights of the proposed AIDC-CN classifier include the introduction of a dual convolution layer to handle the brain connectivity and spectrogram features, the addition of a novel self-attention module (SAM) to accentuate the relevant parts of the convolved spectrogram features, the introduction of a new cross-attention module (CAM) to refine the outputs obtained from the dual convolution layers, and the incorporation of a Gaussian Error Linear Unit (GELU)-based dynamic routing algorithm to strengthen the coupling among the primary and secondary capsule layers. Performance analysis undertaken on four public data sets demonstrates the superior performance of the proposed model with respect to the state-of-the-art techniques. The code for this model is available at https://github.com/RiteshSurChowdhury/AIDC-CN.
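The abstract mentions a GELU-based dynamic routing algorithm that strengthens the coupling between the primary and secondary capsule layers. As a rough illustration of where a GELU could enter standard capsule dynamic routing (Sabour et al.-style), the sketch below passes the agreement term through a GELU before updating the routing logits. The placement of the GELU and the function names (`dynamic_routing`, `squash`) are illustrative assumptions, not the paper's exact formulation — see the linked repository for the authors' implementation.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the Gaussian Error Linear Unit
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def squash(v, axis=-1, eps=1e-8):
    # Capsule non-linearity: shrinks vector length into [0, 1), keeps direction
    norm_sq = np.sum(v**2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * v / np.sqrt(norm_sq + eps)

def dynamic_routing(u_hat, n_iter=3):
    """u_hat: (n_primary, n_secondary, dim) prediction vectors
    from primary capsules; returns secondary capsule outputs (n_secondary, dim)."""
    n_primary, n_secondary, _ = u_hat.shape
    b = np.zeros((n_primary, n_secondary))  # routing logits
    for _ in range(n_iter):
        # coupling coefficients: softmax over secondary capsules
        e = np.exp(b - b.max(axis=1, keepdims=True))
        c = e / e.sum(axis=1, keepdims=True)
        s = (c[..., None] * u_hat).sum(axis=0)  # weighted sum -> (n_secondary, dim)
        v = squash(s)
        # hypothetical GELU-modulated agreement update (assumption, see lead-in)
        b = b + gelu(np.einsum('ijk,jk->ij', u_hat, v))
    return v
```

Because `squash` bounds each output capsule's norm below 1, the secondary capsule lengths can be read as class-presence probabilities regardless of how the agreement term is transformed.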

Top-30

Citing journals:
  • IEEE Access — 2 publications, 100%

Publishers:
  • Institute of Electrical and Electronics Engineers (IEEE) — 2 publications, 100%

  • Publications without a DOI are not counted.
  • Publication statistics are updated weekly.

Metrics: 2

GOST
Chowdhury R. S. et al. Attention Induced Dual Convolutional-Capsule Network (AIDC-CN): A deep learning framework for motor imagery classification // Computers in Biology and Medicine. 2024. Vol. 183. p. 109260.
GOST with all authors (up to 50)
Chowdhury R. S., Bose S., Ghosh S., Konar A. Attention Induced Dual Convolutional-Capsule Network (AIDC-CN): A deep learning framework for motor imagery classification // Computers in Biology and Medicine. 2024. Vol. 183. p. 109260.
RIS
TY - JOUR
DO - 10.1016/j.compbiomed.2024.109260
UR - https://linkinghub.elsevier.com/retrieve/pii/S0010482524013453
TI - Attention Induced Dual Convolutional-Capsule Network (AIDC-CN): A deep learning framework for motor imagery classification
T2 - Computers in Biology and Medicine
AU - Chowdhury, Ritesh Sur
AU - Bose, Shirsha
AU - Ghosh, Sayantani
AU - Konar, Amit
PY - 2024
DA - 2024/12/01
PB - Elsevier
SP - 109260
VL - 183
PMID - 39426071
SN - 0010-4825
SN - 1879-0534
ER -
BibTeX (up to 50 authors)
@article{2024_Chowdhury,
author = {Ritesh Sur Chowdhury and Shirsha Bose and Sayantani Ghosh and Amit Konar},
title = {Attention Induced Dual Convolutional-Capsule Network (AIDC-CN): A deep learning framework for motor imagery classification},
journal = {Computers in Biology and Medicine},
year = {2024},
volume = {183},
publisher = {Elsevier},
month = {dec},
url = {https://linkinghub.elsevier.com/retrieve/pii/S0010482524013453},
pages = {109260},
doi = {10.1016/j.compbiomed.2024.109260}
}