Multi-Domain Features and Multi-Task Learning for Steady-State Visual Evoked Potential-Based Brain–Computer Interfaces
Brain–computer interfaces (BCIs) enable people to communicate with other people or with devices, and improving BCI performance is essential for developing real-life applications. In this study, a steady-state visual evoked potential-based BCI (SSVEP-based BCI) with multi-domain features and multi-task learning is developed. To accurately represent the characteristics of an SSVEP signal, its time- and frequency-domain representations are selected as multi-domain features. Separate convolutional neural networks are applied to the time- and frequency-domain signals to extract their embedding features effectively. An element-wise addition operation followed by batch normalization is applied to fuse the time- and frequency-domain features. A sequence of convolutional layers is then adopted to derive discriminative embedding features for classification. Finally, multi-task learning-based neural networks are used to identify the target stimulus correctly. The experimental results show that the proposed approach outperforms EEGNet, multi-task learning-based neural networks, canonical correlation analysis (CCA), and filter bank CCA (FBCCA). Moreover, because it works with shorter inputs, the proposed approach is more suitable for real-time BCIs than systems that require a 4 s input duration. In future work, applying multi-task learning to the embedding features extracted by FBCCA may further improve the performance of the BCI system.
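The fusion step described above (domain-specific feature extraction, element-wise addition, then batch normalization) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the signal length, kernel sizes, and the use of a magnitude spectrum as the frequency-domain input are all assumptions, and single 1-D convolutions stand in for the full CNNs.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1-D convolution, a stand-in for one CNN layer."""
    n = len(x) - len(kernel) + 1
    return np.array([np.dot(x[i:i + len(kernel)], kernel) for i in range(n)])

def batch_norm(x, eps=1e-5):
    """Normalize a feature vector to zero mean and unit variance."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

rng = np.random.default_rng(0)

# Hypothetical SSVEP trial: raw time-domain samples, with the magnitude
# spectrum used as the frequency-domain representation (an assumption).
time_signal = rng.standard_normal(256)
freq_signal = np.abs(np.fft.rfft(time_signal))             # length 129

# Separate (random, illustrative) kernels per domain, sized so both
# embeddings have the same length and can be added element-wise.
time_feat = conv1d(time_signal, rng.standard_normal(136))  # 256-136+1 = 121
freq_feat = conv1d(freq_signal, rng.standard_normal(9))    # 129-9+1  = 121

# Fusion step from the abstract: element-wise addition + batch normalization.
fused = batch_norm(time_feat + freq_feat)
```

In a full model, `fused` would then pass through the subsequent convolutional layers and the multi-task classification heads; element-wise addition requires the two embeddings to share a shape, which is why the kernel sizes above are chosen to align the output lengths.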