A Smooth Conditional Domain Adversarial Training Framework for EEG Motor Imagery Decoding


Abstract:

The brain-computer interface (BCI) based on electroencephalogram (EEG) motor imagery (MI) decoding demonstrates promising application potential. However, the domain shift between training and testing data significantly impairs the model's decoding performance. Domain adaptation (DA) has recently been developed to address this problem. Nevertheless, existing DA methods have two limitations: the extracted features are noisy, and they align only the feature distribution, which limits the generalization ability of the model. In this paper, we propose a novel smooth conditional domain adversarial training framework for motor imagery decoding under domain shift. The framework uses interactive frequency convolution and a channel attention mechanism as the feature extractor to obtain effective features, and integrates smooth conditional domain adversarial training with a batch spectral penalty to align the joint distribution of features and classes. In addition, self-iterative training is implemented through pseudo-label generation and selective outlier removal. Experimental results demonstrate that the proposed framework achieves 80.67% and 86.17% average accuracy on the BCI Competition IV 2a and 2b datasets, respectively, in cross-session experiments, outperforming the compared methods and showing that the framework improves classification on the target domain while transferring effective features.
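To make the training objective concrete, the following is a minimal PyTorch sketch of conditional domain adversarial training with a batch spectral penalty, as commonly formulated in CDAN/BSP-style methods. It is not the authors' implementation: the class and function names (DomainDiscriminator, adversarial_step), the reading of "smooth" as label smoothing on the domain labels, and all hyperparameters are assumptions; the interactive frequency convolution/channel attention feature extractor and the pseudo-label self-iterative training stage are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used in domain-adversarial training."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lamb * grad, None

def batch_spectral_penalty(feat, k=1):
    """Penalize the k largest singular values of the feature batch (BSP)."""
    s = torch.linalg.svdvals(feat)  # singular values in descending order
    return (s[:k] ** 2).sum()

class DomainDiscriminator(nn.Module):
    """Discriminator over CDAN-style (prediction x feature) conditioning.
    in_dim must equal num_classes * feature_dim."""
    def __init__(self, in_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, feat, pred, lamb):
        # Outer product of class probabilities and features -> joint conditioning
        joint = torch.bmm(pred.unsqueeze(2), feat.unsqueeze(1)).flatten(1)
        return self.net(GradReverse.apply(joint, lamb)).squeeze(1)

def adversarial_step(encoder, classifier, disc, xs, ys, xt,
                     lamb=1.0, bsp_w=1e-4, smooth=0.1):
    """One step: source CE + smoothed conditional adversarial loss + BSP.
    xs/ys: labeled source batch, xt: unlabeled target batch (assumed shapes)."""
    fs, ft = encoder(xs), encoder(xt)
    ps, pt = classifier(fs), classifier(ft)
    cls_loss = F.cross_entropy(ps, ys)

    feat = torch.cat([fs, ft])
    pred = torch.cat([ps, pt]).softmax(1).detach()  # detach to stabilize conditioning
    d_out = disc(feat, pred, lamb)
    # Smoothed domain labels (1 = source, 0 = target) -- one plausible reading of "smooth"
    d_tgt = torch.cat([torch.full((xs.size(0),), 1.0 - smooth),
                       torch.full((xt.size(0),), smooth)]).to(d_out.device)
    adv_loss = F.binary_cross_entropy_with_logits(d_out, d_tgt)

    bsp = batch_spectral_penalty(fs) + batch_spectral_penalty(ft)
    return cls_loss + adv_loss + bsp_w * bsp

In this reading, the gradient reversal drives the encoder to confuse the domain discriminator on the joint feature-class representation, while the batch spectral penalty suppresses the dominant singular values of each feature batch to preserve discriminability during alignment.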
Date of Conference: 03-06 December 2024
Date Added to IEEE Xplore: 10 January 2025
Conference Location: Lisbon, Portugal


I. Introduction

Brain-computer interface (BCI) based on the motor imagery (MI) paradigm can recognize human motion intention by decoding electroencephalogram (EEG) signals, and it plays an increasingly vital role in neurological rehabilitation. However, the non-stationary nature of EEG signals, which exhibit significant variability across sessions and subjects, presents a substantial challenge: this variability makes it difficult to develop an MI decoding method that works reliably across different subjects and sessions [1]. With the development of deep learning (DL), many studies have been devoted to designing adaptable models that mitigate the effects of EEG signal variability by learning robust features. For example, Altaheri et al. [2] employed an attention temporal convolutional network for MI cross-session classification. Zhang et al. [3] used a bidirectional recurrent neural network (RNN) to distinguish brain states. Additionally, Shi et al. [4] proposed a multiband EEG Transformer that employs temporal self-attention and spatial self-attention to decode brain states. Despite these advancements, DL models remain sensitive to the training data and are prone to overfitting, which degrades performance on testing data. This sensitivity becomes particularly problematic when the models are applied to new subjects or sessions, referred to as domains, leading to performance degradation due to domain shift. Consequently, while DL models have made significant strides in MI decoding, cross-session and cross-subject brain-state decoding remains one of the key challenges in the EEG field.
