The brain-computer interface (BCI) based on motor imagery electroencephalography (MI-EEG) enables direct information exchange between the human brain and external devices. In this paper, a multi-scale EEG feature extraction convolutional neural network model based on time-series data augmentation is proposed for decoding MI-EEG signals. First, an EEG signal augmentation method was proposed that increased the information content of training samples without changing the length of the time series, while fully preserving the original features. Then, multiple holistic and detailed features of the EEG data were adaptively extracted by a multi-scale convolution module, and the features were fused and filtered by a parallel residual module and channel attention. Finally, classification results were output by a fully connected network. Experimental results on the BCI Competition IV 2a and 2b datasets showed that the proposed model achieved average classification accuracies of 91.87% and 87.85% on the motor imagery task, respectively, demonstrating higher accuracy and stronger robustness than existing baseline models. The proposed model does not require complex signal pre-processing and has the advantage of multi-scale feature extraction, giving it high practical application value.
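To make the multi-scale extraction and channel-attention steps concrete, the following is a minimal PyTorch sketch of a multi-scale temporal convolution block with squeeze-and-excitation style channel attention. The kernel sizes, channel counts and attention design are illustrative assumptions; the abstract does not specify the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel temporal convolutions at several scales, fused and re-weighted by channel attention."""
    def __init__(self, in_ch=22, out_ch=16, kernel_sizes=(15, 31, 63)):
        super().__init__()
        # One temporal convolution branch per scale; padding keeps the sequence length.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_ch, out_ch, k, padding=k // 2),
                nn.BatchNorm1d(out_ch),
                nn.ELU(),
            )
            for k in kernel_sizes
        ])
        fused = out_ch * len(kernel_sizes)
        # Channel attention: squeeze (global average pooling) then excitation (two linear layers).
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(fused, fused // 4),
            nn.ReLU(),
            nn.Linear(fused // 4, fused),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (batch, EEG channels, time)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        weights = self.attn(feats).unsqueeze(-1)
        return feats * weights                  # attention-weighted multi-scale features

x = torch.randn(8, 22, 1000)                    # 8 trials, 22 channels, 1000 samples
print(MultiScaleBlock()(x).shape)               # torch.Size([8, 48, 1000])
```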
Motor imagery (MI) is a mental process that can be recognized by electroencephalography (EEG) without actual movement. It has significant research value and application potential in the field of brain-computer interface (BCI) technology. To address the challenges posed by the non-stationary nature and low signal-to-noise ratio of MI-EEG signals, this study proposed a Riemannian spatial filtering and domain adaptation (RSFDA) method for improving the accuracy and efficiency of cross-session MI-BCI classification tasks. The approach addressed the issue of inconsistent data distribution between source and target domains through a multi-module collaborative framework, which enhanced the generalization capability of cross-session MI-EEG classification models. Comparative experiments were conducted on three public datasets to evaluate RSFDA against eight existing methods in terms of classification accuracy and computational efficiency. The experimental results demonstrated that RSFDA achieved an average classification accuracy of 79.37%, outperforming the state-of-the-art deep learning method Tensor-CSPNet (76.46%) by 2.91% (P < 0.01). Furthermore, the proposed method showed significantly lower computational costs, requiring only approximately 3 minutes of average training time compared to Tensor-CSPNet’s 25 minutes, representing a reduction of 22 minutes. These findings indicate that the RSFDA method demonstrates superior performance in cross-session MI-EEG classification tasks by effectively balancing accuracy and efficiency. However, its applicability in complex transfer learning scenarios remains to be further investigated.
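As an illustration of the Riemannian-geometry perspective mentioned above, the NumPy sketch below shows a common covariance re-centering (alignment) step used in cross-session transfer: each session's trial covariances are whitened by the inverse square root of their mean so that both domains share a common reference. The RSFDA method itself is not specified in the abstract; this is a generic building block, and the Euclidean mean is used here in place of a true Riemannian (geometric) mean for brevity.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def trial_covariances(trials):
    """trials: (n_trials, n_channels, n_samples) -> (n_trials, n_channels, n_channels)."""
    return np.stack([np.cov(t) for t in trials])

def recenter(covs):
    """Whiten trial covariances by the inverse square root of their mean covariance."""
    ref = covs.mean(axis=0)                      # reference matrix (Euclidean mean here)
    w = fractional_matrix_power(ref, -0.5)       # ref^(-1/2)
    return np.stack([w @ c @ w.T for c in covs])

rng = np.random.default_rng(0)
source = rng.standard_normal((20, 8, 250))       # 20 trials, 8 channels, 250 samples
target = 2.0 * rng.standard_normal((20, 8, 250)) # different scale = shifted distribution
src_aligned = recenter(trial_covariances(source))
tgt_aligned = recenter(trial_covariances(target))
# After re-centering, each domain's mean covariance is close to the identity matrix.
print(np.allclose(src_aligned.mean(axis=0), np.eye(8), atol=1e-6))
```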
The bidirectional closed-loop motor imagery brain-computer interface (MI-BCI) is an emerging method for active rehabilitation training of motor dysfunction, extensively tested in both laboratory and clinical settings. However, no standardized method for evaluating its rehabilitation efficacy has been established, and relevant literature remains limited. To facilitate the clinical translation of bidirectional closed-loop MI-BCI, this article first introduced its fundamental principles, reviewed the rehabilitation training cycle and methods for evaluating rehabilitation efficacy, and summarized approaches for evaluating system usability, user satisfaction and usage. Finally, the challenges associated with evaluating the rehabilitation efficacy of bidirectional closed-loop MI-BCI were discussed, aiming to promote its broader adoption and standardization in clinical practice.
This paper proposes a motor imagery recognition algorithm based on feature fusion and transfer adaptive boosting (TrAdaboost) to address the low accuracy of cross-subject motor imagery (MI) recognition, thereby increasing the reliability of MI-based brain-computer interfaces (BCI) for cross-individual use. Time-frequency domain features of MI are obtained using an autoregressive model, power spectral density and the discrete wavelet transform, spatial domain features are extracted with the filter bank common spatial pattern, and nonlinear features are extracted with multi-scale dispersion entropy. The IV-2a dataset from the 4th International BCI Competition was used for the binary classification task, with the pattern recognition model constructed by combining the improved TrAdaboost ensemble learning algorithm with support vector machine (SVM), k-nearest neighbor (KNN), and mind evolutionary algorithm-based back propagation (MEA-BP) neural network classifiers. The results show that the SVM-based TrAdaboost ensemble learning algorithm performs best when 30% of the target domain instance data is transferred, with an average classification accuracy of 86.17%, a Kappa value of 0.7233, and an AUC value of 0.8498. These results suggest that the algorithm can be used to recognize MI signals across individuals, providing a new way to improve the generalization capability of BCI recognition models.
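For readers unfamiliar with TrAdaboost, the sketch below implements the classical instance re-weighting loop (source-domain weights are decreased and target-domain weights increased on misclassification) with an SVM base learner on synthetic features. It illustrates the general mechanism only, not the authors' improved variant or their fused MI features.

```python
import numpy as np
from sklearn.svm import SVC

def tradaboost(Xs, ys, Xt, yt, n_rounds=10):
    """Xs/ys: source-domain data; Xt/yt: small labeled target-domain set."""
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    n_s = len(ys)
    w = np.ones(len(y))                               # instance weights
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_s) / n_rounds))
    models, betas = [], []
    for _ in range(n_rounds):
        clf = SVC(kernel="rbf").fit(X, y, sample_weight=w)
        miss = (clf.predict(X) != y).astype(float)
        # Weighted error measured on the target-domain portion only.
        eps = np.sum(w[n_s:] * miss[n_s:]) / np.sum(w[n_s:])
        eps = np.clip(eps, 1e-6, 0.499)
        beta_t = eps / (1.0 - eps)
        w[:n_s] *= beta_src ** miss[:n_s]             # shrink misclassified source weights
        w[n_s:] *= beta_t ** (-miss[n_s:])            # grow misclassified target weights
        models.append(clf)
        betas.append(beta_t)
    return models, betas

def predict(models, betas, Xnew):
    """Weighted vote over the later half of the rounds, as in the original TrAdaBoost."""
    half = len(models) // 2
    score = sum(np.log(1.0 / b) * (m.predict(Xnew) == 1)
                for m, b in zip(models[half:], betas[half:]))
    threshold = 0.5 * sum(np.log(1.0 / b) for b in betas[half:])
    return (score >= threshold).astype(int)

rng = np.random.default_rng(1)                        # toy two-class features
Xs = rng.normal(0.0, 1.0, (100, 6)); ys = (Xs[:, 0] > 0.0).astype(int)
Xt = rng.normal(0.5, 1.0, (30, 6));  yt = (Xt[:, 0] > 0.5).astype(int)
models, betas = tradaboost(Xs, ys, Xt, yt)
print(predict(models, betas, Xt)[:10])
```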
Objective To investigate the feasibility and effectiveness of a motor imagery-based brain-computer interface combined with passive wrist movement in chronic stroke patients with wrist extension impairment. Methods Fifteen chronic stroke patients with a mean age of (47.60 ± 14.66) years were recruited from March 2017 to June 2018. At baseline, motor imagery ability was assessed first. Then the motor imagery-based brain-computer interface with passive wrist movement was given as the intervention. Both the range of motion of the paretic wrist and the Barthel index were assessed before and after the intervention. Results Among the 15 chronic stroke patients admitted to the study, 12 finished the whole therapy and 3 failed to pass the initial assessment. After the therapy, the 12 participants who completed all treatment sessions and follow-up showed improved ability to control the electroencephalogram; 9 of them regained the ability to actively extend the affected wrist, while the other 3 did not (active wrist extension rate: 75%). Activities of daily living did not change significantly before and after the intervention, and no discomfort was reported after daily treatment. Conclusion In chronic stroke patients with wrist extension impairment, motor imagery-based brain-computer interface training with passive wrist movement is feasible and effective.
Motor imagery electroencephalogram (EEG) signals are non-stationary time series with a low signal-to-noise ratio, so single-channel EEG analysis methods have difficulty effectively describing the interaction characteristics between multi-channel signals. This paper proposed a deep learning network model based on a multi-channel attention mechanism. First, we performed time-frequency sparse decomposition on the pre-processed data, which enhanced the differences in the time-frequency characteristics of the EEG signals. Then we used an attention module to map the data in time and space, so that the model could make full use of the characteristics of the different EEG channels. Finally, an improved temporal convolutional network (TCN) was used for feature fusion and classification. The BCI Competition IV-2a dataset was used to verify the proposed algorithm. The experimental results showed that the proposed algorithm effectively improved the classification accuracy of motor imagery EEG signals, achieving an average accuracy of 83.03% across 9 subjects and outperforming existing methods. By enhancing the differences between features of different motor imagery EEG data, the proposed method provides a useful reference for improving classifier performance.
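The following is a minimal PyTorch sketch of a dilated causal convolution block, the basic building unit of a TCN such as the one used for feature fusion above. The authors' improved TCN and attention mapping are not detailed in the abstract, so the channel count, kernel size and dilation here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """One TCN residual block: two dilated causal convolutions plus a skip connection."""
    def __init__(self, channels=32, kernel_size=3, dilation=2):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # left padding preserves causality
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.act = nn.ELU()
        self.drop = nn.Dropout(0.3)

    def forward(self, x):                                # x: (batch, feature channels, time)
        out = nn.functional.pad(x, (self.pad, 0))        # pad only on the left (past)
        out = self.drop(self.act(self.conv1(out)))
        out = nn.functional.pad(out, (self.pad, 0))
        out = self.drop(self.act(self.conv2(out)))
        return self.act(out + x)                         # residual connection

x = torch.randn(4, 32, 500)                              # 4 trials, 32 feature channels
print(TemporalBlock()(x).shape)                          # torch.Size([4, 32, 500])
```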
The effective classification of multi-task motor imagery electroencephalogram (EEG) signals helps achieve accurate multi-dimensional human-computer interaction, and exploiting the strong inter-subject frequency-domain specificity can improve classification accuracy and robustness. Therefore, this paper proposed a multi-task EEG signal classification method based on adaptive time-frequency common spatial pattern (CSP) combined with a convolutional neural network (CNN). Each subject's personalized rhythm characteristics were extracted by adaptive spectrum awareness, spatial features were calculated using one-versus-rest CSP, and composite time-domain features were then characterized to construct multi-level spatial-temporal-frequency fusion features. Finally, the CNN was used to perform high-precision and highly robust four-task classification. The proposed algorithm was verified on a self-collected dataset of 10 subjects (33 ± 3 years old, inexperienced) and on the BCI Competition IV-2a dataset, where its average accuracy for the four-task classification reached 93.96% and 84.04%, respectively. Compared with other advanced algorithms, the average classification accuracy was significantly improved, and the spread of accuracy across subjects was significantly reduced on the public dataset. The results show that the proposed algorithm performs well in multi-task classification and can effectively improve classification accuracy and robustness.
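As a concrete reference for the spatial-feature step, the sketch below computes one-versus-rest CSP filters and log-variance features with NumPy. The adaptive spectrum-awareness step, the composite time-domain features and the CNN are omitted, and the synthetic data and filter counts are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def mean_cov(trials):
    """Average trace-normalized spatial covariance over trials of shape (n, channels, samples)."""
    covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
    return np.mean(covs, axis=0)

def ovr_csp_filters(trials, labels, target, n_filters=2):
    """CSP filters separating the `target` class from all remaining classes."""
    c_target = mean_cov(trials[labels == target])
    c_rest = mean_cov(trials[labels != target])
    # Generalized eigenvalue problem: C_target w = lambda (C_target + C_rest) w
    vals, vecs = eigh(c_target, c_target + c_rest)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_filters], order[-n_filters:]])
    return vecs[:, picks].T                              # (2 * n_filters, channels)

def csp_features(trial, filters):
    """Log-variance of the spatially filtered trial."""
    z = filters @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())

rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 22, 500))              # 40 trials, 22 channels
labels = np.repeat(np.arange(4), 10)                     # four MI classes, 10 trials each
filters = ovr_csp_filters(trials, labels, target=0)
print(csp_features(trials[0], filters).shape)            # (4,)
```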
Transfer learning holds potential research value and application prospects in motor imagery electroencephalography (MI-EEG)-based brain-computer interface (BCI) rehabilitation systems, and the source domain classification model and the transfer strategy are the two key aspects that directly affect the performance and transfer efficiency of the target domain model. Therefore, we propose a parameter transfer learning method based on a shallow visual geometry group network (PTL-sVGG). First, the Pearson correlation coefficient is used to screen the subjects of the source domain, and the short-time Fourier transform is applied to the MI-EEG data of each selected subject to obtain time-frequency spectrogram images (TFSI). Then, the VGG-16 architecture is simplified and organized into blocks, and the modified sVGG model is pre-trained with the TFSI of the source domain. Furthermore, a block-based freeze-and-fine-tune transfer strategy is designed to quickly find and freeze the block that contributes most to the sVGG model, while the remaining blocks are fine-tuned with the TFSI of the target subjects to obtain the target domain classification model. Extensive experiments on public MI-EEG datasets show that the average recognition rate and Kappa value of PTL-sVGG are 94.9% and 0.898, respectively. The results show that subject screening is beneficial to improving model performance in the source domain, and that the block-based transfer strategy enhances transfer efficiency, realizing rapid and effective transfer of model parameters across subjects on datasets with different numbers of channels. This helps reduce the calibration time of BCI systems and promotes the application of BCI technology in rehabilitation engineering.
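The block-wise freeze-and-fine-tune idea can be sketched in PyTorch as below: a simplified VGG-style network is split into blocks, one block is frozen, and only the remaining parameters are handed to the optimizer. The block layout, input size and the choice of which block to freeze are illustrative assumptions rather than the exact sVGG design or its contribution measure.

```python
import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch):
    """Two 3x3 convolutions followed by max pooling, in the VGG style."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
    )

class ShallowVGG(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.blocks = nn.ModuleList([vgg_block(3, 16), vgg_block(16, 32), vgg_block(32, 64)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes))

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return self.head(x)

model = ShallowVGG()                                # assume source-domain pre-training here
for p in model.blocks[0].parameters():              # freeze the most "contributing" block
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

# One fine-tuning step on a dummy batch of target-domain spectrogram images.
x = torch.randn(4, 3, 64, 64)
y = torch.randint(0, 2, (4,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```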
The brain-computer interface (BCI) based on motor imagery electroencephalography (EEG) shows great potential in neurorehabilitation due to its non-invasive nature and ease of use. However, motor imagery EEG signals have low signal-to-noise ratios and low spatiotemporal resolution, leading to low decoding recognition rates with traditional neural networks. To address this, this paper proposed a three-dimensional (3D) convolutional neural network (CNN) method that learns spatial-frequency feature maps: the Welch method was used to calculate the power spectra of EEG frequency bands, converting the time-series EEG into brain topographical maps carrying spatial-frequency information. A 3D network with one-dimensional and two-dimensional convolutional layers was designed to effectively learn these features. Comparative experiments showed that the average decoding recognition rate reached 86.89%, outperforming traditional methods and validating the effectiveness of this approach for motor imagery EEG decoding.
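The spatial-frequency feature step can be illustrated as follows: the Welch method estimates each channel's power in a few frequency bands, producing a channels-by-bands matrix per trial that would then be interpolated onto a scalp topography and fed to the 3D CNN (both omitted here). The sampling rate and band definitions are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 250                                            # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(trial):
    """trial: (n_channels, n_samples) -> (n_channels, n_bands) band-power matrix."""
    freqs, psd = welch(trial, fs=FS, nperseg=FS, axis=-1)
    feats = []
    for low, high in BANDS.values():
        mask = (freqs >= low) & (freqs < high)
        feats.append(psd[:, mask].mean(axis=-1))    # average power inside the band
    return np.stack(feats, axis=-1)

trial = np.random.randn(22, 4 * FS)                 # 22 channels, 4-second trial
print(band_power_features(trial).shape)             # (22, 3)
```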
Clinical grading diagnosis of patients with disorders of consciousness (DOC) relies on behavioral assessment, which has certain limitations. Combining multi-modal technologies and brain-computer interface (BCI) paradigms can assist in identifying patients in a minimally conscious state (MCS) and a vegetative state (VS). This study collected electroencephalogram (EEG) and functional near-infrared spectroscopy (fNIRS) signals under motor BCI paradigms from 14 DOC patients, who were divided into two groups based on clinical scores: 7 in the MCS group and 7 in the VS group. We calculated event-related desynchronization (ERD) and motor decoding accuracy to analyze the effectiveness of motor BCI paradigms in detecting consciousness states. The results showed that the classification accuracies for left-hand and right-hand movement tasks using EEG were 93.28% and 76.19% for the MCS and VS groups, respectively, while the corresponding accuracies using fNIRS were 53.72% and 49.11%. When EEG and fNIRS features were combined, the classification accuracies for the MCS and VS groups were 95.56% and 87.38%, respectively. Although there was no statistically significant difference in motor decoding accuracy between the two groups, significant differences in ERD were observed between consciousness states during left-hand movement tasks (P < 0.001). This study demonstrates that motor BCI paradigms can assist in assessing the level of consciousness, with EEG being more sensitive for evaluating residual motor intention intensity. Moreover, ERD is a more sensitive indicator of motor intention intensity than BCI classification accuracy.
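For reference, the sketch below shows the classical ERD computation for a single channel (band-pass filter, square, trial-average, then percentage change relative to a baseline window). The frequency band, window boundaries and sampling rate are illustrative assumptions and do not reproduce the study's exact analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250                                            # assumed sampling rate in Hz

def erd_curve(trials, band=(8, 13), baseline=(0, 2 * FS)):
    """trials: (n_trials, n_samples) for one channel -> ERD% time course."""
    b, a = butter(4, band, btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, trials, axis=-1)
    power = (filtered ** 2).mean(axis=0)            # trial-averaged band power
    ref = power[baseline[0]:baseline[1]].mean()     # mean power in the baseline window
    return 100.0 * (power - ref) / ref              # negative values indicate ERD

trials = np.random.randn(30, 6 * FS)                # 30 trials, 6 s each, one channel
erd = erd_curve(trials)
print(erd.shape, round(float(erd[3 * FS]), 2))      # ERD% at the 3-second mark
```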