The brain-computer interface (BCI) based on motor imagery electroencephalography (EEG) shows great potential in neurorehabilitation due to its non-invasive nature and ease of use. However, motor imagery EEG signals have a low signal-to-noise ratio and limited spatiotemporal resolution, leading to low decoding recognition rates with traditional neural networks. To address this, this paper proposed a three-dimensional (3D) convolutional neural network (CNN) method that learns spatial-frequency feature maps: the Welch method was used to calculate the power spectrum of EEG frequency bands, converting the time-series EEG into brain topographical maps carrying spatial-frequency information. A 3D network with one-dimensional and two-dimensional convolutional layers was then designed to effectively learn these features. Comparative experiments demonstrated that the average decoding recognition rate reached 86.89%, outperforming traditional methods and validating the effectiveness of this approach in motor imagery EEG decoding.
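To make the spectral step above concrete, the following Python sketch computes per-channel band power with SciPy's Welch estimator. The sampling rate, band limits, window length, and array shapes are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs=250.0, bands=((8, 13), (13, 30))):
    """Per-channel band power via Welch's method.

    eeg: (n_channels, n_samples) -- assumed layout.
    Returns (n_channels, n_bands).
    """
    # Welch PSD for every channel at once; 1 s windows give 1 Hz resolution
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs))
    df = freqs[1] - freqs[0]
    out = np.empty((eeg.shape[0], len(bands)))
    for i, (lo, hi) in enumerate(bands):
        mask = (freqs >= lo) & (freqs < hi)
        out[:, i] = psd[:, mask].sum(axis=1) * df  # integrate PSD over the band
    return out

rng = np.random.default_rng(0)
trial = rng.standard_normal((22, 1000))  # one 4 s trial, 22 channels, 250 Hz
print(band_power(trial).shape)           # (22, 2): mu and beta power per channel
```

Arranging the resulting band powers by electrode position would then yield the kind of spatial-frequency topographical map the 3D network consumes.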
Patients with amyotrophic lateral sclerosis (ALS) often have difficulty expressing their intentions through language and behavior, which prevents them from communicating properly with the outside world and seriously affects their quality of life. The brain-computer interface (BCI) has received much attention as an aid for ALS patients to communicate with the outside world, but bulky equipment causes inconvenience to patients in practical use. To improve the portability of the BCI system, this paper proposed a wearable P300-speller BCI system based on augmented reality (MR-BCI). This system used a HoloLens 2 augmented reality device to present the paradigm, an OpenBCI device to capture EEG signals, and a Jetson Nano embedded computer to process the data. Meanwhile, to optimize the system's character recognition performance, this paper proposed a convolutional neural network classification method with low computational complexity that runs on the embedded system for real-time classification. The results showed that, compared with a P300-speller BCI system based on a computer screen (CS-BCI), MR-BCI induced an increase in the amplitude of the P300 component, increased accuracy by 1.7% and 1.4% in offline and online experiments, respectively, and increased the information transfer rate by 0.7 bit/min. The MR-BCI proposed in this paper realizes a wearable BCI system without sacrificing performance, and has a positive effect on bringing BCI into clinical application.
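As a rough illustration of a low-complexity P300 classifier suitable for an embedded device, the PyTorch sketch below stacks a spatial convolution across electrodes with a strided temporal convolution. The channel count, epoch length, and layer sizes are placeholders, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class LightP300Net(nn.Module):
    """Compact CNN for binary P300 (target vs. non-target) epochs.

    Input shape (batch, 1, n_channels, n_samples); 8 channels x 200
    samples are placeholder values, not the paper's configuration.
    """
    def __init__(self, n_channels=8, n_samples=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(n_channels, 1)),          # spatial filter
            nn.BatchNorm2d(8),
            nn.ELU(),
            nn.Conv2d(8, 8, kernel_size=(1, 16), stride=(1, 4)),   # temporal filter
            nn.BatchNorm2d(8),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 8)),
            nn.Flatten(),
            nn.Linear(8 * 8, 2),                                   # target / non-target logits
        )

    def forward(self, x):
        return self.net(x)

model = LightP300Net()
epochs = torch.randn(4, 1, 8, 200)  # 4 dummy EEG epochs
print(model(epochs).shape)          # torch.Size([4, 2])
```

Keeping the kernel count small and pooling aggressively is one way to hold the parameter and FLOP budget down on hardware like the Jetson Nano.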
With breakthroughs in digitization, artificial intelligence, and other technologies, and the gradual expansion of their application fields, a growing number of studies have applied digital intelligence technologies such as exoskeleton robots, brain-computer interfaces, and spinal cord neuromodulation to improve or compensate for physical function after spinal cord injury (SCI) and to improve the self-care ability and quality of life of patients with SCI. The development of digital intelligent rehabilitation technology provides a new application platform for functional reconstruction after SCI, and these technologies have broad prospects in clinical rehabilitation after SCI. This article elaborates on the current status of exoskeleton robots, brain-computer interface technology, and spinal cord neuromodulation for functional recovery after SCI.
Using electroencephalogram (EEG) signals to control external devices has long been a research focus in the field of brain-computer interfaces (BCI). This is especially significant for people with disabilities who have lost the capacity for movement. In this paper, a P300-based BCI and microcontroller-based wireless radio frequency (RF) technology are used to design a smart home control system that can directly control household appliances, lighting, and security devices. Experimental results showed that the system was simple, reliable, and easy to popularize.
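As a sketch of how the BCI output might drive the RF stage, the snippet below forwards a hypothetical single-byte command code to the microcontroller over a serial link using pySerial. The port name, baud rate, and command codes are invented for illustration, since the actual protocol depends on the firmware.

```python
import serial  # pySerial

# Hypothetical single-byte command codes for the RF transmitter firmware;
# the real mapping is defined by the microcontroller's implementation.
COMMANDS = {
    "light_on": b"\x01",
    "light_off": b"\x02",
    "tv_on": b"\x03",
    "door_lock": b"\x04",
}

def send_command(target: str, port: str = "/dev/ttyUSB0") -> None:
    """Forward the P300 speller's selected target to the RF microcontroller."""
    with serial.Serial(port, baudrate=9600, timeout=1) as ser:
        ser.write(COMMANDS[target])

# e.g. after the P300 classifier identifies the attended item:
# send_command("light_on")
```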
Speech imagery is an emerging brain-computer interface (BCI) paradigm with the potential to provide effective communication for individuals with speech impairments. This study designed a Chinese speech imagery paradigm using three clinically relevant words ("Help me", "Sit up", and "Turn over") and collected electroencephalography (EEG) data from 15 healthy subjects. Based on these data, a Channel Attention Multi-Scale Convolutional Neural Network (CAM-Net) decoding algorithm was proposed, which combined multi-scale temporal convolutions with asymmetric spatial convolutions to extract multidimensional EEG features, and incorporated a channel attention mechanism together with a bidirectional long short-term memory network to perform channel weighting and capture temporal dependencies. Experimental results showed that CAM-Net achieved a classification accuracy of 48.54% in the three-class task, outperforming baseline models such as EEGNet and Deep ConvNet, and reached a best accuracy of 64.17% in binary classification between "Sit up" and "Turn over". This work provides a promising approach for future Chinese speech imagery BCI research and applications.
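The PyTorch sketch below illustrates the general ingredients named above: parallel multi-scale temporal convolutions, a squeeze-and-excitation style channel attention, and a bidirectional LSTM. It is a simplified sketch, not a faithful reproduction of CAM-Net; the channel counts, kernel sizes, and the omission of the asymmetric spatial convolutions are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention over feature channels."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (batch, channels, time)
        w = self.fc(x.mean(dim=2))           # global average pool over time
        return x * w.unsqueeze(2)            # re-weight each feature channel

class MultiScaleSpeechNet(nn.Module):
    """Multi-scale temporal convolutions + channel attention + BiLSTM.

    Input: (batch, n_eeg_channels, n_samples); sizes are placeholders.
    """
    def __init__(self, n_eeg=32, n_classes=3, kernel_sizes=(15, 31, 63)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(n_eeg, 16, k, padding=k // 2) for k in kernel_sizes
        )
        feat = 16 * len(kernel_sizes)
        self.attn = ChannelAttention(feat)
        self.lstm = nn.LSTM(feat, 32, batch_first=True, bidirectional=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        x = torch.cat([b(x) for b in self.branches], dim=1)  # multi-scale features
        x = self.attn(x)
        out, _ = self.lstm(x.transpose(1, 2))                # (batch, time, 64)
        return self.head(out[:, -1])                         # last time step

model = MultiScaleSpeechNet()
print(model(torch.randn(2, 32, 500)).shape)  # torch.Size([2, 3])
```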
The brain-computer interface (BCI) based on motor imagery electroencephalography (MI-EEG) enables direct information interaction between the human brain and external devices. In this paper, a multi-scale EEG feature extraction convolutional neural network model based on time-series data augmentation is proposed for decoding MI-EEG signals. First, an EEG signal augmentation method was proposed that increases the information content of training samples without changing the length of the time series, while completely retaining the original features. Then, multiple holistic and detailed features of the EEG data were adaptively extracted by a multi-scale convolution module, and the features were fused and filtered by a parallel residual module and channel attention. Finally, classification results were output by a fully connected network. Experimental results on the BCI Competition IV 2a and 2b datasets showed that the proposed model achieved average classification accuracies of 91.87% and 87.85% on the motor imagery task, respectively, with higher accuracy and stronger robustness than existing baseline models. The proposed model requires no complex signal pre-processing and benefits from multi-scale feature extraction, giving it high practical application value.
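The paper's own augmentation scheme is not detailed here, but one common EEG augmentation with the stated properties (new samples of unchanged length built entirely from real signal) is segment recombination across trials of the same class. The sketch below illustrates that idea under assumed array shapes; it is not necessarily the authors' method.

```python
import numpy as np

def segment_recombination(trials, n_segments=4, rng=None):
    """Create new same-length trials by swapping time segments across
    trials of the same class.

    trials: (n_trials, n_channels, n_samples)
    """
    rng = rng or np.random.default_rng()
    n_trials, _, n_samples = trials.shape
    bounds = np.linspace(0, n_samples, n_segments + 1, dtype=int)
    new = np.empty_like(trials)
    for i in range(n_trials):
        donors = rng.integers(0, n_trials, size=n_segments)  # random source trial per segment
        for s, d in enumerate(donors):
            new[i, :, bounds[s]:bounds[s + 1]] = trials[d, :, bounds[s]:bounds[s + 1]]
    return new

x = np.random.randn(10, 22, 1000)   # 10 trials of one class
augmented = segment_recombination(x)
print(augmented.shape)              # (10, 22, 1000): length unchanged
```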
Clinical grading diagnosis of patients with disorders of consciousness (DOC) relies on behavioral assessment, which has certain limitations. Combining multi-modal technologies with brain-computer interface (BCI) paradigms can assist in distinguishing patients in a minimally conscious state (MCS) from those in a vegetative state (VS). This study collected electroencephalogram (EEG) and functional near-infrared spectroscopy (fNIRS) signals under motor BCI paradigms from 14 DOC patients, who were divided into two groups based on clinical scores: 7 in the MCS group and 7 in the VS group. We calculated event-related desynchronization (ERD) and motor decoding accuracy to analyze the effectiveness of motor BCI paradigms in detecting consciousness states. The results showed that the classification accuracies for left-hand and right-hand movement tasks using EEG were 93.28% and 76.19% for the MCS and VS groups, respectively; the corresponding accuracies using fNIRS were 53.72% and 49.11%. When EEG and fNIRS features were combined, the classification accuracies for the MCS and VS groups were 95.56% and 87.38%, respectively. Although there was no statistically significant difference in motor decoding accuracy between the two groups, significant differences in ERD were observed between consciousness states during left-hand movement tasks (P < 0.001). This study demonstrates that motor BCI paradigms can assist in assessing the level of consciousness, with EEG being more sensitive for evaluating the intensity of residual motor intention; moreover, the ERD feature is a more sensitive indicator of motor intention intensity than BCI classification accuracy.
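For reference, ERD is conventionally quantified as the percentage band-power decrease during the task relative to a resting baseline. The Python sketch below follows that convention; the frequency band, time windows, and filter settings are illustrative assumptions rather than the study's exact analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def erd_percent(epoch, fs=250.0, band=(8, 13), base=(0.0, 2.0), task=(3.0, 6.0)):
    """ERD as percentage band-power drop from baseline to task window.

    epoch: (n_channels, n_samples) single trial.
    """
    # band-pass filter in the mu band, then square to get instantaneous power
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = filtfilt(b, a, epoch, axis=1) ** 2
    t = np.arange(epoch.shape[1]) / fs
    p_ref = power[:, (t >= base[0]) & (t < base[1])].mean(axis=1)
    p_task = power[:, (t >= task[0]) & (t < task[1])].mean(axis=1)
    return (p_ref - p_task) / p_ref * 100.0   # positive value = desynchronization

trial = np.random.randn(32, int(250 * 6))     # 6 s dummy epoch, 32 channels
print(erd_percent(trial).shape)               # (32,): one ERD value per channel
```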
Brain-computer interfaces (BCI) have great potential to replace lost upper limb function, so there has been great interest in developing BCI-controlled robotic arms. However, few studies have attempted to use a noninvasive electroencephalography (EEG)-based BCI to achieve high-level control of a robotic arm. In this paper, a high-level control architecture combining an augmented reality (AR) BCI and computer vision was designed to control a robotic arm in a pick-and-place task. A steady-state visual evoked potential (SSVEP)-based paradigm was adopted to realize the BCI system. Microsoft's HoloLens was used to build the AR environment and served as the visual stimulator for eliciting SSVEPs. The proposed AR-BCI was used to select the objects to be operated by the robotic arm, while computer vision provided the location, color, and shape of the objects. According to the outputs of the AR-BCI and computer vision, the robotic arm could autonomously pick an object and place it at a specific location. Online results from 11 healthy subjects showed that the average classification accuracy of the proposed system was 91.41%. These results verified the feasibility of combining AR, BCI, and computer vision to control a robotic arm, and are expected to provide new ideas for innovative robotic arm control approaches.
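The abstract does not state which SSVEP classifier was used; a standard approach to SSVEP frequency detection is canonical correlation analysis (CCA) against sine-cosine reference signals, sketched below with assumed stimulus frequencies, channel count, and segment length.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_cca_detect(eeg, fs=250.0, freqs=(8.0, 10.0, 12.0, 15.0), n_harm=2):
    """Pick the stimulus frequency whose sine/cosine reference correlates
    best with the EEG segment.

    eeg: (n_channels, n_samples)
    """
    t = np.arange(eeg.shape[1]) / fs
    scores = []
    for f in freqs:
        # reference set: sines and cosines at the frequency and its harmonics
        ref = np.vstack([fn(2 * np.pi * f * (h + 1) * t)
                         for h in range(n_harm) for fn in (np.sin, np.cos)])
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg.T, ref.T)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return freqs[int(np.argmax(scores))]

segment = np.random.randn(8, 500)   # 2 s of 8-channel EEG at 250 Hz
print(ssvep_cca_detect(segment))    # detected stimulus frequency in Hz
```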
The brain-computer interface (BCI) can be used to control external devices directly through electroencephalogram (EEG) information. A multi-linear principal component analysis (MPCA) framework was used to overcome the limitations of traditional principal component analysis (PCA) and two-dimensional principal component analysis (2DPCA) in processing multichannel EEG signals in tensor form. Based on MPCA, we used tensor-to-matrix projection to achieve dimensionality reduction and feature extraction, and then classified the features with a Fisher linear classifier. We evaluated this method on BCI Competition II dataset 4 and BCI Competition IV dataset 3, using a second-order tensor representation of time-space EEG data and a third-order tensor representation of time-space-frequency EEG data. Through careful tuning of the parameters P and Q, results superior to those of other dimensionality reduction methods were obtained: for the second-order tensor, the highest accuracy rates reached 81.0% and 40.1%, and for the third-order tensor, 76.0% and 43.5%, respectively.
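As a minimal illustration of MPCA, the sketch below learns one projection matrix per tensor mode by alternating eigen-decompositions of mode-wise scatter matrices, following the usual MPCA recipe. The ranks (the analogues of P and Q), iteration count, and data shapes are illustrative, not the tuned values from the paper.

```python
import numpy as np

def mode_unfold(x, mode):
    """Unfold tensor x along `mode` into a matrix (I_mode x rest)."""
    return np.moveaxis(x, mode, 0).reshape(x.shape[mode], -1)

def mpca(samples, ranks, n_iter=3):
    """Minimal multilinear PCA: one projection matrix per tensor mode.

    samples: (M, I1, ..., IN) stack of tensor samples;
    ranks: target dimension per mode.
    """
    mean = samples.mean(axis=0)
    X = samples - mean
    N = X.ndim - 1
    # initialize each mode's projection from its total scatter matrix
    U = []
    for n in range(N):
        S = sum(mode_unfold(x, n) @ mode_unfold(x, n).T for x in X)
        _, vecs = np.linalg.eigh(S)
        U.append(vecs[:, ::-1][:, :ranks[n]])
    # alternating refinement: re-solve each mode with the others fixed
    for _ in range(n_iter):
        for n in range(N):
            S = np.zeros((X.shape[n + 1], X.shape[n + 1]))
            for x in X:
                y = x
                for m in range(N):
                    if m != n:  # project all modes except the current one
                        y = np.moveaxis(np.tensordot(U[m].T, y, axes=(1, m)), 0, m)
                S += mode_unfold(y, n) @ mode_unfold(y, n).T
            _, vecs = np.linalg.eigh(S)
            U[n] = vecs[:, ::-1][:, :ranks[n]]
    return U, mean

# e.g. 40 trials of (channels x time) second-order tensors
trials = np.random.randn(40, 16, 128)
U, mean = mpca(trials, ranks=(4, 10))
feats = np.stack([U[0].T @ (t - mean) @ U[1] for t in trials]).reshape(40, -1)
print(feats.shape)  # (40, 40): reduced features ready for a Fisher classifier
```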