Affective brain-computer interfaces (aBCIs) have important application value in the field of human-computer interaction. Electroencephalogram (EEG) has attracted wide attention in the field of emotion recognition because of its advantages in temporal resolution, reliability and accuracy. However, the non-stationary characteristics and individual differences of EEG limit the generalization of emotion recognition models across time and across subjects. In this paper, in order to recognize emotional states across different subjects and sessions, we propose a new domain adaptation method, maximum classifier discrepancy for domain adversarial neural networks (MCD_DA). In a neural network emotion recognition model, a shallow feature extractor is trained adversarially against a domain classifier and an emotion classifier, so that the feature extractor produces domain-invariant representations while the classifiers learn task-specific decision boundaries, thereby achieving approximate joint-distribution adaptation. The experimental results showed that the average classification accuracy of this method was 88.33%, compared with 58.23% for a traditional generic classifier. The method improves the generalization ability of affective brain-computer interfaces in practical applications and provides a new approach for putting aBCIs into practice.
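The adversarial objective described above can be illustrated with the classifier-discrepancy term used in maximum-classifier-discrepancy training: the two classifiers are pushed apart on target-domain samples while the feature extractor is pushed to minimize the same quantity. A minimal numpy sketch, assuming softmax emotion classifiers; all names and the toy logits are hypothetical, not the authors' implementation:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def classifier_discrepancy(logits_1, logits_2):
    # mean L1 distance between the two classifiers' class-probability
    # outputs on target-domain samples (the MCD-style discrepancy term)
    p1, p2 = softmax(logits_1), softmax(logits_2)
    return np.abs(p1 - p2).mean()

# toy target-domain logits from two emotion classifiers (hypothetical)
rng = np.random.default_rng(0)
l1 = rng.normal(size=(8, 3))   # 8 samples, 3 emotion classes
l2 = rng.normal(size=(8, 3))
d = classifier_discrepancy(l1, l2)
```

In training, the classifiers maximize this term on unlabeled target data while the feature extractor minimizes it, which drives the learned features toward domain invariance.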
A brain-computer interface (BCI) is a system that uses online brain information to enable communication between the brain and a computer. Although BCI has experienced nearly half a century of development and now enjoys a high degree of public awareness, its application in real-world scenarios is still very limited. This collection invited several BCI teams in China to report their efforts to move BCI from the laboratory to real-world scenarios. This paper summarizes the main contents of the invited papers and looks forward to the future of BCI.
Error self-detection based on error-related potentials (ErrPs) is promising for improving the practicability of brain-computer interface systems, but single-trial recognition of ErrPs is still a challenge that hinders the development of this technology. To assess the performance of different algorithms in decoding ErrPs, this paper tested four kinds of linear discriminant analysis algorithms, two kinds of support vector machines, logistic regression, and discriminative canonical pattern matching (DCPM) on two openly accessible datasets. All algorithms were evaluated by their classification accuracy and by their generalization ability on training sets of different sizes. The results show that DCPM has the best performance. This study provides a comprehensive comparison of different algorithms for ErrP classification and can guide the selection of ErrP algorithms.
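The evaluation protocol described above — classification accuracy as a function of training-set size — can be sketched with a shrinkage-regularized LDA, one of the classifier families compared in the paper. A toy numpy sketch on synthetic two-class features (the data, dimensionality, and shrinkage value are all hypothetical, not the study's setup):

```python
import numpy as np

def lda_fit(X, y, shrink=0.1):
    # two-class LDA with a shrinkage-regularized pooled covariance,
    # the kind of regularization useful for small ErrP training sets
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Xc = np.vstack([X[y == 0] - m0, X[y == 1] - m1])
    S = np.cov(Xc.T)
    S = (1 - shrink) * S + shrink * np.trace(S) / S.shape[0] * np.eye(S.shape[0])
    w = np.linalg.solve(S, m1 - m0)      # discriminant direction
    b = -w @ (m0 + m1) / 2               # threshold at the midpoint
    return w, b

def accuracy(w, b, X, y):
    return ((X @ w + b > 0).astype(int) == y).mean()

# learning curve: accuracy vs. training-set size on toy two-class features
rng = np.random.default_rng(2)
def make(n):  # hypothetical Gaussian features, class 1 shifted by +1
    X = rng.normal(size=(n, 4))
    y = rng.integers(0, 2, n)
    X[y == 1] += 1.0
    return X, y

Xte, yte = make(400)
accs = []
for n_train in (20, 80, 320):
    Xtr, ytr = make(n_train)
    w, b = lda_fit(Xtr, ytr)
    accs.append(accuracy(w, b, Xte, yte))
```

Sweeping `n_train` this way reproduces the kind of generalization comparison the study ran across its eight algorithms.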
Brain-controlled wheelchairs (BCWs) are one of the important applications of brain-computer interface (BCI) technology. Previous research shows that simulated control training is of great significance for the application of BCWs. In order to improve users' BCW control ability and promote the application of BCWs under safe conditions, this paper builds an indoor simulation training system for BCWs based on steady-state visual evoked potentials. The system includes the design and implementation of the visual stimulus paradigm, electroencephalogram acquisition and processing, indoor simulation environment modeling, path planning, and simulated wheelchair control. To test the performance of the system, a training experiment involving three kinds of indoor path-control tasks was designed, and 10 subjects were recruited for the 5-day training experiment. Comparing the results before and after training, the average number of commands in Task 1, Task 2, and Task 3 decreased by 29.5%, 21.4%, and 25.4%, respectively (P < 0.001), and the average number of commands used by the subjects to complete all tasks decreased by 25.4% (P < 0.001). The experimental results show that training with the indoor simulation training system built in this paper can improve subjects' proficiency and efficiency in BCW control to a certain extent, which verifies the practicability of the system and provides an effective auxiliary method to promote the indoor application of BCWs.
Brain–computer interface (BCI) technology enables humans to interact with external devices by decoding their brain signals. Although it has made significant breakthroughs in recent years, there are still many obstacles to its application and extension. The BCI control signals currently in use are generally derived from brain areas involved in primary sensory or motor processing. However, these signals reflect only a limited range of limb movement intentions. Therefore, additional sources of brain signals for controlling BCI systems need to be explored. Brain signals derived from cognitive brain areas are more intuitive and effective, and they can serve as a new approach to expanding the sources of brain signals. This paper reviews the research status of cognitive BCIs based on single and multiple hybrid brain areas, and summarizes their applications in rehabilitation medicine. It is believed that cognitive BCI technology could become a breakthrough for future BCI rehabilitation applications.
Brain-computer interface (BCI) technology has great potential to replace lost upper limb function, so there has been great interest in the development of BCI-controlled robotic arms. However, few studies have attempted to use a noninvasive electroencephalography (EEG)-based BCI to achieve high-level control of a robotic arm. In this paper, a high-level control architecture combining an augmented reality (AR) BCI and computer vision was designed to control a robotic arm in a pick-and-place task. A steady-state visual evoked potential (SSVEP)-based BCI paradigm was adopted to realize the BCI system. Microsoft's HoloLens was used to build the AR environment and served as the visual stimulator for eliciting SSVEPs. The proposed AR-BCI was used to select the objects to be manipulated by the robotic arm, while computer vision provided the location, color and shape information of the objects. According to the outputs of the AR-BCI and computer vision, the robotic arm could autonomously pick up an object and place it at a specific location. Online results from 11 healthy subjects showed that the average classification accuracy of the proposed system was 91.41%. These results verify the feasibility of combining AR, BCI and computer vision to control a robotic arm, and are expected to provide new ideas for innovative robotic arm control approaches.
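SSVEP target selection of the kind used here is commonly decoded with canonical correlation analysis (CCA) against sinusoidal reference signals at each candidate stimulation frequency. A minimal numpy sketch of CCA-based frequency detection, offered as a standard-technique illustration (the abstract does not state which decoder was used); the signal, sampling rate, and frequencies are toy values:

```python
import numpy as np

def max_canonical_corr(X, Y):
    # largest canonical correlation between the column spaces of X and Y,
    # via singular values of the product of their orthonormal bases
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_references(freq, fs, n_samples, n_harmonics=2):
    # sine/cosine reference set at the stimulation frequency and harmonics
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.stack(refs, axis=1)

# toy single-channel "EEG" oscillating at 12 Hz among candidate targets
fs, n = 250, 500
t = np.arange(n) / fs
eeg = (np.sin(2 * np.pi * 12 * t)[:, None]
       + 0.3 * np.random.default_rng(1).normal(size=(n, 1)))
cands = [10, 12, 15]
scores = [max_canonical_corr(eeg, ssvep_references(f, fs, n)) for f in cands]
detected = cands[int(np.argmax(scores))]
```

The detected target is simply the candidate frequency whose references correlate best with the recorded window.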
To address the channel selection problem in the classification of electroencephalogram (EEG) signals, we propose a novel method, Relief-SBS, in this paper. The proposed method first performs EEG channel selection by combining the principles of the Relief and sequential backward selection (SBS) algorithms, and then uses the correlation coefficient to classify the EEG signals. The selected channels that achieve the optimal classification accuracy are considered the optimal channels. Data recorded from motor imagery experiments were analyzed, and the results showed that the channels selected by our method achieved excellent classification accuracy and outperformed other feature selection methods. In addition, the distribution of the optimal channels was shown to be consistent with neurophysiological knowledge, which demonstrates the effectiveness of our method. It can be concluded that the proposed Relief-SBS method provides a new way of performing channel selection.
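The Relief stage of a Relief-SBS pipeline scores each channel by how much it separates each sample's nearest neighbor of the opposite class (nearest miss) from its nearest neighbor of the same class (nearest hit); SBS then drops low-value channels one at a time. A minimal two-class Relief sketch in numpy (a simplification of the published algorithm; the toy data and names are hypothetical):

```python
import numpy as np

def relief_weights(X, y, rng=None):
    # basic two-class Relief: a feature (channel score) gains weight when
    # it differs more from the nearest miss than from the nearest hit
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for i in rng.permutation(n):
        dist = np.abs(X - X[i]).sum(axis=1)   # L1 distance to all samples
        same = (y == y[i])
        hit_d = np.where(same, dist, np.inf)
        hit_d[i] = np.inf                      # exclude the sample itself
        miss_d = np.where(~same, dist, np.inf)
        hit, miss = np.argmin(hit_d), np.argmin(miss_d)
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n

# toy data: only the first "channel" carries class information (hypothetical)
rng = np.random.default_rng(3)
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 6)) * 0.5
X[:, 0] += y * 2.0
weights = relief_weights(X, y, rng)
```

Channels with high Relief weight form the starting set from which a sequential backward search would remove channels while tracking classification accuracy.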
Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of autoregressive-model feature extraction and of traditional principal component analysis in dealing with multichannel signals, this paper presents a multichannel feature extraction method that combines the multivariate autoregressive (MVAR) model with multilinear principal component analysis (MPCA), applied to the recognition of magnetoencephalography (MEG) and electroencephalography (EEG) signals. First, we calculated the MVAR model coefficient matrix of the MEG/EEG signals, and then reduced its dimensionality with MPCA. Finally, we recognized the brain signals with a Bayes classifier. The key innovation of our investigation is the extension of the traditional single-channel feature extraction method to the multichannel case. We carried out experiments on the data sets Ⅳ_Ⅲ and Ⅳ_Ⅰ. The experimental results showed that the proposed method is feasible.
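An order-p MVAR model predicts each channel's current sample from the p previous samples of all channels; the fitted coefficient matrices are the multichannel features that MPCA then compresses. A minimal least-squares MVAR fit in numpy, checked on a simulated two-channel process (toy values, not the paper's data):

```python
import numpy as np

def mvar_fit(X, p):
    # least-squares fit of an order-p MVAR model:
    #   x[t] = A1 x[t-1] + ... + Ap x[t-p] + e[t]
    # X: (n_samples, n_channels); returns a (p, C, C) coefficient array
    n, c = X.shape
    Y = X[p:]                                                  # targets
    Z = np.hstack([X[p - k:n - k] for k in range(1, p + 1)])   # lagged regressors
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    # reorder the stacked solution into one (C, C) matrix per lag
    return A.T.reshape(c, p, c).transpose(1, 0, 2)

# recover a known 2-channel MVAR(1) process from simulated data (toy check)
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.3]])
rng = np.random.default_rng(4)
x = np.zeros((2000, 2))
for t in range(1, 2000):
    x[t] = A_true @ x[t - 1] + rng.normal(scale=0.5, size=2)
A_hat = mvar_fit(x, p=1)
```

The `(p, C, C)` coefficient tensor is exactly the kind of multiway array that multilinear PCA reduces mode by mode, instead of flattening it into one long vector.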
The development and potential applications of brain-computer interface (BCI) technology are closely related to the human brain, so the ethical regulation of BCI has become an important issue of social concern. The existing literature has discussed the ethical norms of BCI technology from the perspectives of non-BCI developers and scientific ethics, while few discussions have been conducted from the perspective of BCI developers. Therefore, there is a great need to study and discuss the ethical norms of BCI technology from the developers' perspective. In this paper, we present user-centered and non-harmful BCI technology ethics, and then discuss them and offer an outlook. This paper argues that human beings can cope with the ethical issues arising from BCI technology, and that as the technology develops, its ethical norms will be improved continuously. It is expected that this paper can provide thoughts and references for the formulation of ethical norms related to BCI technology.
In recent years, hybrid brain-computer interfaces (BCIs) have gained significant attention due to their demonstrated advantages in increasing the number of targets and enhancing the robustness of the systems. However, existing studies usually construct BCI systems using intense auditory stimulation and strong central visual stimulation, which leads to poor user experience and indicates a need to improve system comfort. Studies have shown that peripheral visual stimulation and lower-intensity auditory stimulation can effectively improve user comfort. Therefore, this study used high-frequency peripheral visual stimulation and 40-dB weak auditory stimulation to elicit steady-state visual evoked potential (SSVEP) and auditory steady-state response (ASSR) signals, building a high-comfort hybrid BCI based on weak audio-visual evoked responses. The system coded 40 targets via 20 high-frequency visual stimulation frequencies and two auditory stimulation frequencies, improving the coding efficiency of BCI systems. Results showed that the hybrid system's average classification accuracy was (78.00 ± 12.18)%, and the information transfer rate (ITR) could reach 27.47 bits/min. This study offers new ideas for the design of hybrid BCI paradigms based on imperceptible stimulation.
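ITR figures like the one above are conventionally computed with the Wolpaw formula from the number of targets, the classification accuracy, and the time per selection. A minimal sketch (the abstract does not state the trial duration, so any duration used below is hypothetical):

```python
import numpy as np

def itr_bits_per_min(n_targets, accuracy, trial_seconds):
    # Wolpaw information transfer rate for an N-target BCI:
    # bits/trial = log2 N + P log2 P + (1-P) log2((1-P)/(N-1)),
    # scaled to bits per minute by the trial duration
    p, n = accuracy, n_targets
    bits = np.log2(n)
    if 0 < p < 1:
        bits += p * np.log2(p) + (1 - p) * np.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

# e.g. a 40-target system at 78% accuracy, assuming a hypothetical
# 10-second selection time (not stated in the abstract)
example_itr = itr_bits_per_min(40, 0.78, 10.0)
```

Shorter selection times raise the ITR linearly, which is why coding 40 targets with only 22 stimulation frequencies improves the efficiency of the system.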