Emotion is a crucial physiological attribute in humans, and emotion recognition technology can significantly assist individuals in self-awareness. To address the challenge of large inter-subject variability in electroencephalogram (EEG) signals, we introduce a novel mechanism into the traditional whale optimization algorithm (WOA) to accelerate its optimization and convergence. The improved whale optimization algorithm (IWOA) was then applied to search for the optimal training solution of the extreme learning machine (ELM) model, encompassing the best feature set, training parameters, and EEG channels. By testing 24 common EEG emotion features, we found that the optimal features exhibited a certain degree of subject specificity while also showing some commonality across subjects. The proposed method achieved an average recognition accuracy of 92.19% in EEG emotion recognition, substantially reducing the manual tuning workload and offering higher accuracy with shorter training times than the control method. It outperformed existing methods and introduces a novel perspective for decoding EEG signals, thereby contributing to the field of emotion research based on EEG signals.
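The abstract does not specify the paper's novel acceleration mechanism, so as a point of reference the following is a minimal sketch of the canonical WOA that the IWOA builds on: a population of candidate solutions ("whales") alternates between encircling the current best solution, exploring toward a random whale, and a bubble-net spiral update, with the control parameter `a` decaying linearly. The objective, dimensionality, and bounds here are illustrative placeholders, not the paper's ELM fitness function.

```python
import numpy as np

def woa(objective, dim, bounds, n_whales=20, n_iter=200, seed=0):
    """Minimise `objective` with the standard whale optimization algorithm.

    Illustrative baseline only: the improved mechanism described in the
    paper is not given in the abstract, so this is the canonical WOA.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_whales, dim))   # whale positions
    fitness = np.array([objective(x) for x in X])
    best = fitness.argmin()
    best_pos, best_fit = X[best].copy(), fitness[best]

    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter                  # decays linearly 2 -> 0
        for i in range(n_whales):
            A = 2 * a * rng.random() - a
            C = 2 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1:                      # exploit: encircle the best whale
                    X[i] = best_pos - A * np.abs(C * best_pos - X[i])
                else:                               # explore: move toward a random whale
                    X_rand = X[rng.integers(n_whales)]
                    X[i] = X_rand - A * np.abs(C * X_rand - X[i])
            else:                                   # bubble-net spiral around the best
                l = rng.uniform(-1, 1)
                X[i] = (np.abs(best_pos - X[i])
                        * np.exp(l) * np.cos(2 * np.pi * l) + best_pos)
            X[i] = np.clip(X[i], lo, hi)
            f = objective(X[i])
            if f < best_fit:
                best_pos, best_fit = X[i].copy(), f
    return best_pos, best_fit

# Toy usage: minimise the sphere function (a stand-in for ELM validation error)
pos, fit = woa(lambda x: float(np.sum(x * x)), dim=5, bounds=(-10.0, 10.0))
```

In the paper's setting, the position vector would instead encode the feature subset, ELM training parameters, and EEG channel selection, and `objective` would return the model's validation error.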
Emotion recognition refers to the process of determining and identifying an individual's current emotional state by analyzing various signals such as voice, facial expressions, and physiological indicators. Using electroencephalogram (EEG) signals and virtual reality (VR) technology for emotion recognition research helps to better understand human emotional changes, enabling applications in areas such as psychological therapy, education, and training to enhance people's quality of life. However, there is a lack of comprehensive review literature summarizing the combined research on EEG signals and VR environments for emotion recognition. Therefore, this paper summarizes and synthesizes relevant research from the past five years. First, it introduces the relevant theories of VR and EEG-based emotion recognition. Second, it analyzes emotion induction, feature extraction, and classification methods for EEG-based emotion recognition within VR environments. The article concludes by summarizing the application directions of this research and providing an outlook on future development trends, aiming to serve as a reference for researchers in related fields.
To accurately capture and effectively integrate the spatiotemporal features of electroencephalogram (EEG) signals and thereby improve the accuracy of EEG-based emotion recognition, this paper proposes a new method combining independent component analysis and recurrence plots with an improved EfficientNet version 2 (EfficientNetV2). First, independent component analysis is used to extract independent components containing spatial information from key channels of the EEG signals. These components are then converted into two-dimensional images using recurrence plots to better extract emotional features from the temporal information. Finally, the two-dimensional images are input into an improved EfficientNetV2, which incorporates a global attention mechanism and a triplet attention mechanism, and the emotion classification is output by the fully connected layer. To validate the effectiveness of the proposed method, this study conducts comparative experiments, channel selection experiments, and ablation experiments on the Shanghai Jiao Tong University Emotion Electroencephalogram Dataset (SEED). The results demonstrate that the average recognition accuracy of the method is 96.77%, significantly surpassing existing methods and offering a novel perspective for research on EEG-based emotion recognition.
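The recurrence-plot step above can be sketched as follows: a 1-D component is time-delay embedded into phase-space vectors, pairwise distances between those vectors are computed, and thresholding the distance matrix yields a binary image whose texture encodes the signal's temporal dynamics. The embedding dimension `m`, delay `tau`, and threshold heuristic below are illustrative defaults; the abstract does not state the parameters used in the paper.

```python
import numpy as np

def recurrence_plot(signal, m=3, tau=1, eps=None):
    """Turn a 1-D signal (e.g. one independent component) into a binary
    recurrence-plot image. Parameters here are illustrative assumptions."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal) - (m - 1) * tau          # number of embedded vectors
    # Time-delay embedding: each row is one phase-space vector
    vectors = np.array([signal[i : i + m * tau : tau] for i in range(n)])
    # Pairwise Euclidean distances between all phase-space vectors
    dist = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * dist.max()               # simple fixed-fraction threshold
    return (dist <= eps).astype(np.uint8)    # 1 where the trajectory recurs

# Toy usage on a periodic signal; a periodic input produces diagonal stripes
rp = recurrence_plot(np.sin(np.linspace(0, 8 * np.pi, 200)))
```

In the proposed pipeline, such images would then be fed to the improved EfficientNetV2 classifier.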