Speech imagery is an emerging brain-computer interface (BCI) paradigm with the potential to provide effective communication for individuals with speech impairments. This study designed a Chinese speech imagery paradigm using three clinically relevant words—“Help me,” “Sit up,” and “Turn over”—and collected electroencephalography (EEG) data from 15 healthy subjects. Based on these data, a Channel Attention Multi-Scale Convolutional Neural Network (CAM-Net) decoding algorithm was proposed. The network combined multi-scale temporal convolutions with asymmetric spatial convolutions to extract multidimensional EEG features, and incorporated a channel attention mechanism together with a bidirectional long short-term memory network to weight channels and capture temporal dependencies. Experimental results showed that CAM-Net achieved a classification accuracy of 48.54% in the three-class task, outperforming baseline models such as EEGNet and Deep ConvNet, and reached a peak accuracy of 64.17% in the binary classification between “Sit up” and “Turn over.” This work provides a promising approach for future Chinese speech imagery BCI research and applications.
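To make the described architecture concrete, the following is a minimal PyTorch sketch of a CAM-Net-style model: parallel temporal convolutions at three kernel lengths, an asymmetric spatial convolution over the electrode dimension, squeeze-and-excitation-style channel attention, and a bidirectional LSTM feeding a three-class output. The layer sizes, kernel lengths, channel count, and attention placement are illustrative assumptions, not the authors' published configuration.

```python
# Illustrative sketch only: layer sizes, kernel lengths, and attention placement
# are assumptions, not the authors' exact CAM-Net configuration.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style weighting over feature channels."""
    def __init__(self, n_feats, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_feats, n_feats // reduction), nn.ReLU(),
            nn.Linear(n_feats // reduction, n_feats), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, 1, T)
        w = self.fc(x.mean(dim=(2, 3)))         # (B, C) attention weights
        return x * w[:, :, None, None]

class CAMNetSketch(nn.Module):
    def __init__(self, n_chans=64, n_classes=3, f_temp=8, f_spat=16):
        super().__init__()
        # Multi-scale temporal convolutions (three parallel kernel lengths).
        self.temporal = nn.ModuleList(
            nn.Conv2d(1, f_temp, (1, k), padding=(0, k // 2)) for k in (15, 31, 63)
        )
        # Asymmetric spatial convolution collapsing the electrode dimension.
        self.spatial = nn.Conv2d(3 * f_temp, f_spat, (n_chans, 1))
        self.attn = ChannelAttention(f_spat)
        self.pool = nn.AvgPool2d((1, 8))
        self.lstm = nn.LSTM(f_spat, 32, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (B, 1, n_chans, n_samples)
        x = torch.cat([conv(x) for conv in self.temporal], dim=1)
        x = torch.relu(self.spatial(x))         # (B, f_spat, 1, T)
        x = self.pool(self.attn(x))
        x = x.squeeze(2).permute(0, 2, 1)       # (B, T', f_spat) for the BiLSTM
        out, _ = self.lstm(x)
        return self.classifier(out[:, -1])      # logits for the three imagined words

model = CAMNetSketch()
logits = model(torch.randn(4, 1, 64, 1000))     # dummy batch: 4 trials, 64 ch, 1000 samples
```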
Post-stroke aphasia is associated with a significantly elevated risk of depression, yet the underlying mechanisms remain unclear. This study recorded 64-channel electroencephalography (EEG) data and depression scale scores from 12 aphasic patients with depression, 8 aphasic patients without depression, and 12 healthy controls during the resting state and an emotional Stroop task. Spectral and microstate analyses were conducted to examine brain activity patterns across conditions. Results showed that depression scores significantly and negatively predicted the occurrence of microstate class C, and positively predicted the transition probability from microstate class A to class B. Furthermore, aphasic patients with depression exhibited increased alpha-band activation in the frontal region. These findings suggest distinct neural features in aphasic patients with depression and offer new insights into the mechanisms contributing to their heightened vulnerability to depression.
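As an illustration of the spectral side of this analysis, the sketch below computes relative alpha-band power per channel with a Welch periodogram and averages it over a frontal channel subset. The sampling rate, band edges, normalisation, and frontal channel indices are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of a relative alpha-power computation; parameters are assumptions.
import numpy as np
from scipy.signal import welch

def alpha_band_power(eeg, fs=500, band=(8.0, 13.0)):
    """Relative alpha power per channel from a (n_channels, n_samples) array."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    alpha = (freqs >= band[0]) & (freqs <= band[1])
    broad = (freqs >= 1.0) & (freqs <= 45.0)
    return psd[:, alpha].sum(axis=1) / psd[:, broad].sum(axis=1)

# Example: mean frontal alpha for one subject (frontal indices are hypothetical).
rest = np.random.randn(64, 60 * 500)      # 60 s of 64-channel resting-state EEG
frontal_idx = [0, 1, 2, 3]                # e.g. positions of Fp1, Fp2, F3, F4
frontal_alpha = alpha_band_power(rest)[frontal_idx].mean()
```

Group-level comparisons (e.g., depressed versus non-depressed aphasic patients) would then be run on these per-subject frontal alpha values.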
This study investigates a brain-computer interface (BCI) system that combines an augmented reality (AR) environment with steady-state visual evoked potentials (SSVEP). The system is designed to let users select real-world objects by visual gaze in real-life scenarios. By integrating object detection with AR rendering, the system overlaid real objects with visual enhancements, providing users with visual stimuli that induced corresponding brain signals. SSVEP decoding was then used to interpret these signals and identify the objects that users focused on. Additionally, an adaptive, dynamic-time-window filter bank canonical correlation analysis was employed to decode the subjects’ brain signals rapidly. Experimental results indicated that the system could effectively recognize SSVEP signals, achieving an average accuracy of 90.6% in visual target identification. This system extends the application of SSVEP signals to real-life scenarios, demonstrating feasibility and efficacy in assisting individuals with mobility impairments and physical disabilities in object selection tasks.
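For reference, the sketch below shows standard filter bank canonical correlation analysis (FBCCA) scoring for SSVEP target identification: the EEG is band-pass filtered into sub-bands, each sub-band is correlated with sine-cosine reference templates at every candidate stimulation frequency via CCA, and the weighted squared correlations are summed. The sub-band edges, weights, sampling rate, and stimulation frequencies are illustrative assumptions, and the adaptive dynamic time-window logic described in the abstract is not shown.

```python
# Sketch of conventional FBCCA scoring; parameters are illustrative assumptions,
# and the paper's adaptive dynamic time-window extension is omitted.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

def reference(freq, fs, n_samples, n_harmonics=3):
    """Sine-cosine reference template for one stimulation frequency."""
    t = np.arange(n_samples) / fs
    return np.column_stack(
        [f(2 * np.pi * (h + 1) * freq * t)
         for h in range(n_harmonics) for f in (np.sin, np.cos)]
    )

def fbcca_scores(eeg, stim_freqs, fs=250, n_bands=5):
    """eeg: (n_channels, n_samples); returns one score per candidate frequency."""
    n_samples = eeg.shape[1]
    weights = [(b + 1) ** -1.25 + 0.25 for b in range(n_bands)]   # common FBCCA weighting
    scores = np.zeros(len(stim_freqs))
    for b in range(n_bands):
        lo = 8 * (b + 1)                                          # sub-band lower edge (Hz)
        bb, aa = butter(4, [lo, 90], btype="bandpass", fs=fs)
        sub = filtfilt(bb, aa, eeg, axis=1)
        for k, f in enumerate(stim_freqs):
            ref = reference(f, fs, n_samples)
            u, v = CCA(n_components=1).fit(sub.T, ref).transform(sub.T, ref)
            r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
            scores[k] += weights[b] * r ** 2
    return scores

# The recognised target is the candidate frequency with the highest weighted score.
trial = np.random.randn(8, 250 * 2)                               # 2 s of 8-channel EEG
target = np.argmax(fbcca_scores(trial, stim_freqs=[8.0, 10.0, 12.0, 15.0]))
```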