Alzheimer’s disease (AD) is a progressive, irreversible neurodegenerative disease. Neuroimaging based on magnetic resonance imaging (MRI) is one of the most intuitive and reliable methods for AD screening and diagnosis. Clinical head MRI examinations generate multimodal image data; to address the problem of multimodal MRI processing and information fusion, this paper proposes a structural and functional MRI feature extraction and fusion method based on generalized convolutional neural networks (gCNN). The method comprises a three-dimensional residual U-shaped network with a hybrid attention mechanism (3D HA-ResUNet) for feature representation and classification of structural MRI, and a U-shaped graph convolutional network (U-GCN) for node feature representation and classification of brain functional networks derived from functional MRI. After the two types of image features are fused, the optimal feature subset is selected by discrete binary particle swarm optimization, and the prediction results are output by a machine learning classifier. Validation on a multimodal dataset from the open-source Alzheimer’s Disease Neuroimaging Initiative (ADNI) database shows that the proposed models achieve superior performance in their respective data domains, and that the gCNN framework, which combines the advantages of the two models, further improves on the single-modal MRI methods, raising classification accuracy and sensitivity by 5.56% and 11.11%, respectively. In conclusion, the gCNN-based multimodal MRI classification method proposed in this paper can provide a technical basis for the auxiliary diagnosis of Alzheimer’s disease.
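The discrete binary particle swarm optimization step used for feature-subset selection can be sketched as follows. This is a minimal illustration only: the swarm size, velocity bounds, and the toy `fitness` function are assumptions, not the paper's settings — in the gCNN framework the fitness would instead score a machine learning classifier on the fused structural/functional feature subset.

```python
import math
import random

random.seed(0)

N_FEATURES = 8    # toy fused-feature dimensionality (illustrative)
N_PARTICLES = 30
N_ITERS = 80
V_MAX = 4.0       # velocity clamp keeps sigmoid probabilities away from 0/1

def fitness(bits):
    """Toy stand-in for classifier performance: features 0-2 are informative;
    every other selected feature costs 0.5 (sparsity penalty)."""
    return sum(bits[:3]) - 0.5 * sum(bits[3:])

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# initialize bit-vector positions and real-valued velocities
pos = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(N_PARTICLES)]
vel = [[random.uniform(-1, 1) for _ in range(N_FEATURES)] for _ in range(N_PARTICLES)]
pbest = [p[:] for p in pos]
pbest_score = [fitness(p) for p in pos]
g = max(range(N_PARTICLES), key=lambda i: pbest_score[i])
gbest, gbest_score = pbest[g][:], pbest_score[g]

w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients (typical values)
for _ in range(N_ITERS):
    for i in range(N_PARTICLES):
        for d in range(N_FEATURES):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            vel[i][d] = max(-V_MAX, min(V_MAX, vel[i][d]))
            # discrete binary PSO: sigmoid of the velocity gives the
            # probability that this feature bit is set to 1
            pos[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
        score = fitness(pos[i])
        if score > pbest_score[i]:
            pbest[i], pbest_score[i] = pos[i][:], score
            if score > gbest_score:
                gbest, gbest_score = pos[i][:], score

print(gbest, gbest_score)
```

The best bit vector `gbest` marks the selected feature subset, which would then be passed to the downstream classifier.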
Identification of molecular subtypes of malignant tumors plays a vital role in the individualized diagnosis, personalized treatment, and prognosis prediction of cancer patients. The continuous improvement of comprehensive tumor genomics databases and ongoing breakthroughs in deep learning have driven further advances in computer-aided tumor classification. Although existing classification methods based on the Gene Expression Omnibus database take the complexity of cancer molecular classification into account, they ignore the internal correlation and synergy of genes. To solve this problem, we propose a multi-layer graph convolutional network model combined with a hierarchical attention network for breast cancer subtype classification. The model constructs graph-embedded datasets of patients’ genes and forms a new end-to-end multi-class model that can effectively recognize molecular subtypes of breast cancer. Extensive experiments demonstrate the good performance of the new model in breast cancer subtype classification: compared with the original graph convolutional network and two mainstream graph neural network classification algorithms, it has remarkable advantages. The accuracy, weighted F1-score, weighted recall, and weighted precision of our model in seven-class classification reached 0.8517, 0.8235, 0.8517, and 0.7936, respectively; in four-class classification, the corresponding results were 0.9285, 0.8949, 0.9285, and 0.8650. In addition, compared with the latest breast cancer subtype classification algorithms, the proposed method also achieved the highest classification accuracy. In summary, the model proposed in this paper may serve as an auxiliary diagnostic technology, providing a reliable option for precise classification of breast cancer subtypes and laying a theoretical foundation for computer-aided tumor classification.
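The graph convolutional layers that this kind of model builds on can be sketched as a forward pass with symmetric adjacency normalization (Kipf–Welling style). The sketch below is illustrative only: the tiny "gene" graph, the feature/weight sizes, and the random weights are assumptions, and the paper's hierarchical attention mechanism is omitted for brevity.

```python
import math
import random

random.seed(1)

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def normalize_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    n = len(A)
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    d = [sum(row) for row in A_hat]
    inv_sqrt = [1.0 / math.sqrt(x) for x in d]
    return [[inv_sqrt[i] * A_hat[i][j] * inv_sqrt[j] for j in range(n)] for i in range(n)]

def relu(M):
    return [[max(0.0, x) for x in row] for row in M]

def softmax_rows(M):
    out = []
    for row in M:
        m = max(row)
        e = [math.exp(x - m) for x in row]
        s = sum(e)
        out.append([x / s for x in e])
    return out

# toy graph: 5 "gene" nodes in a ring, 3 input features, 4 output classes
A = [[0, 1, 0, 0, 1],
     [1, 0, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [1, 0, 0, 1, 0]]
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(5)]
W1 = [[random.gauss(0, 0.5) for _ in range(8)] for _ in range(3)]
W2 = [[random.gauss(0, 0.5) for _ in range(4)] for _ in range(8)]

A_norm = normalize_adjacency(A)
H1 = relu(matmul(matmul(A_norm, X), W1))   # layer 1: aggregate neighbors, transform
logits = matmul(matmul(A_norm, H1), W2)    # layer 2
probs = softmax_rows(logits)               # per-node class probabilities
```

Stacking several such layers, as in the multi-layer model described above, lets each node's representation incorporate progressively larger gene neighborhoods before classification.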
To meet the need for autonomous control by patients with severe limb impairments, this paper designs a nursing bed control system based on a motor imagery brain–computer interface (MI-BCI). To address the low cross-subject decoding performance and the dynamic fluctuation of cognitive state in existing MI-BCI technology, improvements are made to both the neural network structure and the user interaction feedback. Firstly, an optimized dual-branch graph convolution multi-scale neural network integrates dynamic graph convolution and multi-scale convolution; its average classification accuracy is higher than that of existing methods such as the multi-scale attention temporal convolutional network, the Gramian angular field combined with a convolutional long short-term memory hybrid network, and the Transformer-based graph convolutional network. Secondly, a dual visual feedback mechanism is constructed, in which electroencephalogram (EEG) topographic map feedback improves the discriminability of spatial patterns and attention state feedback enhances the temporal stability of the signals; compared with single EEG topographic map feedback and a non-feedback system, the average classification accuracy is also greatly improved. Finally, in a four-class nursing bed control task, the average control accuracy of the system is 90.84%, and the information transfer rate is 84.78 bits/min. In summary, this paper provides a reliable technical solution for improving the autonomous interaction ability of patients with severe limb impairments, with important theoretical significance and application value.
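The reported information transfer rate can be related to the classification accuracy by the standard Wolpaw formula for an N-class BCI. The sketch below plugs in the abstract's four-class accuracy of 90.84%; the per-selection time `trial_s` is an assumption for illustration, not a value stated in the abstract.

```python
import math

def wolpaw_bits_per_selection(n_classes, accuracy):
    """Wolpaw ITR: log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)) bits/selection."""
    p = accuracy
    bits = math.log2(n_classes) + p * math.log2(p)
    if p < 1.0:  # the error term vanishes at perfect accuracy
        bits += (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits

bits = wolpaw_bits_per_selection(4, 0.9084)   # ~1.413 bits per selection
trial_s = 1.0                                 # assumed seconds per selection
itr_bits_per_min = bits * 60.0 / trial_s
print(round(bits, 3), round(itr_bits_per_min, 2))
```

Note that with `trial_s = 1.0` this formula yields approximately 84.78 bits/min, matching the reported rate, which would correspond to roughly one command selection per second under the Wolpaw model.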