Electroencephalogram (EEG) classification for brain-computer interfaces (BCI) is a new way of realizing human-computer interaction. In this paper, the application of a semi-supervised sparse representation classifier algorithm based on help training to EEG classification for BCI is reported. Firstly, the correlation information of the unlabeled data is obtained by a sparse representation classifier, and the data with high correlation are selected. Secondly, the boundary information of the selected data is produced by a discriminative classifier, namely the Fisher linear classifier. The final unlabeled data with high confidence are selected by a criterion combining distance and direction information. We applied this novel method to three benchmark datasets: BCI I, BCI II_IV and USPS. The classification rates were 97%, 82% and 84.7%, respectively, and the fastest runtime was only about 0.2 s. Both the classification rate and the efficiency of the novel method are better than those of S3VM and SVM, proving that the proposed method is effective.
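As a rough illustration of the first step, the sketch below scores an unlabeled sample by class-wise reconstruction residual. A per-class least-squares fit stands in for the full l1-minimized sparse coding of a true sparse representation classifier, and all data and parameters are synthetic, not the paper's:

```python
import numpy as np

def src_confidence(train_X, train_y, query):
    """Score an unlabeled sample by class-wise reconstruction residual.
    Each class's training samples form a dictionary; the query is
    assigned to the class whose dictionary reconstructs it best, and
    the gap between the two smallest residuals serves as confidence."""
    residuals = {}
    for c in np.unique(train_y):
        D = train_X[train_y == c].T        # columns: samples of class c
        coef, *_ = np.linalg.lstsq(D, query, rcond=None)
        residuals[c] = np.linalg.norm(query - D @ coef)
    pred = min(residuals, key=residuals.get)
    best, second = sorted(residuals.values())[:2]
    return pred, second - best             # larger gap = higher confidence

rng = np.random.default_rng(0)
train_X = np.vstack([rng.normal(0, 1, (10, 50)),   # class 0
                     rng.normal(4, 1, (10, 50))])  # class 1
train_y = np.repeat([0, 1], 10)
label, conf = src_confidence(train_X, train_y, rng.normal(4, 1, 50))
print(label)  # the query is drawn from class 1
```

Unlabeled samples with a large residual gap would then be handed to the discriminative classifier for the second selection stage.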
Objective: To propose a self-supervised learning method for vascular segmentation in computed tomography angiography (CTA) images by integrating feature reconstruction with masked autoencoding. Methods: A 3D masked autoencoder-based framework was developed, wherein a 3D histogram of oriented gradients (HOG) was utilized for multi-scale vascular feature extraction. During pre-training, random masking was applied to local patches of the CTA images, and the model was trained to jointly reconstruct the original voxels and the HOG features of the masked regions. The pre-trained model was then fine-tuned on two annotated datasets for clinical-level vessel segmentation. Results: Evaluated on two independent datasets (30 labeled CTA images each), the method achieved segmentation accuracy superior to the supervised nnU-Net baseline, with Dice similarity coefficients of 91.2% vs. 89.7% (aorta) and 84.8% vs. 83.2% (coronary arteries). Conclusion: The proposed self-supervised model significantly reduces manual annotation costs without compromising segmentation precision, showing substantial potential for enhancing clinical workflows in vascular disease management.
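The two pre-training ingredients described above, random patch masking and a HOG-style reconstruction target, can be sketched roughly as follows. The patch size, masking ratio, and single-orientation histogram are illustrative simplifications, not the paper's actual 3D HOG features:

```python
import numpy as np

def mask_patches(vol, patch=4, ratio=0.6, seed=0):
    """Zero out a random fraction of non-overlapping 3D patches.
    Returns the masked volume and the boolean patch-grid mask
    (True = masked). Patch size and ratio are illustrative."""
    rng = np.random.default_rng(seed)
    grid = np.array(vol.shape) // patch
    mask = rng.random(grid) < ratio
    out = vol.copy()
    for z, y, x in np.argwhere(mask) * patch:
        out[z:z+patch, y:y+patch, x:x+patch] = 0.0
    return out, mask

def hog3d_target(vol, bins=9):
    """Crude 3D HOG-style target: a histogram of gradient azimuth
    angles weighted by gradient magnitude (a full 3D HOG would bin
    elevation too and pool over local cells)."""
    gz, gy, gx = np.gradient(vol.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    az = np.arctan2(gy, gx)
    hist, _ = np.histogram(az, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

vol = np.random.default_rng(1).random((16, 16, 16))
masked, mask = mask_patches(vol)          # input to the encoder
target = hog3d_target(vol)                # one of the reconstruction targets
```

During pre-training, the model would receive `masked` and be penalized for errors against both the original voxels and targets like `target` within the masked regions.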
The detection and tracking of minimally invasive surgical tools based on deep learning is currently a research hotspot in minimally invasive surgery. This paper first expounds the relevant technical background of minimally invasive surgical tool detection and tracking, highlighting the advantages of deep learning-based algorithms. It then summarizes algorithms for detecting and tracking surgical tools based on fully supervised deep neural networks, as well as emerging algorithms based on weakly supervised deep neural networks. Several typical algorithm frameworks and their flow charts based on deep convolutional and recurrent neural networks are summarized in particular, so as to enable researchers in related fields to understand the current research progress more systematically and to provide a reference for minimally invasive surgeons selecting navigation technology. Finally, this paper suggests general directions for further research on deep learning-based minimally invasive surgical tool detection and tracking.
Hospital accreditation involves a wide range of aspects, has a significant impact, and receives widespread attention from multiple parties, making it a topic worthy of in-depth research. This article reviews research on hospital accreditation both in China and internationally, focusing on key issues such as whether accreditation can promote improvements in medical quality and whether third-party evaluation should be introduced. The aim is to reveal the shortcomings of research on hospital accreditation in China, give direction to subsequent research, and provide a reference for improving China's hospital accreditation system.
In the early classification of Alzheimer's disease (AD), conventional linear feature extraction algorithms have difficulty extracting the most discriminative information from high-dimensional features to effectively classify unlabeled samples. Therefore, to reduce redundant features and improve recognition accuracy, this paper used the supervised locally linear embedding (SLLE) algorithm to transform multivariate data of regional brain volume and cortical thickness into a locally linear space with fewer dimensions. A total of 412 individuals were collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI), including stable mild cognitive impairment (sMCI, n = 93), amnestic mild cognitive impairment (aMCI, n = 96), AD (n = 86) and cognitively normal controls (CN, n = 137). The SLLE algorithm used in this paper finds the nearest neighbors of each sample point using a distance measure augmented with a correction term, obtains the locally linear reconstruction weight matrix from these neighbors, and then computes the low-dimensional mapping of the high-dimensional data. To verify the validity of SLLE in the classification task, feature extraction algorithms including principal component analysis (PCA), neighborhood min-max projection (NMMP), locally linear embedding (LLE) and SLLE were each combined with a support vector machine (SVM) classifier to obtain the classification accuracy for CN vs. sMCI, CN vs. aMCI, CN vs. AD, sMCI vs. aMCI, sMCI vs. AD, and aMCI vs. AD. Experimental results showed that our method improved the classification of sMCI and aMCI (accuracy/sensitivity/specificity: 65.16%/63.33%/67.62%) compared with the combination of LLE and SVM (accuracy/sensitivity/specificity: 64.08%/66.14%/62.77%) and SVM alone (accuracy/sensitivity/specificity: 57.25%/56.28%/58.08%). Specifically, the accuracy of the combination of SLLE and SVM is 1.08% higher than that of LLE and SVM, and 7.91% higher than that of SVM alone. Thus, the combination of SLLE and SVM is more effective in the early diagnosis of Alzheimer's disease.
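A minimal sketch of the SLLE idea described above: distances between samples of different classes are inflated by a correction term before the neighbor search, after which standard LLE reconstruction weights and embedding are computed. The hyperparameters and toy data are illustrative, not the paper's:

```python
import numpy as np

def slle(X, y, k=5, d=2, alpha=0.3, reg=1e-3):
    """Supervised LLE sketch: inflate inter-class distances (the
    'distance correction term'), find k nearest neighbors, solve for
    locally linear reconstruction weights, then embed via the bottom
    eigenvectors of (I - W)^T (I - W)."""
    n = len(X)
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    D += alpha * D.max() * (y[:, None] != y[None])   # supervised correction
    np.fill_diagonal(D, np.inf)
    W = np.zeros((n, n))
    for i in range(n):
        nb = np.argsort(D[i])[:k]
        Z = X[nb] - X[i]                              # local coordinates
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(k)            # regularized Gram matrix
        w = np.linalg.solve(C, np.ones(k))
        W[i, nb] = w / w.sum()                        # reconstruction weights
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]                           # skip the trivial eigenvector

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(3, 1, (30, 10))])
y = np.repeat([0, 1], 30)
Y = slle(X, y)                                        # 2-D embedding
```

The low-dimensional coordinates `Y` would then be fed to the SVM classifier, as in the paper's pipeline.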
Recently, deep learning has achieved impressive results in medical image tasks. However, it usually requires large-scale annotated data, and medical images are expensive to annotate, so learning efficiently from limited annotated data is a challenge. The two commonly used approaches, transfer learning and self-supervised learning, have been little studied for multimodal medical images, so this study proposes a contrastive learning method for multimodal medical images. The method takes images of different modalities from the same patient as positive samples, which effectively increases the number of positive samples during training and helps the model fully learn the similarities and differences of lesions across modalities, thus improving the model's understanding of medical images and its diagnostic accuracy. Because commonly used data augmentation methods are not suitable for multimodal images, this paper also proposes a domain-adaptive denormalization method that transforms source-domain images with the help of statistical information from the target domain. The method is validated on two different multimodal medical image classification tasks: in the microvascular infiltration recognition task, it achieves an accuracy of (74.79 ± 0.74)% and an F1 score of (78.37 ± 1.94)%, improvements over other conventional learning methods; in the brain tumor pathology grading task, it also achieves significant improvements. The results show that the method performs well on multimodal medical images and can provide a reference solution for pre-training on multimodal medical images.
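The cross-modal positive-pair idea can be sketched with an InfoNCE-style contrastive loss, assuming paired embeddings where row i of each modality comes from the same patient. This illustrates the pairing scheme only, not the paper's exact loss or architecture:

```python
import numpy as np

def cross_modal_nce(za, zb, tau=0.1):
    """InfoNCE-style loss where row i of za (modality A) and row i of
    zb (modality B) come from the same patient and form the positive
    pair; every other row acts as a negative."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau                     # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(np.diag(p) + 1e-12).mean()

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
aligned = cross_modal_nce(base + 0.01 * rng.normal(size=(8, 16)), base)
random_ = cross_modal_nce(rng.normal(size=(8, 16)), base)
print(aligned < random_)  # matched patient pairs give a lower loss
```

Minimizing this loss pulls the embeddings of a patient's different modalities together while pushing apart those of different patients.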
Blood flow velocity inversion based on the magnetoelectric effect is helpful for developing daily monitoring of vascular stenosis, but the accuracy of velocity inversion and the imaging resolution still need to be improved. Therefore, a convolutional neural network (CNN)-based inversion imaging method for intravascular blood flow velocity is proposed in this paper. Firstly, an unsupervised-learning CNN is constructed to extract weight-matrix representation information to preprocess the voltage data. The preprocessing results are then input into a supervised-learning CNN, which outputs the blood flow velocity through a nonlinear mapping. Finally, angiographic images are obtained. The validity of the proposed method is verified on a constructed dataset. The results show that the correlation coefficients of the blood velocity inversion in the vessel-location and stenosis tests are 0.8844 and 0.9721, respectively. This research shows that the proposed method can effectively reduce information loss during the inversion process and improve inversion accuracy and imaging resolution, and it is expected to assist clinical diagnosis.
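The two-stage pipeline (unsupervised representation learning, then supervised mapping to velocity) can be illustrated on synthetic data. Here PCA and least squares stand in for the paper's two CNNs; the data and dimensions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(200, 32))                  # simulated voltage measurements
w = rng.normal(size=32)
speed = V @ w + 0.05 * rng.normal(size=200)     # synthetic velocity targets

# Stage 1 (unsupervised): learn a compact representation of the voltage
# data; PCA via SVD stands in for the unsupervised CNN encoder.
Vc = V - V.mean(axis=0)
_, _, components = np.linalg.svd(Vc, full_matrices=False)
Z = Vc @ components[:16].T                      # 16-dimensional features

# Stage 2 (supervised): map the features to velocity; least squares
# stands in for the supervised CNN's nonlinear mapping.
coef, *_ = np.linalg.lstsq(Z, speed - speed.mean(), rcond=None)
pred = Z @ coef + speed.mean()
r = np.corrcoef(pred, speed)[0, 1]              # correlation coefficient metric
```

The correlation coefficient `r` mirrors the evaluation metric the paper reports for the vessel-location and stenosis tests.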
Temporomandibular joint disorder (TMD) is a common oral and maxillofacial disease that is difficult to detect because its early symptoms are subtle. In this study, a TMD intelligent diagnosis system implemented on edge computing devices was proposed, which enables rapid detection of TMD in clinical diagnosis and facilitates early-stage clinical intervention. The proposed system first automatically segments the important components of the temporomandibular joint, then quantitatively measures the joint gap area, and finally predicts the presence of TMD from the measurements. For segmentation, this study employs semi-supervised learning to achieve accurate segmentation of the temporomandibular joint, with an average Dice coefficient (DC) of 0.846. A 3D region extraction algorithm for the temporomandibular joint gap area is also developed, on the basis of which an automatic TMD diagnosis model is proposed, with an accuracy of 83.87%. In summary, the intelligent TMD diagnosis system developed in this paper can be deployed on edge computing devices within a local area network and achieves rapid detection and intelligent diagnosis of TMD with privacy guarantees.
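The Dice coefficient (DC) used to report segmentation quality above is twice the overlap of the predicted and reference masks divided by their total size; a minimal sketch on hypothetical masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

a = np.zeros((8, 8), int); a[2:6, 2:6] = 1     # hypothetical prediction
b = np.zeros((8, 8), int); b[3:7, 3:7] = 1     # hypothetical ground truth
print(dice(a, b))  # 2 * 9 / (16 + 16) = 0.5625
```

A DC of 1.0 means the segmentation matches the reference exactly; 0.846 indicates strong but imperfect overlap.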
Image registration is of great clinical importance in the computer-aided diagnosis and surgical planning of liver diseases. Deep learning-based registration methods endow liver computed tomography (CT) image registration with real-time performance and high accuracy. However, when registering images with large displacement and deformation, existing methods face the challenge of texture variation in the registered image, which leads to erroneous subsequent image processing and clinical diagnosis. To this end, a novel unsupervised registration method based on texture filtering is proposed in this paper for liver CT image registration. Firstly, a texture filtering algorithm based on L0 gradient minimization removes the texture information of the liver surface in the CT images, so that the registration process refers only to the spatial structure information of the two images, thus solving the problem of texture variation. Then, a cascaded network is adopted to register images with large displacement and large deformation, progressively aligning the moving image with the fixed one in spatial structure. In addition, a new registration metric, the histogram correlation coefficient, is proposed to measure the degree of texture variation after registration. Experimental results show that the proposed method achieves high registration accuracy, effectively solves the problem of texture variation in the cascaded network, and improves registration performance in terms of spatial structure correspondence and anti-folding capability. Therefore, our method helps to improve the performance of medical image registration and allows registration to be applied safely and reliably in the computer-aided diagnosis and surgical planning of liver diseases.
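The histogram correlation coefficient is not fully specified in the abstract; the sketch below assumes a Pearson correlation between the intensity histograms of the two images, with a binning scheme that is an illustrative choice:

```python
import numpy as np

def hist_corr(img_a, img_b, bins=64):
    """Pearson correlation between the intensity histograms of two
    images, computed over a shared intensity range. The bin count and
    range handling are illustrative assumptions."""
    lo = min(img_a.min(), img_b.min())
    hi = max(img_a.max(), img_b.max())
    ha, _ = np.histogram(img_a, bins=bins, range=(lo, hi))
    hb, _ = np.histogram(img_b, bins=bins, range=(lo, hi))
    return np.corrcoef(ha, hb)[0, 1]

rng = np.random.default_rng(0)
img = rng.normal(0, 1, (64, 64))
same = hist_corr(img, img + 0.01 * rng.normal(size=(64, 64)))
diff = hist_corr(img, rng.uniform(-3, 3, (64, 64)))
print(same > diff)  # preserved texture keeps the histograms correlated
```

Under this reading, a registered image whose texture is preserved should score a high histogram correlation with the original moving image, while texture-distorting registration would lower the score.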
O6-carboxymethyl guanine (O6-CMG) is a highly mutagenic alkylation product of DNA that causes gastrointestinal cancer in organisms. Existing studies have localized it using a mutant Mycobacterium smegmatis porin A (MspA) nanopore assisted by Phi29 DNA polymerase. Recently, machine learning has been widely used in the analysis of nanopore sequencing data, but it usually requires a large number of data labels, which imposes an extra burden on researchers and greatly limits its practicality. Accordingly, this paper proposes a nano-Unsupervised-Deep-Learning method (nano-UDL) based on an unsupervised clustering algorithm to identify methylation events in nanopore data automatically. Specifically, nano-UDL first uses a deep autoencoder to extract features from the nanopore dataset and then applies the MeanShift clustering algorithm to classify the data. In addition, nano-UDL can extract the features optimal for clustering by jointly optimizing the clustering loss and the reconstruction loss. Experimental results demonstrate that nano-UDL achieves relatively high recognition accuracy on the O6-CMG dataset and can accurately identify all sequence segments containing O6-CMG. To further verify the robustness of nano-UDL, hyperparameter sensitivity and ablation experiments were carried out. Using machine learning to analyze nanopore data can effectively reduce the extra cost of manual data analysis, which is significant for many biological studies, including genome sequencing.
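The clustering stage can be sketched with a minimal flat-kernel MeanShift on toy 2D data; in nano-UDL this step would run on the autoencoder's latent features rather than raw signals, and the bandwidth and data here are illustrative:

```python
import numpy as np

def mean_shift(X, bandwidth=1.0, iters=50):
    """Minimal flat-kernel MeanShift: repeatedly move each point to the
    mean of the data points within `bandwidth`, then merge modes that
    end up closer than bandwidth / 2 into one cluster."""
    pts = X.copy()
    for _ in range(iters):
        for i in range(len(pts)):
            nb = X[np.linalg.norm(X - pts[i], axis=1) < bandwidth]
            pts[i] = nb.mean(axis=0)
    labels, modes = -np.ones(len(X), dtype=int), []
    for i, p in enumerate(pts):
        for j, m in enumerate(modes):
            if np.linalg.norm(p - m) < bandwidth / 2:
                labels[i] = j
                break
        else:
            modes.append(p)
            labels[i] = len(modes) - 1
    return labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.2, (20, 2)),    # stand-in latent features
               rng.normal(3, 0.2, (20, 2))])
labels = mean_shift(X)
print(len(set(labels)))  # two well-separated blobs -> 2 clusters
```

Unlike k-means, MeanShift does not require the number of clusters in advance, which suits the unsupervised setting where the number of methylation states is unknown.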