To address the variability in the shape, location, and size of brain gliomas, a dual-channel three-dimensional (3D) densely connected network is proposed to automatically segment brain glioma tumors on magnetic resonance images. The method is based on a 3D convolutional neural network framework, and two convolution kernel sizes are adopted in each channel to extract multi-scale features under different receptive fields. Two densely connected blocks are then constructed in each pathway for feature learning and transmission. Finally, the concatenated features of the two pathways are fed to a classification layer that classifies the voxels of the central region, segmenting the brain tumor automatically. The model is trained and tested on an open brain tumor segmentation challenge dataset and compared with other models. Experimental results show that the algorithm segments different tumor lesions more accurately, which has important application value in the clinical diagnosis and treatment of brain tumor diseases.
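The dual-pathway, multi-scale idea can be illustrated with a minimal NumPy sketch: each pathway filters the volume with a different neighborhood size, and the two outputs are stacked as channels. Fixed mean filters stand in for the learned convolution kernels; the kernel sizes (3 and 5) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def local_mean(vol, k):
    """Naive k x k x k mean filter over a 3D volume (zero-padded, 'same' size).
    A stand-in for one pathway's convolution; in the actual network the
    kernel weights are learned, not fixed averaging weights."""
    pad = k // 2
    padded = np.pad(vol, pad)
    out = np.empty(vol.shape, dtype=float)
    for idx in np.ndindex(vol.shape):
        window = padded[tuple(slice(i, i + k) for i in idx)]
        out[idx] = window.mean()
    return out

def dual_scale_features(vol, k_small=3, k_large=5):
    # Two receptive-field scales, stacked as a 2-channel feature volume,
    # mirroring the concatenation of the two pathways' outputs.
    return np.stack([local_mean(vol, k_small), local_mean(vol, k_large)])
```

The stacked output is what a classification layer would then consume to label the central-region voxels.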
With the rapid improvement of the perception and computing capacity of mobile devices such as smartphones, human activity recognition using mobile devices as the carrier has become a new research hotspot. The inertial information collected by the acceleration sensor in a smart mobile device is used for human activity recognition. Compared with common computer-vision-based recognition, it has the following advantages: convenience, low cost, and a better reflection of the essence of human motion. Based on the WISDM dataset collected by smartphones, this paper adopts the inertial information and a deep learning algorithm, the convolutional neural network (CNN), to build a human activity recognition model. The K-nearest neighbor (KNN) algorithm and the random forest algorithm were compared with the CNN in recognition accuracy to evaluate the CNN's performance. The classification accuracy of the CNN model reached 92.73%, much higher than that of KNN and random forest. Experimental results show that the CNN model can achieve more accurate human activity recognition and has broad application prospects in predicting and promoting human health.
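Before a 1D CNN can consume an accelerometer stream, the signal is typically segmented into fixed-length windows. A minimal sketch of that preprocessing step follows; the window length and overlap are assumptions for illustration (WISDM is sampled at 20 Hz, so a 10-second window with 50% overlap is a common choice, though the paper does not specify its settings).

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Segment a (n_samples, n_axes) accelerometer stream into fixed-length
    windows, the usual input format for a 1D CNN on inertial data."""
    starts = range(0, signal.shape[0] - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])
```

Each window (and its activity label) then becomes one training example for the CNN, KNN, or random forest classifier.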
The segmentation of organs at risk is an important part of radiotherapy. The current practice of manual segmentation depends on the knowledge and experience of physicians; it is very time-consuming, and its accuracy, consistency, and repeatability are difficult to ensure. Therefore, a deep convolutional neural network (DCNN) is proposed for the automatic and accurate segmentation of head and neck organs at risk. The data of 496 patients with nasopharyngeal carcinoma were reviewed; 376 cases were randomly selected as the training set, 60 as the validation set, and 60 as the test set. Using a three-dimensional (3D) U-Net DCNN combined with two loss functions, Dice Loss and Generalized Dice Loss, an automatic segmentation model for the head and neck organs at risk was trained. The evaluation metrics were the Dice similarity coefficient and the Jaccard distance. The average Dice similarity coefficient over the 19 organs at risk was 0.91, and the average Jaccard distance was 0.15. The results demonstrate that a 3D U-Net DCNN combined with the Dice Loss function is well suited to the automatic segmentation of head and neck organs at risk.
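The two loss functions named above have standard definitions, which a short NumPy sketch can make concrete. The Generalized Dice Loss weights each class by the inverse square of its volume, so small organs contribute as much to the loss as large ones; this is a textbook formulation, not code from the paper.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a single foreground class (flattened arrays)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def generalized_dice_loss(pred, target, eps=1e-6):
    """Generalized Dice loss over (n_classes, n_voxels) one-hot maps,
    weighting each class by the inverse square of its volume."""
    w = 1.0 / (np.sum(target, axis=1) ** 2 + eps)
    inter = np.sum(w * np.sum(pred * target, axis=1))
    union = np.sum(w * (np.sum(pred, axis=1) + np.sum(target, axis=1)))
    return 1.0 - 2.0 * inter / (union + eps)
```

Both losses approach 0 for a perfect prediction and 1 for a completely disjoint one, which is why they pair naturally with the Dice similarity coefficient used for evaluation.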
With the advantage of providing a more natural and flexible control manner, brain-computer interface systems based on motor imagery electroencephalogram (EEG) have been widely used in the field of human-machine interaction. However, due to the low signal-to-noise ratio and poor spatial resolution of EEG signals, decoding accuracy is relatively low. To solve this problem, a novel convolutional neural network based on temporal-spatial feature learning (TSCNN) was proposed for motor imagery EEG decoding. First, for EEG signals preprocessed by band-pass filtering, a temporal-wise convolution layer and a spatial-wise convolution layer were designed to construct the temporal-spatial features of motor imagery EEG. Then, a two-layer two-dimensional convolutional structure was adopted to learn abstract features from the raw temporal-spatial features. Finally, a softmax layer combined with a fully connected layer performed the decoding task on the extracted abstract features. On an open dataset, the proposed method achieved an average decoding accuracy of 80.09%, approximately 13.75% and 10.99% higher than that of the state-of-the-art common spatial pattern (CSP) + support vector machine (SVM) and filter bank CSP (FBCSP) + SVM recognition methods, respectively. This demonstrates that the proposed method can significantly improve the reliability of motor imagery EEG decoding.
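The temporal-then-spatial factorization can be sketched in NumPy: a shared temporal filter slides along each channel's time axis, after which spatial filters spanning all electrodes mix the channels. Both filters would be learned in the real network; the fixed kernels here are illustrative assumptions.

```python
import numpy as np

def temporal_spatial_features(eeg, t_kernel, s_weights):
    """eeg: (n_channels, n_times); t_kernel: (k,) temporal filter shared
    across channels; s_weights: (n_filters, n_channels) spatial filters
    spanning all electrodes."""
    # temporal-wise convolution: filter each channel along the time axis
    temporal = np.stack([np.convolve(ch, t_kernel, mode='valid') for ch in eeg])
    # spatial-wise convolution: weighted combination across channels
    return s_weights @ temporal
```

The resulting (n_filters, n_times') feature map is what the subsequent 2D convolutional layers would abstract further before the fully connected and softmax layers decode the motor imagery class.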
Objective To systematically evaluate the diagnostic value of an artificial intelligence assisted narrow-band imaging endoscopy diagnostic system for colorectal adenomatous polyps. Methods The PubMed, Embase, Web of Science, Cochrane Library, SinoMed, China National Knowledge Infrastructure, Chongqing VIP, and Wanfang databases were comprehensively searched for diagnostic trials of the artificial intelligence assisted narrow-band imaging endoscopy diagnostic system for colorectal adenomatous polyps, covering January 1, 2000 to October 31, 2022. The included studies were evaluated according to the Quality Assessment of Diagnostic Accuracy Studies-2, and the data were meta-analyzed with RevMan 5.3, Meta-Disc 1.4, and Stata 13.0 statistical software. Results Eleven articles involving 2,178 patients were finally included. Meta-analysis of the system's diagnosis of colorectal adenomatous polyps showed a pooled sensitivity of 0.91, a pooled specificity of 0.88, a pooled positive likelihood ratio of 7.41, a pooled negative likelihood ratio of 0.10, a pooled diagnostic odds ratio of 76.45, and an area under the summary receiver operating characteristic curve of 0.957. Among them, 5 articles reported the diagnosis of small adenomatous polyps (diameter <5 mm) by the system; the pooled sensitivity and specificity were 0.93 and 0.91, respectively, and the area under the summary receiver operating characteristic curve was 0.971. Five articles reported the accuracy of endoscopic diagnosis of adenomatous polyps by endoscopists with insufficient experience; the pooled sensitivity and specificity were 0.84 and 0.76, respectively, and the area under the summary receiver operating characteristic curve was 0.848.
Compared with the artificial intelligence assisted narrow-band imaging endoscopy diagnostic system, the difference was statistically significant (Z=1.979, P=0.048). Conclusion The artificial intelligence assisted narrow-band imaging endoscopy diagnostic system has high diagnostic accuracy for colorectal adenomatous polyps; it can significantly improve the diagnostic accuracy of endoscopists with insufficient experience and effectively compensate for their lack of endoscopic experience.
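The reported measures are related by standard formulas, shown below. Note that plugging the pooled sensitivity (0.91) and specificity (0.88) into these formulas gives LR+ ≈ 7.58, LR− ≈ 0.10, and DOR ≈ 74, close to but not identical to the reported pooled values (7.41, 0.10, 76.45), because in a meta-analysis each measure is pooled separately across studies rather than derived from the pooled sensitivity and specificity.

```python
def lr_positive(sens, spec):
    # positive likelihood ratio: P(test+ | disease) / P(test+ | no disease)
    return sens / (1.0 - spec)

def lr_negative(sens, spec):
    # negative likelihood ratio: P(test- | disease) / P(test- | no disease)
    return (1.0 - sens) / spec

def diagnostic_odds_ratio(sens, spec):
    # DOR = LR+ / LR-
    return lr_positive(sens, spec) / lr_negative(sens, spec)
```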
The convolutional neural network (CNN) can be used for computer-aided diagnosis of lung tumors with positron emission tomography (PET)/computed tomography (CT): it provides accurate quantitative analysis to compensate for visual inertia and limitations in gray-scale sensitivity, and helps doctors diagnose accurately. First, a parameter-migration method is used to build three CNNs (CT-CNN, PET-CNN, and PET/CT-CNN) for lung tumor recognition in CT, PET, and PET/CT images, respectively. Then, taking CT-CNN as the example, the influence of model parameters such as the number of epochs, batch size, and image scale on recognition rate and training time is analyzed to obtain appropriate parameters for CNN training. Finally, the three single CNNs are used to construct an ensemble CNN, lung tumor PET/CT recognition is performed by the relative majority vote method, and the performance of the ensemble CNN is compared with that of the single CNNs. The experimental results show that the ensemble CNN outperforms the single CNNs in the computer-aided diagnosis of lung tumors.
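The relative majority vote used to combine the three single CNNs can be sketched in a few lines of NumPy: each model casts one label per sample, and the most frequent label wins. The tie-breaking rule (smaller label index wins) is an artifact of this sketch, not something specified in the paper.

```python
import numpy as np

def relative_majority_vote(preds):
    """preds: (n_models, n_samples) integer class labels from the single
    CNNs. Returns, per sample, the most frequent label (ties resolved in
    favor of the smaller label index by np.bincount/argmax)."""
    preds = np.asarray(preds)
    return np.array([np.bincount(col).argmax() for col in preds.T])
```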
Coronavirus disease 2019 (COVID-19) has spread rapidly around the world. To diagnose COVID-19 more quickly, this paper proposes a depthwise separable DenseNet. A deep learning model was built with 2,905 chest X-ray images as the experimental dataset. To enhance contrast, the contrast limited adaptive histogram equalization (CLAHE) algorithm was used to preprocess the X-ray images before network training; the images were then fed into the training network and the network parameters were tuned to the optimum. Leaky ReLU was selected as the activation function. The VGG16, ResNet18, ResNet34, DenseNet121, and SDenseNet models were compared with the proposed model. Compared with ResNet34, the proposed pneumonia classification model improved accuracy, sensitivity, and specificity by 2.0%, 2.3%, and 1.5%, respectively. Compared with the SDenseNet network without depthwise separable convolution, the number of parameters of the proposed model was reduced by 43.9% without a decrease in classification performance. These results indicate that the proposed DWSDenseNet has a good classification effect on the COVID-19 chest X-ray dataset. While preserving accuracy as much as possible, depthwise separable convolution can effectively reduce the number of model parameters.
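The parameter saving from depthwise separable convolution follows directly from counting weights: a standard k×k convolution couples every input channel to every output channel, while the separable version splits this into a per-channel depthwise k×k filter plus a 1×1 pointwise mixing layer. The 128-channel, 3×3 example below is illustrative (biases omitted); the paper's overall 43.9% reduction depends on the full architecture, not on any single layer.

```python
def standard_conv_params(c_in, c_out, k):
    # standard k x k convolution: every input-output channel pair has a kernel
    return c_in * c_out * k * k

def dws_conv_params(c_in, c_out, k):
    # depthwise k x k filter per input channel + 1 x 1 pointwise mixing
    return c_in * k * k + c_in * c_out
```

For 128 input and 128 output channels with 3×3 kernels, this gives 147,456 versus 17,536 weights, roughly an 88% reduction for that layer.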
When applying deep learning to the automatic segmentation of organs at risk in medical images, we combined two network models, DenseNet and V-Net, to develop a Dense V-Network for the automatic segmentation of three-dimensional computed tomography (CT) images, in order to alleviate the degradation and vanishing-gradient problems that arise when optimizing three-dimensional convolutional neural networks with insufficient training samples. The algorithm was applied to the delineation of pelvic organs at risk, and three representative parameters were used to quantitatively evaluate the segmentation results. The clinical results showed that the Dice similarity coefficients of the bladder, small intestine, rectum, femoral head, and spinal cord were all above 0.87 (average 0.90); the Jaccard distances were all within 2.3 (average 0.18). Except for the small intestine, the Hausdorff distances of the other organs were less than 0.9 cm (average 0.62 cm). The Dense V-Network was thus shown to achieve accurate segmentation of pelvic organs at risk.
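Two of the evaluation parameters mentioned above have simple set-overlap definitions on binary masks, sketched below with NumPy (the Hausdorff distance, the third metric, additionally needs voxel coordinates and spacing, so it is omitted here).

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard_distance(a, b):
    """Jaccard distance = 1 - intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1.0 - inter / union
```

Both range over [0, 1]: a Dice coefficient of 1 and a Jaccard distance of 0 indicate perfect agreement between the automatic and reference delineations.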
Glaucoma is the leading cause of irreversible blindness, but its early symptoms are not obvious and are easily overlooked, so early screening for glaucoma is particularly important. The cup-to-disc ratio is an important indicator in clinical glaucoma screening, and accurate segmentation of the optic cup and disc is the key to calculating it. In this paper, a fully convolutional neural network with a residual multi-scale convolution module was proposed for optic cup and disc segmentation. First, the fundus image was contrast-enhanced and a polar transformation was applied. Subsequently, W-Net was used as the backbone network, with the standard convolution unit replaced by the residual multi-scale fully convolutional module; an image pyramid was fed to the input to construct multi-scale inputs, and side-output layers were used as early classifiers to generate local prediction outputs. Finally, a new multi-label loss function was proposed to guide the network segmentation. On the REFUGE dataset, the mean intersection over union of the optic cup and disc segmentation was 0.9040 and 0.9553, respectively, and the overlap error was 0.1780 and 0.0665, respectively. The results show that this method not only realizes joint segmentation of the cup and disc but also effectively improves segmentation accuracy, which could help promote large-scale early glaucoma screening.
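The polar transformation mentioned in the preprocessing step resamples the fundus image onto a (radius, angle) grid, which turns the roughly circular cup and disc boundaries into roughly horizontal bands. A minimal nearest-neighbor sketch follows; it assumes the crop is square and centered on the optic disc, and real pipelines would typically use interpolated resampling instead.

```python
import numpy as np

def polar_transform(img, n_r, n_theta):
    """Nearest-neighbor resampling of a square image onto a (radius, angle)
    grid centered on the image center (assumed optic disc center)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(0.0, min(cy, cx), n_r)
    angles = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    r, t = np.meshgrid(radii, angles, indexing='ij')
    ys = np.clip(np.round(cy + r * np.sin(t)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + r * np.cos(t)).astype(int), 0, w - 1)
    return img[ys, xs]
```

Segmentation is performed in the polar domain and the predicted masks are mapped back by the inverse transform.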
Sleep apnea (SA) detection methods based on traditional machine learning require considerable effort in feature engineering and classifier design. We constructed a one-dimensional convolutional neural network (CNN) model consisting of four convolution layers, four pooling layers, two fully connected layers, and one classification layer, a structure that realizes automatic feature extraction and classification. The model was verified on the whole-night single-channel sleep electrocardiogram (ECG) signals of 70 subjects from the Apnea-ECG dataset. The accuracy of per-segment SA detection ranged from 80.1% to 88.0% when the input was the single-channel ECG signal, the RR interval (RRI) sequence, the R-peak sequence, or the RRI sequence + R-peak sequence, respectively. These results indicate that the proposed CNN model is effective and can automatically extract and classify features from the original single-channel ECG signal or its derived RRI and R-peak sequences. With the RRI sequence + R-peak sequence as input, the CNN model achieved its best performance: the accuracy, sensitivity, and specificity of per-segment SA detection were 88.0%, 85.1%, and 89.9%, respectively, and the accuracy of per-recording SA diagnosis was 100%. These findings indicate that the proposed method can effectively improve the accuracy and robustness of SA detection and outperforms methods reported in recent years. The proposed CNN model can be applied to portable SA screening and diagnosis equipment with a remote server.
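The two derived input sequences, RRI and R-peak, follow directly from the positions of detected R peaks in the ECG; a minimal NumPy sketch is shown below. The R-peak detection step itself is assumed to be done upstream (e.g., by a QRS detector), and the function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def rri_and_rpeak_sequences(ecg, peak_indices, fs):
    """Derive the two CNN input sequences from a single-channel ECG.
    peak_indices: sample positions of detected R peaks (detection assumed
    to be done upstream). fs: sampling rate in Hz."""
    rri = np.diff(peak_indices) / fs   # RR intervals in seconds
    r_amp = ecg[peak_indices]          # R-peak amplitude sequence
    return rri, r_amp
```

Concatenating the two sequences per segment yields the "RRI sequence + R-peak sequence" input that gave the best reported performance.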