West China Medical Publishers
Keyword search "Deep learning": 79 results
  • Oral panorama reconstruction method based on pre-segmentation and Bezier function

    For patients with partial jaw defects, cysts, or dental implants, doctors need to take panoramic X-ray films or manually draw dental arch lines to generate panorama images in order to observe complete dentition information during oral diagnosis. To eliminate the additional burden on patients of taking panoramic X-ray films and the time doctors spend manually drawing dental arch lines, this paper proposes an automatic panorama reconstruction method based on cone beam computed tomography (CBCT). A V-network (VNet) is used to pre-segment the teeth from the background and generate the corresponding binary image, and a Bezier curve is then fitted to define the optimal dental arch curve from which the oral panorama is generated. In addition, the method addresses the problems of mistakenly recognizing the jaws as part of the dental arch, incomplete coverage of the dental arch area by the generated arch lines, and low robustness, providing an intelligent tool for dental diagnosis and improving doctors' work efficiency.

    Release date: 2023-10-20 04:48
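The dental arch line in the item above is modeled with a Bezier curve. As a minimal illustration of the idea (the control points below are hypothetical, not from the paper), a cubic Bezier curve can be sampled to trace an arch along which CBCT slices would be resampled into a panorama:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1] (Bernstein form)."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

def sample_arch(p0, p1, p2, p3, n=100):
    """Sample n points along the arch curve, e.g. to resample CBCT slices along it."""
    return [cubic_bezier(p0, p1, p2, p3, i / (n - 1)) for i in range(n)]

# Hypothetical control points roughly tracing a U-shaped dental arch.
arch = sample_arch((0, 0), (10, 40), (50, 40), (60, 0), n=5)
```

The curve starts at `p0` and ends at `p3`, with the two inner control points pulling it into the arch shape; in practice the control points would be fitted to the pre-segmented binary tooth mask.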
  • A method for emotion transition recognition using cross-modal feature fusion and global perception

    Current studies on electroencephalogram (EEG) emotion recognition primarily concentrate on discrete stimulus paradigms under controlled laboratory settings, which cannot adequately represent the dynamic transition characteristics of emotional states during multi-context interactions. To address this issue, this paper proposes a novel method for emotion transition recognition that leverages a cross-modal feature fusion and global perception network (CFGPN). Firstly, an experimental paradigm encompassing six types of emotion transition scenarios was designed, and EEG and eye movement data were simultaneously collected from 20 participants, each annotated with dynamic continuous emotion labels. Subsequently, deep canonical correlation analysis integrated with a cross-modal attention mechanism was employed to fuse features from EEG and eye movement signals, resulting in multimodal feature vectors enriched with highly discriminative emotional information. These vectors were then fed into a parallel hybrid architecture that combines convolutional neural networks (CNNs) and Transformers: the CNN captures local time-series features, while the Transformer leverages its strong global perception capability to model long-range temporal dependencies, enabling accurate dynamic emotion transition recognition. The results demonstrate that the proposed method achieves the lowest mean square error in both valence and arousal recognition tasks on the dynamic emotion transition dataset and a classic multimodal emotion dataset, and exhibits superior recognition accuracy and stability compared with five existing unimodal and six multimodal deep learning models. The approach enhances both adaptability and robustness in recognizing emotional state transitions in real-world scenarios, showing promising potential for applications in biomedical engineering.

    Release date: 2025-10-21 03:48
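The cross-modal attention step described above can be illustrated with a toy scaled dot-product sketch in which EEG feature vectors attend over eye-movement feature vectors. This is a generic illustration with made-up dimensions, not the paper's actual CFGPN implementation:

```python
import math

def softmax(row):
    """Numerically stable softmax over one list of scores."""
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(eeg, eye, d):
    """Scaled dot-product attention: each EEG vector (query) attends over
    the eye-movement vectors (keys and values), all of dimension d."""
    out = []
    for q in eeg:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in eye]
        weights = softmax(scores)  # one weight per eye-movement vector
        fused = [sum(w * v[j] for w, v in zip(weights, eye)) for j in range(d)]
        out.append(fused)
    return out
```

Each output row is a convex combination of the eye-movement vectors, weighted by their similarity to the EEG query, which is how the fused vectors come to carry information from both modalities.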
  • Advances in the diagnosis of prostate cancer based on image fusion

    Image fusion currently plays an important role in the diagnosis of prostate cancer (PCa). Selecting and developing a good fusion algorithm is the core task: it determines whether the fused image is of sufficient quality to meet the actual needs of clinical application. In recent years, medical image fusion has become a research hotspot. To survey the field comprehensively, this paper reviewed the relevant literature published in China and abroad in recent years. Image fusion technologies were classified, and fusion algorithms were divided into traditional algorithms and deep learning (DL) algorithms. The principles and workflows of representative algorithms were analyzed and compared, their advantages and disadvantages were summarized, and relevant medical image datasets were introduced. Finally, future development trends of medical image fusion algorithms were discussed, and directions for applying medical image fusion to the diagnosis of prostate cancer and other major diseases were pointed out.

    Release date:
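Among the traditional fusion algorithms such a review typically covers, the simplest baseline is pixel-level weighted averaging. A toy sketch on hypothetical grayscale slices (illustrative only, not a clinically validated method):

```python
def weighted_fusion(img_a, img_b, alpha=0.5):
    """Fuse two same-sized grayscale images by per-pixel weighted average:
    alpha weights img_a, (1 - alpha) weights img_b."""
    return [
        [alpha * a + (1 - alpha) * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

mri = [[0.2, 0.8], [0.4, 0.6]]   # hypothetical 2x2 MRI slice
trus = [[0.6, 0.4], [0.2, 1.0]]  # hypothetical 2x2 ultrasound slice
fused = weighted_fusion(mri, trus, alpha=0.5)
```

Transform-domain and DL-based fusion methods replace this fixed per-pixel rule with learned or adaptive weights, which is what the review's comparison is about.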
  • Research progress in electroencephalogram-based brain age prediction

    Brain age prediction, as a significant approach for assessing brain health and early diagnosis of neurodegenerative diseases, has garnered widespread attention in recent years. Electroencephalogram (EEG), a non-invasive, convenient, and cost-effective neurophysiological signal, offers unique advantages for brain age prediction due to its high temporal resolution and strong correlation with brain functional states. Despite substantial progress in enhancing prediction accuracy and generalizability, challenges remain in data quality and model interpretability. This review comprehensively examined the advancements in EEG-based brain age prediction, detailing key aspects of data preprocessing, feature extraction, model construction, and result evaluation. It also summarized the current applications of machine learning and deep learning methods in this field, analyzed existing issues, and explored future directions to promote the widespread application of EEG-based brain age prediction in both clinical and research settings.

    Release date: 2025-08-19 11:47
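A central quantity in this line of work is the brain age gap: predicted brain age minus chronological age. A minimal sketch using an ordinary least-squares fit on a single made-up EEG feature (real pipelines use many features and far richer models):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b with one feature."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def brain_age_gap(feature, age, a, b):
    """Predicted brain age minus chronological age; a positive gap is often
    read as accelerated brain aging."""
    return (a * feature + b) - age

# Hypothetical training data: one EEG-derived feature vs. chronological age.
feats = [1.0, 2.0, 3.0, 4.0]
ages = [20.0, 30.0, 40.0, 50.0]
a, b = fit_linear(feats, ages)
```

The evaluation step the review describes then amounts to comparing predicted against true ages (e.g. mean absolute error) and interpreting systematic gaps.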
  • Study on automatic and rapid diagnosis of distal radius fracture by X-ray

    This article combines deep learning with image analysis technology and proposes an effective classification method for distal radius fracture types. Firstly, an extended U-Net three-layer cascaded segmentation network was used to accurately segment the joint-surface and non-joint-surface regions that are most important for identifying fractures. Then, classifiers were trained on the joint-surface and non-joint-surface images separately to distinguish fractures. Finally, the two classification results were combined to determine the final label: normal, or fracture type A, B, or C. The accuracy rates for normal, type A, type B, and type C fractures on the test set were 0.99, 0.92, 0.91, and 0.82, respectively, versus average recognition accuracy rates of 0.98, 0.90, 0.87, and 0.81 for orthopedic medical experts. The proposed automatic recognition method generally outperforms the experts and can be used for preliminary auxiliary diagnosis of distal radius fractures in scenarios without expert participation.

    Release date: 2024-10-22 02:33
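The final step above combines two per-region predictions into one label. The abstract does not state the exact decision rule, so the sketch below uses a hypothetical "most severe class wins" rule purely for illustration:

```python
# Hypothetical severity ordering for the four possible labels.
SEVERITY = {"normal": 0, "A": 1, "B": 2, "C": 3}

def combine_predictions(joint_pred, non_joint_pred):
    """Combine the joint-surface and non-joint-surface classifier outputs
    into one final label by keeping the more severe of the two."""
    if SEVERITY[joint_pred] >= SEVERITY[non_joint_pred]:
        return joint_pred
    return non_joint_pred
```

Any real combination rule would be tuned against labeled cases; the point of the sketch is only that two region-level decisions must be reconciled into one patient-level label.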
  • Detection of neurofibroma combining radiomics and ensemble learning

    This study proposes an automated neurofibroma detection method for whole-body magnetic resonance imaging (WBMRI) based on radiomics and ensemble learning. A dynamic weighted box fusion mechanism integrating two-dimensional (2D) object detection and three-dimensional (3D) segmentation is developed, where the fusion weights are dynamically adjusted according to the respective performance of the models in different tasks. The 3D segmentation model leverages spatial structural information to effectively compensate for the limited boundary perception capability of 2D methods. In addition, a radiomics-based false positive reduction strategy is introduced to improve the robustness of the detection system. The proposed method is evaluated on 158 clinical WBMRI cases with a total of 1,380 annotated tumor samples, using five-fold cross-validation. Experimental results show that, compared with the best-performing single model, the proposed approach achieves notable improvements in average precision, sensitivity, and overall performance metrics, while reducing the average number of false positives by 17.68. These findings demonstrate that the proposed method achieves high detection accuracy with enhanced false positive suppression and strong generalization potential.

    Release date: 2025-12-22 10:16
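The box-fusion step can be illustrated with a simplified score-weighted averaging of overlapping 2D boxes, in the spirit of weighted box fusion; the paper's dynamic weight adjustment and box clustering are not reproduced here:

```python
def fuse_boxes(boxes, scores):
    """Fuse a cluster of overlapping [x1, y1, x2, y2] boxes by averaging each
    coordinate with confidence-score weights (simplified: fuses all given boxes)."""
    total = sum(scores)
    fused = [
        sum(s * box[i] for box, s in zip(boxes, scores)) / total
        for i in range(4)
    ]
    fused_score = total / len(scores)  # mean confidence of the cluster
    return fused, fused_score
```

Higher-confidence boxes pull the fused coordinates toward themselves, so the 2D detector and the 3D segmentation model each contribute in proportion to the weight they are assigned.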
  • Endometrial cancer lesion region segmentation based on large kernel convolution and combined attention

    Endometrial cancer (EC) is one of the most common gynecological malignancies, with an increasing incidence rate worldwide. Accurate segmentation of lesion areas in computed tomography (CT) images is a critical step in assisting clinical diagnosis. In this study, we propose a novel deep learning-based segmentation model, termed spatial choice and weight union network (SCWU-Net), which incorporates two newly designed modules: the spatial selection module (SSM) and the combination weight module (CWM). The SSM enhances the model’s ability to capture contextual information through deep convolutional blocks, while the CWM, based on joint attention mechanisms, is employed within the skip connections to further boost segmentation performance. By integrating the strengths of both modules into a U-shaped multi-scale architecture, the model achieves precise segmentation of EC lesion regions. Experimental results on a public dataset demonstrate that SCWU-Net achieves a Dice similarity coefficient (DSC) of 82.98%, an intersection over union (IoU) of 78.63%, a precision of 92.36%, and a recall of 84.10%, and its overall performance significantly outperforms other state-of-the-art models. This study enhances the accuracy of lesion segmentation in EC CT images and holds potential clinical value for the auxiliary diagnosis of endometrial cancer.

    Release date: 2025-10-21 03:48
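The DSC and IoU figures reported above follow the standard definitions, which can be sketched for binary masks as:

```python
def dice_and_iou(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|) and IoU = |A∩B| / |A∪B| for binary
    segmentation masks given as flat 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred, target))
    p_sum = sum(pred)
    t_sum = sum(target)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

Dice is always at least as large as IoU for the same prediction, which is why papers such as this one report a higher DSC than IoU on the same masks.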
  • A multi-scale feature capturing and spatial position attention model for colorectal polyp image segmentation

    Colorectal polyps are important early markers of colorectal cancer, and their early detection is crucial for cancer prevention. Although existing polyp segmentation models have achieved certain results, they still face challenges such as diverse polyp morphology, blurred boundaries, and insufficient feature extraction. To address these issues, this study proposes a parallel coordinate fusion network (PCFNet), aiming to improve the accuracy and robustness of polyp segmentation. PCFNet integrates parallel convolutional modules and a coordinate attention mechanism, enabling the preservation of global feature information while precisely capturing detailed features, thereby effectively segmenting polyps with complex boundaries. Experimental results on Kvasir-SEG and CVC-ClinicDB demonstrate the outstanding performance of PCFNet across multiple metrics. Specifically, on the Kvasir-SEG dataset, PCFNet achieved an F1-score of 0.8974 and a mean intersection over union (mIoU) of 0.8358; on the CVC-ClinicDB dataset, it attained an F1-score of 0.9398 and an mIoU of 0.8923. Compared with other methods, PCFNet shows significant improvements across all performance metrics, particularly in multi-scale feature fusion and spatial information capture, demonstrating its innovative design. The proposed method provides a more reliable AI-assisted diagnostic tool for early colorectal cancer screening.

    Release date: 2025-10-21 03:48
  • Application of a parallel branches network based on Transformer for skin melanoma segmentation

    Cutaneous malignant melanoma is a common malignant tumor, and accurate segmentation of the lesion area is extremely important for early diagnosis of the disease. To achieve more effective and accurate segmentation of skin lesions, a parallel network architecture based on Transformer is proposed in this paper. The network is composed of two parallel branches: a newly constructed multiple residual frequency channel attention network (MFC), and a vision Transformer network (ViT). First, in the MFC branch, the multiple residual module and the frequency channel attention (FCA) module are fused to improve the robustness of the network and enhance its capability of extracting detailed image features. Second, in the ViT branch, multi-head self-attention (MSA) in the Transformer is used to preserve the global features of the image. Finally, the feature information extracted from the two branches is combined to realize more effective image segmentation. To verify the proposed algorithm, we conducted experiments on the dermoscopy image dataset published by the International Skin Imaging Collaboration (ISIC) in 2018. The results show that the intersection-over-union (IoU) and Dice coefficients of the proposed algorithm reach 90.15% and 94.82%, respectively, better than the latest skin melanoma segmentation networks. The proposed network can therefore better segment the lesion area and provide dermatologists with more accurate lesion data.

    Release date: 2022-12-28 01:34
  • Advances in heart failure clinical research based on deep learning

    Heart failure is a disease that seriously threatens human health and has become a global public health problem. Diagnostic and prognostic analysis of heart failure based on medical imaging and clinical data can reveal the progression of the disease and reduce patients' risk of death, and therefore has important research value. Traditional analysis methods based on statistics and machine learning suffer from problems such as insufficient model capacity, poor accuracy due to dependence on priors, and poor model adaptability. In recent years, with the development of artificial intelligence technology, deep learning has gradually been applied to clinical data analysis in the field of heart failure, offering a new perspective. This paper reviews the main progress, application methods, and major achievements of deep learning in heart failure diagnosis, mortality prediction, and readmission prediction, summarizes the existing problems, and presents prospects for related research to promote the clinical application of deep learning in heart failure research.

    Release date: 2023-06-25 02:49