West China Medical Publishers

Search results for author "SHEN Yuchen": 2 results
  • Audiovisual emotion recognition based on a multi-head cross attention mechanism

    In audiovisual emotion recognition, representation learning is a research direction that has received considerable attention, and its key challenge lies in constructing effective affective representations that capture both cross-modal consistency and modality-specific variability. Accurately learning such representations remains difficult. For this reason, this paper proposed a cross-modal audiovisual emotion recognition model based on a multi-head cross-attention mechanism. The model achieved feature fusion and modality alignment through a multi-head cross-attention architecture, and adopted a segmented training strategy to cope with the missing-modality problem. In addition, a unimodal auxiliary loss task was designed, and shared parameters were used to preserve the independent information of each modality. The model achieved macro and micro F1 scores of 84.5% and 88.2%, respectively, on the crowd-sourced emotional multimodal actors dataset (CREMA-D). The proposed model can effectively capture intra- and inter-modal feature representations of the audio and video modalities, and unifies the unimodal and multimodal emotion recognition frameworks, providing a new solution for audiovisual emotion recognition. A minimal illustrative sketch of this kind of cross-attention fusion is given after the result list.

  • Medical text classification model integrating medical entity label semantics

    Automatic classification of medical questions, which belongs to the task of intent recognition, is of great significance for improving the quality and efficiency of online medical services. Joint entity recognition and intent recognition models perform better than single-task models. However, most publicly available medical text intent recognition datasets lack entity annotations, and manually annotating these entities requires considerable time and manpower. To solve this problem, this paper proposes a medical text classification model that integrates medical entity label semantics, bidirectional encoder representation from transformers-recurrent convolutional neural network-entity label semantics (BRELS). The model first uses an adaptive fusion mechanism to absorb prior knowledge from medical entity labels, achieving local feature enhancement; then, for global feature extraction, a lightweight recurrent convolutional neural network (LRCNN) is used to suppress parameter growth while preserving the original semantics of the text. Ablation and comparison experiments were conducted on three public medical text intent recognition datasets to validate the performance of the model. The results show that the F1 score reaches 87.34%, 81.71%, and 77.74% on the three datasets, respectively. These results show that the BRELS model can effectively identify and understand medical terminology and thereby recognize users' intentions, which can improve the quality and efficiency of online medical services. A minimal illustrative sketch of one possible form of such label-semantics fusion is given after the result list.

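The first abstract does not spell out how the multi-head cross-attention fusion is wired. The following is a minimal, hypothetical PyTorch sketch of bidirectional cross-attention between audio and video feature sequences; the module names, dimensions, and symmetric two-branch layout are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of multi-head cross-attention fusion between audio and
# video feature sequences (not the authors' implementation).
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        # Audio queries attend to video keys/values, and vice versa.
        self.audio_to_video = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.video_to_audio = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_v = nn.LayerNorm(dim)

    def forward(self, audio, video):
        # audio: (batch, T_a, dim), video: (batch, T_v, dim)
        a_attn, _ = self.audio_to_video(query=audio, key=video, value=video)
        v_attn, _ = self.video_to_audio(query=video, key=audio, value=audio)
        # Residual connections keep each modality's own information,
        # which matters when one modality is missing or noisy.
        audio = self.norm_a(audio + a_attn)
        video = self.norm_v(video + v_attn)
        # Pool over time and concatenate for a shared classifier head.
        return torch.cat([audio.mean(dim=1), video.mean(dim=1)], dim=-1)

# Usage example with random features:
# fused = CrossModalFusion()(torch.randn(4, 50, 256), torch.randn(4, 30, 256))
```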
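
The second abstract likewise does not detail its adaptive fusion mechanism. Below is a minimal, hypothetical PyTorch sketch of one common gated formulation, in which a learned gate weighs token representations against soft-selected entity-label embeddings; the gating form, names, and dimensions are assumptions for illustration only, not the BRELS implementation.

```python
# Hypothetical sketch of gated adaptive fusion of token features with
# entity-label semantic embeddings (not the authors' BRELS implementation).
import torch
import torch.nn as nn

class LabelSemanticFusion(nn.Module):
    def __init__(self, dim=768, num_labels=20):
        super().__init__()
        # Trainable embeddings for the medical entity label set.
        self.label_emb = nn.Embedding(num_labels, dim)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, token_feats, label_logits):
        # token_feats: (batch, seq_len, dim), e.g. BERT encoder output
        # label_logits: (batch, seq_len, num_labels), per-token label scores
        # Soft-select a label embedding for each token as prior knowledge.
        label_feats = torch.softmax(label_logits, dim=-1) @ self.label_emb.weight
        # The gate decides, per dimension, how much label semantics to absorb.
        g = torch.sigmoid(self.gate(torch.cat([token_feats, label_feats], dim=-1)))
        return g * token_feats + (1.0 - g) * label_feats

# Usage example with random features:
# fused = LabelSemanticFusion()(torch.randn(2, 32, 768), torch.randn(2, 32, 20))
```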
