Journal Articles
4 articles found
1. Multimodal emotion recognition in the metaverse era: New needs and transformation in mental health work
Authors: Yan Zeng, Jun-Wen Zhang, Jian Yang. World Journal of Clinical Cases (SCIE), 2024, Issue 34, pp. 6674-6678 (5 pages)
This editorial comments on an article recently published by López del Hoyo et al. The metaverse, hailed as "the successor to the mobile Internet", is undoubtedly one of the most fashionable terms in recent years. Although metaverse development is a complex and multifaceted evolutionary process influenced by many factors, it is almost certain that it will significantly impact our lives, including mental health services. Like any other technological advancement, the metaverse era presents a double-edged sword for mental health work, which must clearly understand the needs and transformations of its target audience. In this editorial, our primary focus is to contemplate potential new needs and transformations in mental health work during the metaverse era from the perspective of multimodal emotion recognition.
Keywords: multimodal emotion recognition; metaverse; needs; transformation; mental health
2. LMR-CBT: learning modality-fused representations with CB-Transformer for multimodal emotion recognition from unaligned multimodal sequences (cited by 1)
Authors: Ziwang FU, Feng LIU, Qing XU, Xiangling FU, Jiayin QI. Frontiers of Computer Science (SCIE, EI, CSCD), 2024, Issue 4, pp. 39-47 (9 pages)
Learning modality-fused representations and processing unaligned multimodal sequences are meaningful and challenging in multimodal emotion recognition. Existing approaches use directional pairwise attention or a message hub to fuse language, visual, and audio modalities. However, these fusion methods are often quadratic in complexity with respect to the modal sequence length, bring in redundant information, and are not efficient. In this paper, we propose an efficient neural network to learn modality-fused representations with CB-Transformer (LMR-CBT) for multimodal emotion recognition from unaligned multimodal sequences. Specifically, we first perform feature extraction for the three modalities respectively to obtain the local structure of the sequences. Then, we design an innovative asymmetric transformer with cross-modal blocks (CB-Transformer) that enables complementary learning of different modalities, mainly divided into local temporal learning, cross-modal feature fusion, and global self-attention representations. In addition, we splice the fused features with the original features to classify the emotions of the sequences. Finally, we conduct word-aligned and unaligned experiments on three challenging datasets: IEMOCAP, CMU-MOSI, and CMU-MOSEI. The experimental results show the superiority and efficiency of our proposed method in both settings. Compared with the mainstream methods, our approach reaches the state of the art with a minimum number of parameters.
Keywords: modality-fused representations; cross-modal blocks; multimodal emotion recognition; unaligned multimodal sequences; computational affection
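As a rough illustration of the fusion pattern this abstract describes (per-modality feature extraction for local temporal structure, cross-modal blocks for complementary learning, global self-attention, and splicing fused features with original features for classification), a minimal PyTorch-style sketch follows. All layer sizes, module names, and pooling choices are assumptions made for illustration only; this is not the authors' LMR-CBT implementation.

```python
# Minimal illustrative sketch (not the authors' code): per-modality 1D convolutions
# capture local temporal structure, cross-modal attention blocks let audio and video
# attend to text, a self-attention layer builds global representations, and the fused
# features are spliced (concatenated) with the original ones before classification.
import torch
import torch.nn as nn


class CrossModalBlock(nn.Module):
    """A target modality attends to a source modality (complementary learning)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(query=target, key=source, value=source)
        return self.norm(target + fused)  # residual connection + layer norm


class ToyFusionNet(nn.Module):
    # Feature dimensions are placeholders; real text/audio/video features vary by dataset.
    def __init__(self, d_text=300, d_audio=74, d_video=35, dim=64, n_classes=4):
        super().__init__()
        self.proj = nn.ModuleDict({
            "text": nn.Conv1d(d_text, dim, kernel_size=3, padding=1),
            "audio": nn.Conv1d(d_audio, dim, kernel_size=3, padding=1),
            "video": nn.Conv1d(d_video, dim, kernel_size=3, padding=1),
        })
        self.cross_at = CrossModalBlock(dim)  # audio reinforced by text
        self.cross_vt = CrossModalBlock(dim)  # video reinforced by text
        self.global_attn = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.head = nn.Linear(dim * 4, n_classes)

    def forward(self, text, audio, video):
        # Inputs: (batch, seq_len_m, feat_m); sequence lengths may differ (unaligned setting).
        t = self.proj["text"](text.transpose(1, 2)).transpose(1, 2)
        a = self.proj["audio"](audio.transpose(1, 2)).transpose(1, 2)
        v = self.proj["video"](video.transpose(1, 2)).transpose(1, 2)
        a_f = self.cross_at(a, t)  # cross-modal feature fusion
        v_f = self.cross_vt(v, t)
        g = self.global_attn(torch.cat([a_f, v_f], dim=1))  # global self-attention
        # Splice pooled fused features with pooled original per-modality features.
        rep = torch.cat([g.mean(1), t.mean(1), a.mean(1), v.mean(1)], dim=-1)
        return self.head(rep)


logits = ToyFusionNet()(torch.randn(2, 50, 300),   # text
                        torch.randn(2, 400, 74),   # audio
                        torch.randn(2, 60, 35))    # video
```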
3. Multimode Biometric Recognition System Praised by MOE
Tsinghua Science and Technology (SCIE, EI, CAS), 2005, Issue 5, p. 592 (1 page)
A Tsinghua-developed biometric recognition system, designed to bolster traditional public security identification measures, was highly commended in an appraisal by the Ministry of Education on June 22, 2005.
Keywords: multimode biometric recognition system; MOE; ID
4. TACFN: Transformer-Based Adaptive Cross-Modal Fusion Network for Multimodal Emotion Recognition
Authors: Feng Liu, Ziwang Fu, Yunlong Wang, Qijian Zheng. CAAI Artificial Intelligence Research, 2023, Issue 1, pp. 75-82 (8 pages)
The fusion technique is the key to the multimodal emotion recognition task. Recently, cross-modal attention-based fusion methods have demonstrated high performance and strong robustness. However, cross-modal attention suffers from redundant features and does not capture complementary features well. We find that it is not necessary to use the entire information of one modality to reinforce the other during cross-modal interaction, and the features that can reinforce a modality may contain only a part of it. To this end, we design an innovative Transformer-based Adaptive Cross-modal Fusion Network (TACFN). Specifically, for the redundant features, we make one modality perform intra-modal feature selection through a self-attention mechanism, so that the selected features can adaptively and efficiently interact with another modality. To better capture the complementary information between the modalities, we obtain the fused weight vector by splicing and use the weight vector to achieve feature reinforcement of the modalities. We apply TACFN to the RAVDESS and IEMOCAP datasets. For a fair comparison, we use the same unimodal representations to validate the effectiveness of the proposed fusion method. The experimental results show that TACFN brings a significant performance improvement compared to other methods and reaches state-of-the-art performance. All code and models can be accessed at https://github.com/shuzihuaiyu/TACFN.
Keywords: multimodal emotion recognition; multimodal fusion; adaptive cross-modal blocks; Transformer; computational perception
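To make the fusion idea in this abstract concrete (intra-modal feature selection via self-attention, cross-modal interaction with the selected features, and a spliced weight vector that reinforces the modalities), here is a minimal hedged sketch in PyTorch. The dimensions, layer names, pooling, and the sigmoid gating choice are assumptions for illustration, not the released TACFN code; see the GitHub link above for the authors' implementation.

```python
# Hedged, minimal sketch of the fusion pattern summarized in the abstract above:
# one modality first selects its own salient features with self-attention, the
# selected features then interact with the other modality, and a weight vector
# obtained by splicing (concatenating) the two streams reinforces the features.
import torch
import torch.nn as nn


class AdaptiveCrossModalFusion(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4, n_classes: int = 8):
        super().__init__()
        self.select = nn.MultiheadAttention(dim, heads, batch_first=True)  # intra-modal selection
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)   # cross-modal interaction
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2 * dim), nn.Sigmoid())  # fused weight vector
        self.head = nn.Linear(2 * dim, n_classes)

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        # audio, video: (batch, seq_len, dim) unimodal representations
        a_sel, _ = self.select(audio, audio, audio)   # audio selects its own salient features
        v_rf, _ = self.cross(video, a_sel, a_sel)     # selected audio features reinforce video
        pooled = torch.cat([a_sel.mean(1), v_rf.mean(1)], dim=-1)  # splice pooled streams
        weights = self.gate(pooled)                   # adaptive weight vector
        return self.head(weights * pooled)            # reinforced features -> emotion logits


logits = AdaptiveCrossModalFusion()(torch.randn(2, 120, 64), torch.randn(2, 40, 64))
```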