Speech Emotion Recognition Using Cascaded Attention Network with Joint Loss for Discrimination of Confusions
Authors: Yang Liu, Haoqin Sun, Wenbo Guan, Yuqi Xia, Zhen Zhao. Machine Intelligence Research (EI, CSCD), 2023, Issue 4, pp. 595-604 (10 pages).
Due to the complexity of emotional expression, recognizing emotions from speech is a critical and challenging task, and in most studies certain emotions are easily misclassified. In this paper, we propose a new framework that integrates a cascaded attention mechanism and a joint loss for speech emotion recognition (SER), aiming to resolve feature confusions among emotions that are difficult to classify correctly. First, we extract mel frequency cepstral coefficients (MFCCs), together with their deltas and delta-deltas, to form 3-dimensional (3D) features, effectively reducing the interference of external factors. Second, we employ spatiotemporal attention to selectively discover target emotion regions in the input features, where self-attention with head fusion captures long-range dependencies among temporal features. Finally, a joint loss function is employed to distinguish emotional embeddings with high similarity, enhancing overall performance. Experiments on the interactive emotional dyadic motion capture (IEMOCAP) database show that the method improves weighted accuracy (WA) by 2.49% and unweighted accuracy (UA) by 1.13% over state-of-the-art strategies.
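The abstract does not give implementation details for the 3D feature construction. As a minimal sketch, the standard regression-based (HTK-style) delta computation can be applied to an MFCC matrix and stacked with the delta-deltas along a new channel axis; the 40-coefficient, 100-frame MFCC matrix below is a hypothetical placeholder, not data from the paper.

```python
import numpy as np

def delta(feat, N=2):
    """Regression-based deltas over a (coeffs x frames) matrix,
    using a window of N frames on each side and edge padding."""
    padded = np.pad(feat, ((0, 0), (N, N)), mode="edge")
    denom = 2 * sum(n * n for n in range(1, N + 1))
    out = np.zeros_like(feat, dtype=float)
    for t in range(feat.shape[1]):
        acc = np.zeros(feat.shape[0])
        for n in range(1, N + 1):
            # weighted difference of frames n steps ahead and behind
            acc += n * (padded[:, t + N + n] - padded[:, t + N - n])
        out[:, t] = acc / denom
    return out

# Hypothetical MFCC matrix: 40 coefficients x 100 frames
mfcc = np.random.randn(40, 100)
d1 = delta(mfcc)      # deltas
d2 = delta(d1)        # delta-deltas
feat3d = np.stack([mfcc, d1, d2], axis=0)  # shape (3, 40, 100)
```

Stacking static, delta, and delta-delta planes yields a 3-channel input analogous to an RGB image, which is what allows 2D-convolutional or attention-based front ends to consume it directly.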
Keywords: speech emotion recognition (SER), 3-dimensional (3D) feature, cascaded attention network (CAN), triplet loss, joint loss
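The keywords pair the joint loss with a triplet loss, which suggests a weighted combination of classification cross-entropy and a margin-based triplet term. The exact formulation is not given in this listing; the sketch below assumes squared-Euclidean distances, a margin of 1.0, and a weighting factor `alpha`, all hypothetical.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet term: pull anchor toward the positive
    embedding and push it at least `margin` away from the negative."""
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)
    d_an = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_ap - d_an + margin)

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single example (log-sum-exp stabilized)."""
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def joint_loss(logits, label, anchor, positive, negative, alpha=0.5):
    """Assumed joint objective: classification loss plus a weighted
    triplet term to separate easily confused emotion embeddings."""
    return cross_entropy(logits, label) + alpha * triplet_loss(anchor, positive, negative)
```

The triplet term only contributes when two emotion embeddings are closer than the margin allows, which matches the stated goal of separating embeddings with high similarity.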