Abstract: Emotion Recognition in Conversations (ERC) is fundamental to creating emotionally intelligent machines. Graph-Based Network (GBN) models have gained popularity for detecting conversational contexts in ERC tasks. However, their limited ability to collect and acquire contextual information hinders their effectiveness. To address this, we propose a Text Augmentation-based computational model for recognizing emotions using transformers (TA-MERT). The proposed model uses the Multimodal EmotionLines Dataset (MELD), which ensures a balanced representation for recognizing human emotions. The model uses text augmentation techniques to produce more training data, improving the proposed model's accuracy. Transformer encoders, especially Bidirectional Encoder (BE) representations that capture both forward and backward contextual information, are used to train the deep neural network (DNN) model. This integration improves the accuracy and robustness of the proposed model. Furthermore, we present a method for balancing the training dataset by creating augmented samples from the original dataset. By balancing the dataset across all emotion categories, we lessen the adverse effects of data imbalance on the accuracy of the proposed model. Experimental results on the MELD dataset show that TA-MERT outperforms earlier methods, achieving a weighted F1 score of 62.60% and an accuracy of 64.36%. Overall, the proposed TA-MERT model addresses the GBN models' weaknesses in obtaining contextual data for ERC. The TA-MERT model recognizes human emotions more accurately by employing text augmentation and transformer-based encoding. The balanced dataset and the additional training samples also enhance its resilience. These findings highlight the significance of transformer-based approaches for emotion recognition in conversations.
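The two ingredients the abstract names, class balancing through text augmentation and bidirectional transformer encoding, can be illustrated with a minimal sketch. This is not the authors' released code: the random word-swap augmentation rule, the up-sampling loop, and the use of the `bert-base-uncased` checkpoint from Hugging Face are illustrative assumptions.

```python
# Minimal sketch: (1) up-sample minority emotion classes in MELD-style data with
# augmented utterance copies, (2) encode utterances with a bidirectional transformer
# so both left and right context inform the representation.
import random
from collections import Counter

import torch
from transformers import AutoModel, AutoTokenizer


def random_swap(text: str, n_swaps: int = 1) -> str:
    """Create an augmented copy of an utterance by swapping two random words (assumed rule)."""
    words = text.split()
    if len(words) < 2:
        return text
    for _ in range(n_swaps):
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return " ".join(words)


def balance_by_augmentation(utterances, labels):
    """Add augmented copies of minority-class utterances until all classes match the largest one."""
    counts = Counter(labels)
    target = max(counts.values())
    aug_utts, aug_labels = list(utterances), list(labels)
    for label, count in counts.items():
        pool = [u for u, l in zip(utterances, labels) if l == label]
        for _ in range(target - count):
            aug_utts.append(random_swap(random.choice(pool)))
            aug_labels.append(label)
    return aug_utts, aug_labels


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

# Hypothetical mini-batch with an imbalanced label distribution.
texts, labels = balance_by_augmentation(
    ["I can't believe this!", "Sounds good to me.", "That was great!"],
    ["anger", "joy", "joy"],
)
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    # [CLS] embedding of each utterance; a classifier head over these vectors
    # would predict one of the MELD emotion labels.
    utterance_repr = encoder(**batch).last_hidden_state[:, 0]
```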
Abstract: Multimodal Emotion Recognition in Conversation (ERC) is key to building emotional dialog systems. In recent years, graph-based fusion methods have improved multimodal ERC performance by dynamically aggregating multimodal contextual features within a conversation. However, these methods do not fully preserve and exploit the valuable information in the input data. Specifically, they neither retain task-relevant information from the input through to the fusion result, nor make use of the information carried by the labels themselves. To address these problems, this paper proposes a multimodal conversational emotion recognition model based on mutual information maximization and contrastive loss (Multimodal ERC with Mutual Information Maximization and Contrastive Loss, MMIC). By hierarchically maximizing the mutual information between modalities at both the input level and the fusion level, the model preserves task-relevant information throughout fusion and produces richer multimodal representations. The paper also introduces supervised contrastive learning into the graph-based dynamic fusion network; by fully exploiting the information contained in the labels, it makes different emotions mutually exclusive and strengthens the model's ability to distinguish similar emotions. Extensive experiments on two English public datasets and one Chinese public dataset demonstrate the effectiveness and superiority of the proposed model. In addition, case studies confirm that the model effectively preserves task-relevant information and better distinguishes similar emotions. Ablation experiments and visualization results verify the effectiveness of each module in the model.
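The supervised contrastive learning component mentioned above follows a well-known formulation (samples sharing an emotion label are pulled together, others pushed apart). The sketch below shows that standard loss over fused utterance embeddings; the temperature value and the assumption that the inputs are the fused multimodal representations are ours, not the paper's exact configuration.

```python
# Minimal sketch of a supervised contrastive loss over fused utterance embeddings.
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """features: (N, d) fused utterance embeddings; labels: (N,) emotion ids."""
    z = F.normalize(features, dim=1)
    sim = z @ z.T / temperature                                  # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))              # exclude self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)   # log softmax per anchor
    # Positives: utterances with the same emotion label, excluding the anchor itself.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count
    return loss.mean()


# Example: 4 fused utterance vectors, two labeled "anger" (0) and two "joy" (1).
loss = supervised_contrastive_loss(torch.randn(4, 128), torch.tensor([0, 1, 0, 1]))
```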
Abstract: Artificial entities, such as virtual agents, have become more pervasive. Their long-term presence among humans requires the virtual agent's ability to express appropriate emotions to elicit the necessary empathy from the users. Affective empathy involves behavioral mimicry, a synchronized co-movement between dyadic pairs. However, the characteristics of such synchrony between humans and virtual agents remain unclear in empathic interactions. Our study evaluates the participant's behavioral synchronization when a virtual agent exhibits an emotional expression congruent with the emotional context through facial expressions, behavioral gestures, and voice. Participants viewed an emotion-eliciting video stimulus (negative or positive) with a virtual agent. The participants then conversed with the virtual agent about the video, such as how the participant felt about the content. During the dialog, the virtual agent expressed either emotions congruent with the video or a neutral emotion. The participants' facial expressions, such as facial expressive intensity and facial muscle movement, were measured during the dialog using a camera. The results showed significant behavioral synchronization by the participants (i.e., cosine similarity ≥ .05) in both the negative and positive emotion conditions, evident in the participants' facial mimicry with the virtual agent. Additionally, the participants' facial expressions, in both movement and intensity, were significantly stronger with the emotional virtual agent than with the neutral virtual agent. In particular, we found that the facial muscle intensity of AU45 (Blink) is an effective index for assessing the participant's synchronization, which differs by the individual's empathic capability (low, mid, high). Based on the results, we suggest an appraisal criterion that provides empirical conditions for validating empathic interaction based on facial expression measures.
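The synchrony index described above reduces to a cosine similarity between the participant's and the agent's facial action-unit (AU) intensity traces. Below is a minimal sketch under assumptions: the per-frame AU45 values are hypothetical (e.g., as exported from a facial-analysis tool), and only the .05 threshold and the focus on AU45 come from the abstract.

```python
# Minimal sketch: cosine similarity between two AU45 (blink) intensity time series
# as a behavioral-synchrony index for a participant-agent dialog.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two equal-length AU intensity time series."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0


# Hypothetical per-frame AU45 intensities for the participant and the virtual agent.
participant_au45 = np.array([0.1, 0.4, 0.9, 0.3, 0.2, 0.7])
agent_au45 = np.array([0.2, 0.5, 0.8, 0.2, 0.1, 0.6])

sync = cosine_similarity(participant_au45, agent_au45)
print(f"AU45 synchrony: {sync:.3f}",
      "synchronized" if sync >= 0.05 else "not synchronized")
```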