Abstract: Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they face a formidable obstacle in accurately discerning a speaker's emotional state. Analyzing speakers' emotional states is important in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) because they capture the periodic nature of audio signals effectively. Although such features help convey emotional cues, MFCCs alone have limitations, so this study tackles that issue by systematically selecting multiple audio cues, enhancing the classifier's ability to discern human emotions accurately. The input speech, taken from the EMO-DB database, is preprocessed with a 2D Convolutional Neural Network (CNN) that applies convolutional operations to spectrograms, which provide a visual representation of how the frequency content of the audio signal changes over time. The spectrogram data are then normalized, a step that is crucial for Neural Network (NN) training because it speeds up convergence. Next, five auditory features (MFCCs, Chroma, Mel-Spectrogram, Contrast, and Tonnetz) are extracted from the spectrogram sequentially. The aim of feature selection is to retain only the dominant features and exclude irrelevant ones; here, Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) are employed to select among the multiple audio cues. Finally, the feature sets composed by the hybrid feature extraction methods are fed into a deep Bidirectional Long Short-Term Memory (Bi-LSTM) network to discern emotions. Because a deep Bi-LSTM can hierarchically learn complex features and increase model capacity through more robust temporal modeling, it is more effective than a shallow Bi-LSTM at capturing the intricate tones of emotional content in speech signals. The effectiveness and resilience of the proposed SER model were evaluated experimentally against state-of-the-art SER techniques. The model achieved accuracy rates of 90.92%, 93%, and 92% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Berlin Database of Emotional Speech (EMO-DB), and the Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings represent a prominent improvement in identifying emotional expressions in speech and showcase the potential of the proposed model to advance the SER field.
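To make the described pipeline concrete, the sketch below extracts the five named auditory features with librosa and feeds the frame-level feature sequence into a two-layer (deep) bidirectional LSTM. It is a minimal illustration only: the file path, the 40-MFCC setting, the hidden size, and the use of the last frame's output are assumptions, it omits the SFS/SBS selection step, and it is not the authors' implementation.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

def extract_features(path, sr=16000):
    """Frame-level MFCC, Chroma, Mel-spectrogram, Contrast, and Tonnetz features."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
    tonnetz = librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr)
    # trim all feature matrices to the same number of frames and stack them
    t = min(f.shape[1] for f in (mfcc, chroma, mel, contrast, tonnetz))
    feats = np.vstack([f[:, :t] for f in (mfcc, chroma, mel, contrast, tonnetz)])
    return feats.T.astype(np.float32)            # (frames, 40+12+128+7+6 = 193)

class DeepBiLSTM(nn.Module):
    """Two stacked bidirectional LSTM layers followed by a linear emotion classifier."""
    def __init__(self, input_dim=193, hidden=128, num_layers=2, num_classes=7):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden, num_layers=num_layers,
                            bidirectional=True, batch_first=True, dropout=0.3)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                        # x: (batch, frames, 193)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])            # emotion logits from the last frame

# usage with a hypothetical audio file:
# x = torch.from_numpy(extract_features("speech.wav")).unsqueeze(0)
# logits = DeepBiLSTM()(x)
```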
Funding: Supported by a Grant-in-Aid for Young Scientists (A) (Grant No. 26700021), Japan Society for the Promotion of Science, and the Strategic Information and Communications R&D Promotion Programme (Grant No. 142103011), Ministry of Internal Affairs and Communications.
Abstract: Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation, and sign language. With the development of motion sensors, multiple data sources have become available, leading to the rise of multi-modal gesture recognition. Because our previous approach to gesture recognition relied on a unimodal system, it had difficulty classifying similar motion patterns. To solve this problem, a novel approach that integrates motion, audio, and video models is proposed using a dataset captured with Kinect. The proposed system recognizes observed gestures with three models; their recognition results are integrated by the proposed framework, and the combined output becomes the final result. The motion and audio models are learned with Hidden Markov Models, and a Random Forest classifier is used to learn the video model. In experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying the feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All experiments are conducted on the dataset provided by the organizer of the Multi-Modal Gesture Recognition Challenge (MMGRC) workshop. The comparison shows that the multi-modal model composed of the three models achieves the highest recognition rate; this improvement indicates that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides an application technology for understanding human actions in daily life more precisely.
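The integration of the three classifiers can be pictured as a simple late-fusion rule over per-class scores. The sketch below assumes a weighted sum of class-probability vectors with illustrative weights; the paper's actual integration framework may differ.

```python
import numpy as np

def fuse_scores(motion_probs, audio_probs, video_probs, weights=(0.4, 0.3, 0.3)):
    """Late fusion: weighted average of per-gesture-class probabilities from the
    motion HMM, audio HMM, and video Random Forest (weights are illustrative)."""
    stacked = np.stack([motion_probs, audio_probs, video_probs])  # (3, n_classes)
    fused = np.average(stacked, axis=0, weights=weights)
    return int(np.argmax(fused)), fused

# toy example with 3 gesture classes: the modalities disagree, fusion decides
motion = np.array([0.6, 0.3, 0.1])
audio  = np.array([0.2, 0.5, 0.3])
video  = np.array([0.3, 0.4, 0.3])
label, fused = fuse_scores(motion, audio, video)
print(label, fused)   # index of the most likely gesture and the fused distribution
```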
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) Key Program (61573094) and the Fundamental Research Funds for the Central Universities (N140402001).
Abstract: Understanding people's emotions through natural language is a challenging task for intelligent systems based on the Internet of Things (IoT). The major difficulty is the lack of basic knowledge about emotion expressions across the variety of real-world contexts. In this paper, we propose a Bayesian inference method that explores latent semantic dimensions as contextual information in natural language and learns knowledge of emotion expressions based on these semantic dimensions. Our method synchronously infers the latent semantic dimensions as topics over words and predicts emotion labels for both word-level and document-level texts. The Bayesian inference results allow us to visualize the connection between words and emotions with respect to different semantic dimensions. By further incorporating a corpus-level hierarchy into the document emotion distribution assumption, we can balance the document emotion recognition results and achieve even better word and document emotion predictions. Our experiments on word-level and document-level emotion prediction, based on the well-developed Chinese emotion corpus Ren-CECps, yield both higher accuracy and better robustness than state-of-the-art emotion prediction algorithms.
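To illustrate how emotions can be read off once latent semantic dimensions are inferred, the toy sketch below marginalizes per-topic emotion profiles into word-level and document-level emotion distributions. The variable names (phi, theta) and the simple mixture are assumptions for illustration only, not the paper's actual Bayesian inference procedure.

```python
import numpy as np

# Hypothetical shapes: K latent semantic dimensions (topics), E emotion labels.
# phi[k, e]       : P(emotion e | topic k), learned from the annotated corpus
# theta[d, k]     : P(topic k | document d), inferred per document
# word_topic[w, k]: P(topic k | word w), inferred per word

def word_emotion(word_topic_row, phi):
    """Word-level emotion distribution: marginalize over the latent topics."""
    return word_topic_row @ phi                 # shape (E,)

def document_emotion(theta_row, phi):
    """Document-level emotion distribution: mix topic emotion profiles by theta."""
    dist = theta_row @ phi                      # shape (E,)
    return dist / dist.sum()

# toy example with K=2 topics and E=3 emotions
phi = np.array([[0.7, 0.2, 0.1],
                [0.1, 0.3, 0.6]])
theta_doc = np.array([0.25, 0.75])
print(document_emotion(theta_doc, phi))         # dominated by the third emotion
```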
Abstract: Current electroencephalogram (EEG) emotion recognition models ignore the differences in emotional states across time segments and fail to reinforce the key emotional information. To address this problem, a Convolutional Recurrent neural network optimized with Multiple Context Vectors (CR-MCV) is proposed. First, a sequence of feature matrices is constructed from the EEG signals, and a Convolutional Neural Network (CNN) learns the spatial features of the multi-channel EEG. Then, a recurrent neural network based on multi-head attention generates multiple context vectors for high-level abstract feature extraction. Finally, a fully connected layer performs emotion classification. Experiments on the DEAP (Database for Emotion Analysis using Physiological signals) dataset show that CR-MCV achieves classification accuracies of 88.09% and 89.30% on the arousal and valence dimensions, respectively. The results demonstrate that, by exploiting the spatial location information of the electrodes and the salient characteristics of emotional states in different time segments, CR-MCV can adaptively allocate attention over features and reinforce the salient emotional-state information.
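A rough PyTorch sketch of the described architecture (per-segment CNN, recurrent layer, multi-head attention producing context vectors, fully connected classifier) is given below. The layer sizes, the single learned query, and the input shape are assumptions for illustration, not the published CR-MCV configuration.

```python
import torch
import torch.nn as nn

class CRMCVSketch(nn.Module):
    """CNN + multi-head-attention RNN sketch for segment-wise EEG emotion classification."""
    def __init__(self, rnn_hidden=64, n_heads=4, n_classes=2):
        super().__init__()
        # per-segment spatial feature extractor over the electrode x feature matrix
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.proj = nn.Linear(16 * 4 * 4, rnn_hidden)
        self.rnn = nn.GRU(rnn_hidden, rnn_hidden, batch_first=True)
        # multi-head attention over the RNN outputs yields the context vectors
        self.attn = nn.MultiheadAttention(rnn_hidden, n_heads, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, rnn_hidden))
        self.fc = nn.Linear(rnn_hidden, n_classes)   # e.g. high/low on one dimension

    def forward(self, x):
        # x: (batch, segments, 1, electrodes, features_per_electrode)
        b, s = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))            # (b*s, 16, 4, 4)
        feats = self.proj(feats.flatten(1))          # (b*s, rnn_hidden)
        seq, _ = self.rnn(feats.view(b, s, -1))      # (b, s, rnn_hidden)
        q = self.query.expand(b, -1, -1)             # (b, 1, rnn_hidden)
        ctx, _ = self.attn(q, seq, seq)              # attention-pooled context vectors
        return self.fc(ctx.squeeze(1))               # (b, n_classes) emotion logits

# usage with random data: 8 trials, 10 segments, 32 electrodes, 9 features each
logits = CRMCVSketch()(torch.randn(8, 10, 1, 32, 9))
```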
Abstract: With the rise of Internet-based social media, Emoji, which express emotions quickly and accurately in graphical form, have become pictographic text widely used in everyday communication. Prior work has shown that incorporating Emoji information into text-based emotion recognition models plays an important role in improving performance. However, most existing models that consider Emoji information learn Emoji representations with word embedding models, so the resulting Emoji vectors lack a direct association with the target emotions and carry little emotion-recognition information. To address this problem, this paper uses soft labels to construct emotion distribution vectors for Emoji that are directly associated with the target emotions, combines this Emoji emotion distribution information with the textual semantic information from a pre-trained model, and proposes Emoji Emotion Distribution Information Fusion for Multi-label Emotion Recognition (EIFER). On top of the classic binary cross-entropy loss, EIFER introduces a label-correlation-aware loss to model the correlations among emotion labels and thus improve multi-label emotion recognition performance. The EIFER model consists of a semantic information module, an Emoji information module, and a multi-loss prediction module, and is trained end to end. Comparative emotion prediction experiments on the SemEval2018 English dataset show that the proposed EIFER method outperforms existing emotion recognition methods.
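The loss design can be sketched as a weighted combination of binary cross-entropy and a pairwise label-correlation-aware term that penalizes negative labels scored above positive ones. The exact form of the correlation term and the mixing weight alpha below are assumptions for illustration, not necessarily the formulation used in EIFER.

```python
import torch
import torch.nn.functional as F

def label_correlation_aware_loss(logits, targets):
    """Per sample, penalize every (negative, positive) label pair where the
    negative label's score exceeds the positive label's score."""
    probs = torch.sigmoid(logits)
    losses = []
    for p, y in zip(probs, targets.bool()):
        pos, neg = p[y], p[~y]
        if pos.numel() == 0 or neg.numel() == 0:
            losses.append(p.new_zeros(()))       # skip all-positive / all-negative samples
            continue
        losses.append(torch.exp(neg.unsqueeze(1) - pos.unsqueeze(0)).mean())
    return torch.stack(losses).mean()

def eifer_style_loss(logits, targets, alpha=0.2):
    """Weighted sum of binary cross-entropy and the correlation-aware term."""
    bce = F.binary_cross_entropy_with_logits(logits, targets.float())
    return (1 - alpha) * bce + alpha * label_correlation_aware_loss(logits, targets)

# usage: 4 samples, 11 emotion labels (as in SemEval-2018 Task 1 E-c)
logits = torch.randn(4, 11)
targets = torch.randint(0, 2, (4, 11))
print(eifer_style_loss(logits, targets))
```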