Funding: the National Natural Science Foundation of China (62071330); the National Science Fund for Distinguished Young Scholars (61425017); the Key Program of the National Natural Science Foundation (61831022); the Key Program of the Natural Science Foundation of Tianjin (18JCZDJC36300); the Open Projects Program of the National Laboratory of Pattern Recognition and the Senior Visiting Scholar Program of Tianjin Normal University; and the Innovative Medicines Initiative 2 Joint Undertaking (115902), which receives support from the European Union's Horizon 2020 research and innovation program and EFPIA.
Abstract Background A crucial element of human-machine interaction, the automatic detection of emotional states from human speech has long been regarded as a challenging task for machine learning models. One vital challenge in speech emotion recognition (SER) is learning robust and discriminative representations from speech. Although machine learning methods have been widely applied in SER research, the inadequate amount of available annotated data has become a bottleneck that impedes the wider application of such techniques (e.g., deep neural networks). To address this issue, we present a deep learning method that combines knowledge transfer and self-attention for SER tasks. Herein, we apply the log-Mel spectrogram with deltas and delta-deltas as the input. Moreover, given that emotions are time dependent, we apply temporal convolutional neural networks to model the variations in emotions. We further introduce an attention transfer mechanism, based on a self-attention algorithm, to learn long-term dependencies. The self-attention transfer network (SATN) in our proposed approach takes advantage of attention transfer to learn attention from speech recognition and then transfers this knowledge into SER. An evaluation built on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset demonstrates the effectiveness of the proposed model.
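For illustration, a minimal sketch of the three-channel input described above (a log-Mel spectrogram stacked with its first- and second-order deltas), using librosa; the sampling rate, number of Mel bands, and other parameters are assumptions for the sake of the example, not values taken from the paper.

```python
# Sketch of the three-channel input: a log-Mel spectrogram stacked with
# its deltas and delta-deltas. Sampling rate and Mel-band count are
# illustrative assumptions, not values reported in the paper.
import numpy as np
import librosa

def log_mel_with_deltas(path, sr=16000, n_mels=64):
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)                 # (n_mels, frames)
    delta = librosa.feature.delta(log_mel)             # first-order deltas
    delta2 = librosa.feature.delta(log_mel, order=2)   # delta-deltas
    return np.stack([log_mel, delta, delta2])          # (3, n_mels, frames)
```

Stacking the three channels yields an image-like tensor that a temporal convolutional network can consume directly, much like a three-channel image.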
Abstract As cyber threats take increasingly complex forms, there is widespread concern about how to strengthen active network-security defense systems by using the rapidly growing body of cyber threat intelligence. Based on the content analysis method, this paper introduces precision, recall rate, and timeliness rate measured along the time dimension, and analyzes threat intelligence providers from these three aspects. The validity of this method is verified by tests on massive sources of threat data; it improves the efficiency of CIF analysis and makes it easy to analyze and extract threat intelligence information quickly.
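As a rough illustration of the three indicators, the sketch below scores a provider's feed against a reference set. The abstract does not give exact definitions, so the formulas, and in particular the freshness window used for the timeliness rate, are assumptions.

```python
# Sketch of the three provider metrics named above: precision, recall,
# and a timeliness rate (share of true indicators reported within a
# freshness window). Definitions are assumed, not taken from the paper.
from datetime import timedelta

def evaluate_provider(reported, ground_truth, first_seen,
                      window=timedelta(days=1)):
    """reported: set of indicators the provider published;
    ground_truth: dict indicator -> time the threat actually appeared;
    first_seen: dict indicator -> time the provider reported it."""
    true_pos = reported & set(ground_truth)
    precision = len(true_pos) / len(reported) if reported else 0.0
    recall = len(true_pos) / len(ground_truth) if ground_truth else 0.0
    timely = [i for i in true_pos
              if first_seen[i] - ground_truth[i] <= window]
    timely_rate = len(timely) / len(true_pos) if true_pos else 0.0
    return precision, recall, timely_rate
```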
Funding: the European Union's Horizon 2020 Programme under Grant Agreement (826506, sustAGE).
Abstract Background Although frustration is a common emotional reaction while playing games, an excessive level of frustration can negatively impact a user's experience, discouraging them from further game interactions. The automatic detection of frustration can enable the development of adaptive systems that adapt a game to a user's specific needs through real-time difficulty adjustment, thereby optimizing the player's experience and guaranteeing game success. To this end, we present a speech-based approach for the automatic detection of frustration during game interactions, a task that remains underexplored in research. Method The experiments were performed on the Multimodal Game Frustration Database (MGFD), an audiovisual dataset, collected within the Wizard-of-Oz framework, that is specially tailored to investigate verbal and facial expressions of frustration during game interactions. We explored the performance of a variety of acoustic feature sets, including Mel-spectrograms, Mel-frequency cepstral coefficients (MFCCs), and the low-dimensional knowledge-based acoustic feature set eGeMAPS. Motivated by the continual improvements that convolutional neural networks (CNNs) have achieved in speech recognition tasks, and unlike the MGFD baseline, which is based on a Long Short-Term Memory (LSTM) architecture and a Support Vector Machine (SVM) classifier, in the present work we consider typical CNNs, including ResNet, VGG, and AlexNet. Furthermore, given the unresolved debate on the suitability of shallow and deep networks, we also examine the performance of two of the latest deep CNNs: WideResNet and EfficientNet. Results Our best result, achieved with WideResNet and Mel-spectrogram features, increases the system performance from 58.8% unweighted average recall (UAR) to 93.1% UAR for speech-based automatic frustration recognition.
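For reference, UAR, the metric quoted above, is the unweighted mean of the per-class recalls, so a minority class (e.g., frustrated speech) counts as much as the majority class. A minimal sketch using scikit-learn's macro-averaged recall, which is the same quantity:

```python
# UAR = mean of per-class recalls, i.e., macro-averaged recall.
from sklearn.metrics import recall_score

def uar(y_true, y_pred):
    return recall_score(y_true, y_pred, average="macro")

# Toy labels (0 = not frustrated, 1 = frustrated):
# class 0 recall = 1/2, class 1 recall = 2/3, so UAR ≈ 0.583.
print(uar([0, 0, 1, 1, 1], [0, 1, 1, 1, 0]))
```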