Journal Articles
404 articles found
1. Multi-Objective Equilibrium Optimizer for Feature Selection in High-Dimensional English Speech Emotion Recognition
Authors: Liya Yue, Pei Hu, Shu-Chuan Chu, Jeng-Shyang Pan. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 1957-1975 (19 pages)
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher score to sort the features extracted from signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them. Features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and MBEO is appropriate for high-dimensional English SER.
Keywords: speech emotion recognition; filter-wrapper; high-dimensional; feature selection; equilibrium optimizer; multi-objective
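The filter stage described in the abstract above (ranking acoustic features by information gain and Fisher score before the wrapper search) can be illustrated with a rough sketch. This is not the authors' code: the feature matrix, labels, and the way the two rankings are combined into a selection probability are placeholder assumptions.

```python
# Hypothetical sketch of the filter stage: rank features by Fisher score
# and information gain, then combine the two rankings.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fisher_scores(X, y):
    """Fisher score per feature: between-class variance / within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))        # placeholder acoustic feature matrix
y = rng.integers(0, 4, size=200)      # placeholder emotion labels

fs = fisher_scores(X, y)
ig = mutual_info_classif(X, y, random_state=0)   # information-gain estimate

# Combine the two filter criteria into a single selection probability
# (one plausible choice; the paper's multi-objective ranking differs).
rank = np.argsort(np.argsort(-fs)) + np.argsort(np.argsort(-ig))
selection_prob = 1.0 - rank / rank.max()
print(np.argsort(-selection_prob)[:10])          # ten highest-priority features
```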
2. Exploring Sequential Feature Selection in Deep Bi-LSTM Models for Speech Emotion Recognition
Authors: Fatma Harby, Mansor Alohali, Adel Thaljaoui, Amira Samy Talaat. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 2689-2719 (31 pages)
Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker's emotional state. The examination of the emotional states of speakers holds significant importance in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture the periodic nature of audio signals effectively. Although these traits may improve the ability to perceive and interpret emotional depictions appropriately, MFCCs have some limitations. This study therefore aims to tackle this issue by systematically picking multiple audio cues, enhancing the classifier model's efficacy in accurately discerning human emotions. The utilized dataset is taken from the EMO-DB database. Input speech is preprocessed using a 2D Convolutional Neural Network (CNN), which applies convolutional operations to spectrograms, as they afford a visual representation of how the frequency content of the audio signal changes over time. The next step is spectrogram data normalization, which is crucial for Neural Network (NN) training as it aids in faster convergence. Then the five auditory features MFCCs, Chroma, Mel-spectrogram, Contrast, and Tonnetz are extracted from the spectrogram sequentially. The aim of feature selection is to retain only dominant features by excluding the irrelevant ones. In this paper, the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) techniques were employed to select among these multiple audio cues. Finally, the feature sets composed from the hybrid feature extraction methods are fed into a deep Bidirectional Long Short-Term Memory (Bi-LSTM) network to discern emotions. Since the deep Bi-LSTM can hierarchically learn complex features and increase model capacity through more robust temporal modeling, it is more effective than a shallow Bi-LSTM in capturing the intricate tones of emotional content present in speech signals. The effectiveness and resilience of the proposed SER model were evaluated by experiments comparing it to state-of-the-art SER techniques. The results indicated that the model achieved accuracy rates of 90.92%, 93%, and 92% over the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Berlin Database of Emotional Speech (EMO-DB), and Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings signify a prominent enhancement in the ability to identify emotional depictions in speech, showcasing the potential of the proposed model in advancing the SER field.
Keywords: artificial intelligence application; multi features; sequential selection; speech emotion recognition; deep Bi-LSTM
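As a minimal illustration of the sequential forward/backward selection step described above, the sketch below applies scikit-learn's SequentialFeatureSelector to a placeholder matrix of pooled audio-feature statistics; the feature layout, the K-nearest-neighbor scorer, and the target subset size are assumptions rather than the paper's exact pipeline.

```python
# Hypothetical SFS/SBS sketch over a pooled audio-feature matrix.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))     # placeholder: pooled MFCC/Chroma/Mel/Contrast/Tonnetz stats
y = rng.integers(0, 7, size=300)   # placeholder emotion labels (e.g., 7 EMO-DB classes)

estimator = KNeighborsClassifier(n_neighbors=5)

sfs = SequentialFeatureSelector(estimator, n_features_to_select=10,
                                direction="forward", cv=3)
sbs = SequentialFeatureSelector(estimator, n_features_to_select=10,
                                direction="backward", cv=3)

sfs.fit(X, y)
sbs.fit(X, y)
print("SFS keeps:", np.flatnonzero(sfs.get_support()))
print("SBS keeps:", np.flatnonzero(sbs.get_support()))
```

In practice, the selected column subset would then be fed to the Bi-LSTM classifier instead of the KNN used here purely for scoring.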
3. Multilayer Neural Network Based Speech Emotion Recognition for Smart Assistance (cited 2 times)
Authors: Sandeep Kumar, Mohd Anul Haq, Arpit Jain, C. Andy Jason, Nageswara Rao Moparthi, Nitin Mittal, Zamil S. Alzamil. Computers, Materials & Continua (SCIE, EI), 2023, Issue 1, pp. 1523-1540 (18 pages)
Day by day, biometric-based systems play an increasingly vital role in our daily lives. This paper proposes an intelligent assistant intended to identify emotions via voice messages. A biometric system has been developed to detect human emotions based on voice recognition and to control a few electronic peripherals for alert actions. The proposed smart assistant aims to provide support to people through buzzer and light emitting diode (LED) alert signals, and it also keeps track of places such as households, hospitals, and remote areas. The proposed approach is able to detect seven emotions: worry, surprise, neutral, sadness, happiness, hate, and love. The key element in the implementation of speech emotion recognition is voice processing; once the emotion is recognized, the machine interface automatically triggers the corresponding buzzer and LED actions. The proposed system is trained and tested on various benchmark datasets, i.e., the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Acoustic-Phonetic Continuous Speech Corpus (TIMIT), and the Emotional Speech Database (Emo-DB), and evaluated based on various parameters, i.e., accuracy, error rate, and time. Compared with existing technologies, the proposed algorithm gave a lower error rate and less time: the error rate and time are decreased by 19.79% and 5.13 s for the RAVDESS dataset, 15.77% and 0.01 s for the Emo-DB dataset, and 14.88% and 3.62 s for the TIMIT database. The proposed model shows better accuracy of 81.02% for the RAVDESS dataset, 84.23% for the TIMIT dataset, and 85.12% for the Emo-DB dataset compared to the Gaussian Mixture Model (GMM) and Support Vector Machine (SVM) models.
Keywords: speech emotion recognition; classifier implementation; feature extraction and selection; smart assistance
4. A Multi-Level Circulant Cross-Modal Transformer for Multimodal Speech Emotion Recognition (cited 1 time)
Authors: Peizhu Gong, Jin Liu, Zhongdai Wu, Bing Han, Y. Ken Wang, Huihua He. Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 4203-4220 (18 pages)
Speech emotion recognition, as an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, due to its inclusion of the semantic features of two different modalities, i.e., audio and text. However, existing methods often fail to effectively represent features and capture correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model can be divided into three steps: feature extraction, interaction, and fusion. Self-supervised embedding models are introduced for feature extraction, which give a more powerful representation of the original data than those using spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs). In particular, MLCCT contains two types of feature interaction processes, where a bidirectional Long Short-Term Memory (Bi-LSTM) with a circulant interaction mechanism is proposed for low-level features, while a two-stream residual cross-modal Transformer block is applied when high-level features are involved. Finally, we choose self-attention blocks for fusion and a fully connected layer to make predictions. To evaluate the performance of our proposed model, comprehensive experiments are conducted on three widely used benchmark datasets, including IEMOCAP, MELD, and CMU-MOSEI. The competitive results verify the effectiveness of our approach.
Keywords: speech emotion recognition; self-supervised embedding model; cross-modal transformer; self-attention
5. Improved Speech Emotion Recognition Focusing on High-Level Data Representations and Swift Feature Extraction Calculation
Authors: Akmalbek Abdusalomov, Alpamis Kutlimuratov, Rashid Nasimov, Taeg Keun Whangbo. Computers, Materials & Continua (SCIE, EI), 2023, Issue 12, pp. 2915-2933 (19 pages)
The performance of a speech emotion recognition (SER) system is heavily influenced by the efficacy of its feature extraction techniques. The study was designed to advance the field of SER by optimizing feature extraction techniques, specifically through the incorporation of high-resolution Mel-spectrograms and the expedited calculation of Mel Frequency Cepstral Coefficients (MFCC). This initiative aimed to refine the system's accuracy by identifying and mitigating the shortcomings commonly found in current approaches. Ultimately, the primary objective was to elevate both the intricacy and effectiveness of our SER model, with a focus on augmenting its proficiency in the accurate identification of emotions in spoken language. The research employed a dual-strategy approach for feature extraction. Firstly, a rapid computation technique for MFCC was implemented and integrated with a Bi-LSTM layer to optimize the encoding of MFCC features. Secondly, a pretrained ResNet model was utilized in conjunction with feature stats pooling and dense layers for the effective encoding of Mel-spectrogram attributes. These two sets of features underwent separate processing before being combined in a Convolutional Neural Network (CNN) outfitted with a dense layer, with the aim of enhancing their representational richness. The model was rigorously evaluated using two prominent databases: CMU-MOSEI and RAVDESS. Notable findings include an accuracy rate of 93.2% on the CMU-MOSEI database and 95.3% on the RAVDESS database. Such exceptional performance underscores the efficacy of this innovative approach, which not only meets but also exceeds the accuracy benchmarks established by traditional models in the field of speech emotion recognition.
Keywords: feature extraction; MFCC; ResNet; speech emotion recognition
6. Design of Hierarchical Classifier to Improve Speech Emotion Recognition
Author: P. Vasuki. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 1, pp. 19-33 (15 pages)
Automatic Speech Emotion Recognition (SER) is used to recognize emotion from speech automatically. Speech emotion recognition works well in a laboratory environment, but real-time emotion recognition is affected by variations in the gender, age, and cultural and acoustical background of the speaker. The acoustical resemblance between emotional expressions further increases the complexity of recognition. Many recent research works concentrate on addressing these effects individually. Instead of addressing every influencing attribute individually, we would like to design a system that reduces the effect arising from any of these factors. We propose a two-level hierarchical classifier named Interpreter of Responses (IR). The first level of IR has been realized using Support Vector Machine (SVM) and Gaussian Mixture Model (GMM) classifiers. In the second level of IR, a discriminative SVM classifier has been trained and tested with the meta-information of the first-level classifiers along with the input acoustical feature vector used in the primary classifiers. To train the system with a corpus of versatile nature, an integrated emotion corpus has been composed using emotion samples from five speech corpora, namely EMO-DB, IITKGP-SESC, the SAVEE corpus, the Spanish emotion corpus, and CMU's Woogle corpus. The hierarchical classifier has been trained and tested using MFCC and Low-Level Descriptors (LLD). The empirical analysis shows that the proposed classifier outperforms the traditional classifiers. The proposed ensemble design is very generic and can be adapted even when the number and nature of features change. The first-level classifiers, GMM or SVM, may be replaced with any other learning algorithm.
Keywords: speech emotion recognition; hierarchical classifier design; ensemble; emotion speech corpora
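A minimal sketch of the two-level idea described above, assuming placeholder data: first-level SVM and GMM classifiers produce meta-information (class probabilities and per-class log-likelihoods), which is concatenated with the original acoustic feature vector and passed to a second-level discriminative SVM. Hyper-parameters and features are illustrative, not the paper's configuration.

```python
# Hypothetical two-level "Interpreter of Responses"-style stack.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 30))       # placeholder MFCC/LLD vectors
y = rng.integers(0, 5, size=400)     # placeholder emotion labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=2)

# Level 1a: one GMM per emotion, scored by log-likelihood.
gmms = {c: GaussianMixture(n_components=2, random_state=2).fit(X_tr[y_tr == c])
        for c in np.unique(y_tr)}
def gmm_scores(X):
    return np.column_stack([gmms[c].score_samples(X) for c in sorted(gmms)])

# Level 1b: SVM with class probabilities.
svm1 = SVC(probability=True, random_state=2).fit(X_tr, y_tr)

# Level 2: discriminative SVM over [original features | level-1 meta-information].
meta_tr = np.hstack([X_tr, gmm_scores(X_tr), svm1.predict_proba(X_tr)])
meta_te = np.hstack([X_te, gmm_scores(X_te), svm1.predict_proba(X_te)])
svm2 = SVC(random_state=2).fit(meta_tr, y_tr)
print("second-level accuracy:", svm2.score(meta_te, y_te))
```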
7. Using Speaker-Specific Emotion Representations in Wav2vec 2.0-Based Modules for Speech Emotion Recognition
Authors: Somin Park, Mpabulungi Mark, Bogyung Park, Hyunki Hong. Computers, Materials & Continua (SCIE, EI), 2023, Issue 10, pp. 1009-1030 (22 pages)
Speech emotion recognition is essential for frictionless human-machine interaction, where machines respond to human instructions with context-aware actions. The properties of individuals' voices vary with culture, language, gender, and personality. These variations in speaker-specific properties may hamper the performance of standard representations in downstream tasks such as speech emotion recognition (SER). This study demonstrates the significance of speaker-specific speech characteristics and how considering them can be leveraged to improve the performance of SER models. In the proposed approach, two wav2vec-based modules (a speaker-identification network and an emotion classification network) are trained with the ArcFace loss. The speaker-identification network has a single attention block to encode an input audio waveform into a speaker-specific representation. The emotion classification network uses a wav2vec 2.0 backbone as well as four attention blocks to encode the same input audio waveform into an emotion representation. These two representations are then fused into a single vector representation containing emotion and speaker-specific information. Experimental results showed that the use of speaker-specific characteristics improves SER performance. Additionally, combining these with an angular margin loss such as the ArcFace loss improves intra-class compactness while increasing inter-class separability, as demonstrated by plots of t-distributed stochastic neighbor embeddings (t-SNE). The proposed approach outperforms previous methods using similar training strategies, with a weighted accuracy (WA) of 72.14% and unweighted accuracy (UA) of 72.97% on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset. This demonstrates its effectiveness and potential to enhance human-machine interaction through more accurate emotion recognition in speech.
Keywords: attention block; IEMOCAP dataset; speaker-specific representation; speech emotion recognition; wav2vec 2.0
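The additive angular margin (ArcFace-style) loss mentioned above can be sketched in PyTorch as follows. This is a generic implementation of that loss with assumed scale and margin values, not the authors' training code.

```python
# Hypothetical ArcFace-style additive angular margin loss (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceLoss(nn.Module):
    def __init__(self, embed_dim, num_classes, scale=30.0, margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embed_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = scale, margin

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalised embeddings and class weights.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the angular margin only to the target-class logit.
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos)
        return F.cross_entropy(self.s * logits, labels)

# Usage with assumed dimensions (e.g., speaker or emotion embeddings).
loss_fn = ArcFaceLoss(embed_dim=256, num_classes=4)
emb = torch.randn(8, 256)
lbl = torch.randint(0, 4, (8,))
print(loss_fn(emb, lbl).item())
```

In a setup like the one described, one instance of such a loss would supervise the speaker-identification embeddings and another the emotion embeddings.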
8. Performance Analysis of a Chunk-Based Speech Emotion Recognition Model Using RNN
Authors: Hyun-Sam Shin, Jun-Ki Hong. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 4, pp. 235-248 (14 pages)
Recently, artificial-intelligence-based automatic customer response systems have been widely used instead of customer service representatives. It is therefore important for automatic customer service to promptly recognize emotions in a customer's voice and provide the appropriate service accordingly. We analyzed the emotion recognition (ER) accuracy as a function of the simulation time using the proposed chunk-based speech ER (CSER) model. The proposed CSER model divides voice signals into 3-s long chunks to efficiently recognize the emotions inherent in the customer's voice. We evaluated the ER performance on voice signal chunks by applying four RNN techniques, namely long short-term memory (LSTM), bidirectional LSTM, gated recurrent units (GRU), and bidirectional GRU, to the proposed CSER model individually to assess its ER accuracy and time efficiency. The results reveal that GRU shows the best time efficiency in recognizing emotions from speech signals in terms of accuracy as a function of simulation time.
Keywords: RNN; speech emotion recognition; attention mechanism; time efficiency
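A rough sketch of the chunking idea described above, assuming 16 kHz audio and log-Mel frame features: the waveform is split into 3-second chunks, each chunk's feature sequence is classified by a GRU, and the chunk-level predictions are averaged for the utterance. All sizes are placeholders.

```python
# Hypothetical chunk-based ER sketch: 3-second chunks -> GRU -> averaged prediction.
import torch
import torch.nn as nn

SR = 16000                      # assumed sampling rate
CHUNK = 3 * SR                  # 3-second chunks, as in the paper

def split_into_chunks(wave: torch.Tensor, chunk_len: int = CHUNK):
    """Split a 1-D waveform into fixed-length chunks, dropping the remainder."""
    n = wave.numel() // chunk_len
    return wave[: n * chunk_len].reshape(n, chunk_len)

class ChunkGRU(nn.Module):
    def __init__(self, n_mels=40, hidden=64, n_emotions=4):
        super().__init__()
        self.gru = nn.GRU(n_mels, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_emotions)

    def forward(self, feats):            # feats: (chunks, frames, n_mels)
        _, h = self.gru(feats)
        return self.fc(h[-1])            # one logit vector per chunk

wave = torch.randn(10 * SR)              # placeholder 10-second utterance
chunks = split_into_chunks(wave)          # (3, 48000)
# Placeholder frame-level features per chunk (e.g., log-Mel frames).
feats = torch.randn(chunks.size(0), 300, 40)
model = ChunkGRU()
utterance_logits = model(feats).mean(dim=0)   # average chunk-level predictions
print(utterance_logits.argmax().item())
```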
9. Enhancing Human-Machine Interaction: Real-Time Emotion Recognition through Speech Analysis
Authors: Dominik Esteves de Andrade, Rüdiger Buchkremer. Journal of Computer Science Research, 2023, Issue 3, pp. 22-45 (24 pages)
Humans, as intricate beings driven by a multitude of emotions, possess a remarkable ability to decipher and respond to socio-affective cues. However, many individuals and machines struggle to interpret such nuanced signals, including variations in tone of voice. This paper explores the potential of intelligent technologies to bridge this gap and improve the quality of conversations. In particular, the authors propose a real-time processing method that captures and evaluates emotions in speech, utilizing a terminal device such as the Raspberry Pi computer. Furthermore, the authors provide an overview of the current research landscape surrounding speech emotion recognition and delve into their methodology, which involves analyzing audio files from renowned emotional speech databases. To aid comprehension, the authors present visualizations of these audio files in situ, employing dB-scaled Mel spectrograms generated through TensorFlow and Matplotlib. The authors use a support vector machine kernel and a Convolutional Neural Network with transfer learning to classify emotions. Notably, the classification accuracies achieved are 70% and 77%, respectively, demonstrating the efficacy of the approach when executed on an edge device rather than relying on a server. The system can evaluate pure emotion in speech and provide corresponding visualizations depicting the speaker's emotional state in less than one second on a Raspberry Pi. These findings pave the way for more effective and emotionally intelligent human-machine interactions in various domains.
Keywords: speech emotion recognition; edge computing; real-time computing; Raspberry Pi
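The dB-scaled Mel spectrograms mentioned above can be generated in a few lines. The sketch below uses librosa and Matplotlib rather than the TensorFlow pipeline the authors describe, and the file name and spectrogram parameters are placeholders.

```python
# Hypothetical dB-scaled Mel spectrogram sketch (librosa + Matplotlib).
import librosa
import librosa.display
import matplotlib.pyplot as plt

path = "emotion_sample.wav"           # placeholder audio file
y, sr = librosa.load(path, sr=16000)

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=256, n_mels=64)
mel_db = librosa.power_to_db(mel, ref=mel.max())   # dB scaling

fig, ax = plt.subplots(figsize=(6, 3))
img = librosa.display.specshow(mel_db, sr=sr, hop_length=256,
                               x_axis="time", y_axis="mel", ax=ax)
fig.colorbar(img, ax=ax, format="%+2.0f dB")
ax.set_title("dB-scaled Mel spectrogram")
fig.tight_layout()
fig.savefig("mel_spectrogram.png")    # could be rendered on the Pi display instead
```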
10. A Multi-Modal Deep Learning Approach for Emotion Recognition
Authors: H. M. Shahzad, Sohail Masood Bhatti, Arfan Jaffar, Muhammad Rashid. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 5, pp. 1561-1570 (10 pages)
In recent years, research on facial expression recognition (FER) under masks has been trending. Wearing a mask for protection from COVID-19 has become a compulsion, and it hides facial expressions, which is why FER under a mask is a difficult task. The prevailing unimodal techniques for facial recognition do not deliver good results for the masked face; however, a multi-modal technique can be employed to generate better results. We propose a multi-modal methodology based on deep learning for facial recognition under a masked face using facial and vocal expressions. The multimodal network has been trained on facial and vocal datasets. We have used two kinds of standard datasets: M-LFW for the masked face data, and the CREMA-D and TESS datasets for vocal expressions. The vocal expressions are in the form of audio while the face data are in image form, which is why the data are heterogeneous. In order to make the data homogeneous, the voice data are converted into images by taking spectrograms. A spectrogram embeds important features of the voice and converts the audio format into images. Later, the dataset is passed to the multimodal neural network for training, and the experimental results demonstrate that the proposed multimodal algorithm outperforms unimodal methods and other state-of-the-art deep neural network models.
Keywords: deep learning; facial expression recognition; multi-modal neural network; speech emotion recognition; spectrogram; COVID-19
11. TC-Net: A Modest & Lightweight Emotion Recognition System Using Temporal Convolution Network
Authors: Muhammad Ishaq, Mustaqeem Khan, Soonil Kwon. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 9, pp. 3355-3369 (15 pages)
Speech signals play an essential role in communication and provide an efficient way to exchange information between humans and machines. Speech Emotion Recognition (SER) is one of the critical sources for human evaluation, which is applicable in many real-world applications such as healthcare, call centers, robotics, safety, and virtual reality. This work developed a novel TCN-based emotion recognition system that uses speech signals through a spatial-temporal convolution network to recognize the speaker's emotional state. The authors designed a Temporal Convolutional Network (TCN) core block to recognize long-term dependencies in speech signals and then feed these temporal cues to a dense network to fuse the spatial features and recognize global information for final classification. The proposed network extracts valid sequential cues automatically from speech signals and performs better than state-of-the-art (SOTA) and traditional machine learning algorithms. Results of the proposed method show a high recognition rate compared with SOTA methods. The final unweighted accuracies of 80.84% and 92.31% for the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and Berlin Emotional Database (EMO-DB) datasets indicate the robustness and efficiency of the designed model.
Keywords: affective computing; deep learning; emotion recognition; speech signal; temporal convolutional network
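A minimal sketch of a TCN-style core block as described above: stacked dilated causal 1-D convolutions with a residual connection over a frame-level feature sequence, followed by pooling and a dense classification head. Layer sizes and the dilation schedule are assumptions, not the exact TC-Net architecture.

```python
# Hypothetical dilated temporal-convolution residual block (PyTorch).
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation    # causal padding amount
        self.conv1 = nn.Conv1d(channels, channels, kernel_size,
                               padding=pad, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size,
                               padding=pad, dilation=dilation)
        self.relu = nn.ReLU()
        self.pad = pad

    def forward(self, x):                      # x: (batch, channels, frames)
        out = self.relu(self.conv1(x)[..., :-self.pad])   # trim to stay causal
        out = self.relu(self.conv2(out)[..., :-self.pad])
        return self.relu(out + x)              # residual connection

class TinyTCN(nn.Module):
    def __init__(self, in_dim=40, channels=64, n_emotions=4):
        super().__init__()
        self.inp = nn.Conv1d(in_dim, channels, 1)
        self.blocks = nn.Sequential(*[TemporalBlock(channels, dilation=d)
                                      for d in (1, 2, 4, 8)])
        self.head = nn.Linear(channels, n_emotions)

    def forward(self, feats):                  # feats: (batch, frames, in_dim)
        h = self.blocks(self.inp(feats.transpose(1, 2)))
        return self.head(h.mean(dim=-1))       # global temporal pooling

logits = TinyTCN()(torch.randn(2, 300, 40))
print(logits.shape)                            # torch.Size([2, 4])
```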
12. Speech Emotion Recognition Using Modified Quadratic Discrimination Function (cited 9 times)
Authors: Zhao Yan, Zhao Li, Zou Cairong, Yu Yinhua. Journal of Electronics (China), 2008, Issue 6, pp. 840-844 (5 pages)
The Quadratic Discrimination Function (QDF) is commonly used in speech emotion recognition, which proceeds on the premise that the input data are normally distributed. In this paper, we propose a transformation to normalize the emotional features, and then derive a Modified QDF (MQDF) for speech emotion recognition. Features based on prosody and voice quality are extracted, and a Principal Component Analysis Neural Network (PCANN) is used to reduce the dimension of the feature vectors. The results show that voice quality features are an effective supplement for recognition, and that the method in this paper can improve the recognition rate effectively.
Keywords: speech emotion recognition; neural network; component analysis; quadratic discrimination function
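For reference, a plain quadratic discriminant function scores a feature vector x against each emotion class i as g_i(x) = (x - mu_i)^T Sigma_i^{-1} (x - mu_i) + ln|Sigma_i| - 2 ln P(omega_i), choosing the class with the smallest value. The numpy sketch below implements this baseline QDF on synthetic data; the paper's modification and normalization transformation are not reproduced here.

```python
# Hypothetical baseline QDF scoring sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 12))          # placeholder prosody/voice-quality features
y = rng.integers(0, 4, size=300)        # placeholder emotion labels

params = {}
for c in np.unique(y):
    Xc = X[y == c]
    params[c] = (Xc.mean(axis=0),
                 np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1]),
                 len(Xc) / len(X))

def qdf_predict(x):
    scores = {}
    for c, (mu, cov, prior) in params.items():
        diff = x - mu
        scores[c] = (diff @ np.linalg.inv(cov) @ diff
                     + np.log(np.linalg.det(cov)) - 2 * np.log(prior))
    return min(scores, key=scores.get)   # smallest discriminant value wins

print(qdf_predict(X[0]))
```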
13. Emotional Speech Recognition Based on SVM with GMM Supervector (cited 1 time)
Authors: Chen Yanxiang, Xie Jian. Journal of Electronics (China), 2012, Issue 3, pp. 339-344 (6 pages)
Emotion recognition from speech is an important field of research in human-computer interaction. In this letter, the framework of Support Vector Machines (SVM) with a Gaussian Mixture Model (GMM) supervector is introduced for emotional speech recognition. Because of the importance of variance in reflecting the distribution of speech, normalized mean vectors with the potential to exploit the information from the variance are adopted to form the GMM supervector. Comparative experiments from five aspects are conducted to study their corresponding effects on system performance. The experimental results, which indicate that the influence of the number of mixtures is strong while the influence of duration is weak, provide a basis for the training set selection of the Universal Background Model (UBM).
Keywords: emotional speech recognition; Support Vector Machines (SVM); Gaussian Mixture Model (GMM) supervector; Universal Background Model (UBM)
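The GMM supervector construction described above (stacking the adapted, variance-normalized component means of an utterance-level GMM into one long vector for the SVM) can be sketched roughly as follows; the UBM size, relevance factor, and normalization are illustrative choices rather than the letter's exact setup.

```python
# Hypothetical GMM-supervector sketch: UBM + relevance-MAP mean adaptation.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Placeholder: a list of utterances, each a (frames, dims) matrix of MFCC-like features.
utterances = [rng.normal(size=(rng.integers(80, 120), 13)) for _ in range(20)]

ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=4)
ubm.fit(np.vstack(utterances))          # Universal Background Model on pooled frames

def supervector(frames, relevance=16.0):
    """Stack MAP-adapted, variance-normalised component means into one vector."""
    post = ubm.predict_proba(frames)                     # (frames, components)
    n_k = post.sum(axis=0)                               # soft counts
    f_k = post.T @ frames                                # first-order statistics
    alpha = (n_k / (n_k + relevance))[:, None]
    adapted = alpha * (f_k / (n_k[:, None] + 1e-9)) + (1 - alpha) * ubm.means_
    normed = adapted / np.sqrt(ubm.covariances_)         # per-dimension normalisation
    return normed.ravel()                                # (components * dims,)

sv = np.vstack([supervector(u) for u in utterances])     # SVM input, one row per utterance
print(sv.shape)                                          # (20, 104)
```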
14. Self-attention transfer networks for speech emotion recognition (cited 3 times)
Authors: Ziping Zhao, Keru Wang, Zhongtian Bao, Zixing Zhang, Nicholas Cummins, Shihuang Sun, Haishuai Wang, Jianhua Tao, Björn W. Schuller. Virtual Reality & Intelligent Hardware, 2021, Issue 1, pp. 43-54 (12 pages)
Background: A crucial element of human-machine interaction, the automatic detection of emotional states from human speech has long been regarded as a challenging task for machine learning models. One vital challenge in speech emotion recognition (SER) is learning robust and discriminative representations from speech. Although machine learning methods have been widely applied in SER research, the inadequate amount of available annotated data has become a bottleneck impeding the extended application of such techniques (e.g., deep neural networks). To address this issue, we present a deep learning method that combines knowledge transfer and self-attention for SER tasks. Herein, we apply the log-Mel spectrogram with deltas and delta-deltas as inputs. Moreover, given that emotions are time dependent, we apply temporal convolutional neural networks to model the variations in emotions. We further introduce an attention transfer mechanism, which is based on a self-attention algorithm, to learn long-term dependencies. The self-attention transfer network (SATN) in our proposed approach takes advantage of attention transfer to learn attention from speech recognition, followed by transferring this knowledge into SER. An evaluation built on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset demonstrates the effectiveness of the proposed model.
Keywords: speech emotion recognition; attention transfer; self-attention; temporal convolutional neural networks (TCNs)
15. Multi-scale discrepancy adversarial network for cross-corpus speech emotion recognition (cited 2 times)
Authors: Wanlu Zheng, Wenming Zheng, Yuan Zong. Virtual Reality & Intelligent Hardware, 2021, Issue 1, pp. 65-75 (11 pages)
Background: One of the most critical issues in human-computer interaction applications is recognizing human emotions based on speech. In recent years, the challenging problem of cross-corpus speech emotion recognition (SER) has generated extensive research. Nevertheless, the domain discrepancy between training data and testing data remains a major challenge to achieving improved system performance. Methods: This paper introduces a novel multi-scale discrepancy adversarial (MSDA) network for conducting multiple-timescale domain adaptation for cross-corpus SER, i.e., integrating domain discriminators of hierarchical levels into the emotion recognition framework to mitigate the gap between the source and target domains. Specifically, we extract two kinds of speech features, i.e., handcrafted features and deep features, from three timescales at the global, local, and hybrid levels. In each timescale, the domain discriminator and the feature extractor compete against each other to learn features that minimize the discrepancy between the two domains by fooling the discriminator. Results: Extensive experiments on cross-corpus and cross-language SER were conducted on a combination dataset that combines one Chinese dataset and two English datasets commonly used in SER. The MSDA benefits from the strong discriminative power provided by the adversarial process, where three discriminators work in tandem with an emotion classifier. Accordingly, the MSDA achieves the best performance over all other baseline methods. Conclusions: The proposed architecture was tested on a combination of one Chinese and two English datasets. The experimental results demonstrate the superiority of our powerful discriminative model for solving cross-corpus SER.
Keywords: human-computer interaction; cross-corpus speech emotion recognition; hierarchical discriminators; domain adaptation
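The adversarial interplay between the feature extractor and the domain discriminators described above is commonly implemented with a gradient reversal layer; the PyTorch sketch below shows a single timescale with placeholder networks, not the authors' MSDA code.

```python
# Hypothetical gradient-reversal sketch for domain-adversarial training (PyTorch).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # flip the gradient sign

feature_extractor = nn.Sequential(nn.Linear(40, 64), nn.ReLU())
emotion_head = nn.Linear(64, 4)               # emotion classes
domain_head = nn.Linear(64, 2)                # source vs. target corpus

x = torch.randn(16, 40)                        # placeholder segment features
emo_y = torch.randint(0, 4, (16,))
dom_y = torch.randint(0, 2, (16,))

feat = feature_extractor(x)
loss = nn.functional.cross_entropy(emotion_head(feat), emo_y) \
     + nn.functional.cross_entropy(domain_head(GradReverse.apply(feat, 1.0)), dom_y)
loss.backward()                                # extractor learns to fool the domain head
print(loss.item())
```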
16. Feature Optimization of Speech Emotion Recognition
Authors: Chunxia Yu, Ling Xie, Weiping Hu. Journal of Biomedical Science and Engineering, 2016, Issue 10, pp. 37-43 (8 pages)
Speech emotion is divided into four categories, Fear, Happy, Neutral and Surprise, in this paper. Traditional features and their statistics are generally applied to recognize speech emotion. In order to quantify each feature's contribution to emotion recognition, a method based on the Back Propagation (BP) neural network is adopted. Then we can obtain the optimal subset of the features. What's more, two new characteristics of speech emotion, the MFCC feature extracted from the fundamental frequency curve (MFCCF0) and amplitude perturbation parameters extracted from the short-time average magnitude curve (APSAM), are added to the selected features. With the Gaussian Mixture Model (GMM), we get the highest average recognition rate of the four emotions of 82.25%, and a recognition rate for Neutral of 90%.
Keywords: speech emotion recognition; feature selection; feature extraction; BP neural network; GMM
17. Towards Realizing Sign Language to Emotional Speech Conversion by Deep Learning
Authors: Nan Song, Hongwu Yang, Pengpeng Zhi. 《国际计算机前沿大会会议论文集》, 2018, Issue 2, pp. 34-34 (1 page)
Keywords: sign language recognition; facial expression recognition; deep neural network; emotional speech synthesis; sign language to speech conversion
18. A Dual-Feature Speech Emotion Recognition Fusion Algorithm Based on the Wavelet Scattering Transform and MFCC
Authors: 应娜, 吴顺朋, 杨萌, 邹雨鉴. 《电信科学》, Peking University Core Journal, 2024, Issue 5, pp. 62-72 (11 pages)
To fully exploit the emotional information contained in the spectrum of speech signals and improve the accuracy of speech emotion recognition, a speech emotion recognition fusion algorithm (PEW-BAR) based on the wavelet scattering transform and Mel-frequency cepstral coefficients (MFCC), using permutation-entropy weighting and a bias adjustment rule, is proposed. The algorithm first obtains the wavelet scattering features of the speech signal and the MFCC-related features. It then expands the wavelet scattering features along the scale dimension, uses support vector machines to obtain the posterior probabilities of emotion recognition together with the corresponding permutation entropy, and weights the posterior probabilities with the permutation entropy. Finally, a bias adjustment rule is applied to further fuse the recognition results of the MFCC-related features. Experimental results show that, on the EMO-DB, RAVDESS, and eNTERFACE05 datasets, compared with traditional speech emotion recognition methods based on wavelet scattering coefficients, the algorithm improves ACC by 2.82%, 2.85%, and 5.92% and UAR by 3.40%, 2.87%, and 5.80%, respectively, and achieves a 6.89% improvement on IEMOCAP.
Keywords: speech emotion recognition; wavelet scattering transform; permutation entropy; MFCC; model fusion
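The permutation-entropy weighting step described above requires computing the permutation entropy of a score sequence (here, SVM posterior probabilities across scattering scales). A small numpy sketch of the entropy itself is given below; the embedding order and the entropy-to-weight mapping are illustrative assumptions, not the paper's exact rule.

```python
# Hypothetical permutation-entropy sketch (used above to weight SVM posteriors).
import math
from collections import Counter
import numpy as np

def permutation_entropy(series, order=3, normalize=True):
    """Shannon entropy of ordinal patterns of length `order`."""
    series = np.asarray(series)
    patterns = [tuple(np.argsort(series[i:i + order]))
                for i in range(len(series) - order + 1)]
    counts = np.array(list(Counter(patterns).values()), dtype=float)
    p = counts / counts.sum()
    h = -(p * np.log2(p)).sum()
    return h / math.log2(math.factorial(order)) if normalize else h

posteriors = np.array([0.10, 0.15, 0.40, 0.20, 0.35, 0.55, 0.30])  # placeholder scores
w = 1.0 - permutation_entropy(posteriors)   # lower entropy -> higher weight (one plausible rule)
print(round(w, 3))
```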
19. Effects of Music Therapy on Speech Recognition Ability and Negative Emotions in Patients with Presbycusis
Authors: 刘亚珍, 刘烨松, 仇顺锋. 《中国听力语言康复科学杂志》, 2024, Issue 3, pp. 290-293 (4 pages)
Objective: To analyze the effects of music therapy on negative emotions and speech recognition ability in patients with presbycusis. Methods: Eighty patients with presbycusis admitted to our hospital from January 2021 to March 2023 were enrolled and randomly divided into a control group (40 cases, speech recognition training) and a music group (40 cases, speech recognition training plus music therapy). Auditory function, negative emotions, cognitive function, and quality of life were compared between the two groups before and after training. Results: After training, the average hearing threshold and the Hearing Handicap Inventory for the Elderly-Screening (HHIE-S) scores of the music group were significantly lower than those of the control group (P<0.05); the Self-Rating Anxiety Scale (SAS) and Self-Rating Depression Scale (SDS) scores of the music group were significantly lower than those of the control group (P<0.05); the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) scores of the music group were significantly higher than those of the control group (P<0.05); and the scores of the music group on each dimension of the Generic Quality of Life Inventory-74 (GQOLI-74) (material, social, physical, psychological) were significantly higher than those of the control group (P<0.05). Conclusion: Music therapy for patients with presbycusis can significantly improve speech recognition ability, relieve negative emotions, and enhance cognitive function and quality of life.
Keywords: music therapy; speech recognition ability; negative emotions; quality of life; cognitive function
20. Construction and Evaluation of a Multimodal Mandarin Emotional Speech Database
Authors: 李良琦, 张雪英, 段淑斐, 肖仲喆, 贾海蓉, 梁慧芝. 《复旦学报(自然科学版)》, CAS, CSCD, Peking University Core Journal, 2024, Issue 1, pp. 18-31 (14 pages)
This paper designs and builds a multimodal Mandarin Chinese emotional speech database that includes articulatory kinematics, acoustics, glottal signals, and facial micro-expressions. The corpus design, participant selection, recording details, and data processing are described in detail. The signals are annotated with discrete emotion labels (neutral, pleasant, happy, indifferent, angry, sad, grieved) and dimensional emotion labels (pleasure, arousal, dominance). A statistical analysis of the dimensionally annotated data is carried out to verify the validity of the annotations; the annotators' SCL-90 scale data are also examined and analyzed in combination with the PAD annotation data to explore the intrinsic relationship between outliers in the annotations and the annotators' psychological state. To verify the speech quality and emotion discriminability of the database, three baseline models (SVM, CNN, and DNN) are used to compute the recognition rates of the seven emotions. The results show that the average recognition rate of the seven emotions reaches 82.56% when only acoustic data are used, 72.51% when only glottal data are used, and 55.67% when only kinematic data are used. The database is therefore of high quality and can serve as an important resource for speech analysis research, especially for multimodal emotional speech analysis tasks.
Keywords: emotional speech database; multimodal emotion recognition; dimensional emotion space; 3D electromagnetic articulograph; electroglottograph