Abstract
Speech emotion recognition is an important research direction in human-computer interaction, and effective feature extraction and fusion are key factors in improving its recognition rate. This paper proposes a speech emotion recognition algorithm that fuses deep features through a main-auxiliary network structure. First, segment features are fed into a BLSTM-Attention network as the main network, in which the attention mechanism focuses on the emotional information in the speech signal. Then, Mel-spectrogram features are fed into a convolutional neural network with Global Average Pooling (CNN-GAP) as the auxiliary network; GAP reduces the overfitting introduced by fully connected layers. Finally, the deep features extracted by the two networks are fused in main-auxiliary fashion, addressing the unsatisfactory recognition results caused by directly fusing features of different types. Experiments comparing four models on the IEMOCAP dataset show that main-auxiliary deep feature fusion improves both weighted accuracy (WA) and unweighted accuracy (UA) to varying degrees.
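The abstract's two architectural ingredients, GAP replacing a flatten-plus-fully-connected head and main-auxiliary feature fusion, can be sketched with numpy. All shapes below (64 channels, an 8×8 spatial grid, a 128-dim BLSTM-Attention vector) are illustrative assumptions, not values from the paper, and plain concatenation stands in for the paper's fusion step, whose exact form the abstract does not specify.

```python
import numpy as np

# Hypothetical CNN feature map from the Mel-spectrogram branch:
# 64 channels over an 8x8 spatial grid (assumed shapes, for illustration).
feature_map = np.random.rand(64, 8, 8)

# Global Average Pooling (GAP): average each channel's spatial grid down
# to a single scalar, yielding one compact 64-dim descriptor per utterance.
gap_features = feature_map.mean(axis=(1, 2))          # shape (64,)

# A flatten + fully connected head would instead expose 64*8*8 = 4096
# inputs to the classifier -- the overfitting risk that GAP avoids.
flattened = feature_map.reshape(-1)                   # shape (4096,)

# Main-auxiliary fusion (sketch): combine the BLSTM-Attention (main)
# vector with the CNN-GAP (auxiliary) vector; concatenation is used here
# as an illustrative stand-in for the paper's fusion mechanism.
main_features = np.random.rand(128)                   # assumed main-network dim
fused = np.concatenate([main_features, gap_features]) # shape (192,)

print(gap_features.shape, flattened.shape, fused.shape)
```

The 64-fold reduction in classifier inputs (4096 vs. 64 per branch) is what motivates GAP's regularizing effect mentioned in the abstract.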
Authors
HU Desheng; ZHANG Xueying; ZHANG Jing; LI Baoyun (College of Information and Computer, Taiyuan University of Technology, Taiyuan 030024, China)
Source
Journal of Taiyuan University of Technology (《太原理工大学学报》)
CAS
Peking University Core Journal (北大核心)
2021, No. 5, pp. 769-774 (6 pages)
Funding
National Natural Science Foundation of China (61371193)
Shanxi Scholarship Council Research Project for Returned Overseas Scholars (HGKY2019025)
Shanxi Graduate Education Innovation Project (2020BY130)
Keywords
speech emotion recognition
main-auxiliary network
long short-term memory
convolutional neural network