
An Ensemble Learning Method with Random Even Distribution
Abstract: As in traditional machine learning, an imbalanced sample distribution degrades the predictive ability of deep learning classifiers, and imbalanced emotion data are the norm in speech emotion recognition. Building on a convolutional recurrent neural network with an attention model, a random evenly distributed aggregation method (Redagging) is proposed to counter sample imbalance. Following the principle of equal opportunity, Redagging places training examples into sub-training samples randomly and with equal probability, improving the base classifiers by lowering the rate of duplicate examples and thereby strengthening the predictive ability of the aggregated classifier. Experiments on the IEMOCAP and EMODB emotion databases show that Redagging outperforms Bagging and other imbalance-learning methods in both unweighted average recall (UAR) and F1 score, confirming its effectiveness.
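The page carries only the abstract, so the paper's exact sampling procedure is not reproduced here. As a hedged sketch of the stated idea, under the assumption that Redagging deals shuffled copies of each class's examples evenly into the sub-samples instead of bootstrap-sampling with replacement, the functions below contrast the two schemes by their within-bag duplicate rates (`bootstrap_balanced`, `redagging_bags`, and `dup_rate` are illustrative names, not from the paper):

```python
import random

def bootstrap_balanced(class_indices, m, rng):
    """Bagging-style balanced bag: for each class, draw m indices
    with replacement, so repeated examples are common."""
    bag = []
    for idxs in class_indices:
        bag.extend(rng.choice(idxs) for _ in range(m))
    return bag

def redagging_bags(class_indices, m, k, rng):
    """Sketch of Redagging-style even assignment: for each class,
    concatenate shuffled copies of its index list and deal the stream
    into k bags, so every example is used an (almost) equal number of
    times and within-bag repeats are kept to the minimum possible."""
    bags = [[] for _ in range(k)]
    for idxs in class_indices:
        stream = []
        while len(stream) < m * k:          # enough shuffled copies
            order = list(idxs)
            rng.shuffle(order)
            stream.extend(order)
        for b in range(k):                  # deal m indices per bag
            bags[b].extend(stream[b * m:(b + 1) * m])
    return bags

def dup_rate(bag):
    """Fraction of entries in a bag that repeat an earlier entry."""
    return 1 - len(set(bag)) / len(bag)

# Toy imbalanced data: 1000 majority indices, 50 minority indices.
rng = random.Random(0)
classes = [list(range(1000)), list(range(1000, 1050))]
bag_bags = [bootstrap_balanced(classes, 200, rng) for _ in range(10)]
red_bags = redagging_bags(classes, 200, 10, rng)
bag_avg = sum(dup_rate(b) for b in bag_bags) / 10
red_avg = sum(dup_rate(b) for b in red_bags) / 10
```

In this toy setup the dealt majority-class chunks contain no repeats at all, while bootstrap sampling repeats a sizeable share of its draws, so `red_avg` comes out below `bag_avg` — the duplicate-rate reduction the abstract credits for the stronger base classifiers.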
Authors: Ai Xusheng, Sheng Shengli, Li Chunhua (College of Software and Service Outsourcing, Suzhou Vocational Institute of Industrial Technology, Suzhou 215104, Jiangsu, China; School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou 215009, Jiangsu, China)
Source: Computer Applications and Software (PKU Core Journal), 2022, No. 2, pp. 180-187, 200 (9 pages)
Funding: National Natural Science Foundation of China (Grant No. 61728205).
Keywords: imbalance learning; recurrent neural network; convolutional neural network; speech emotion recognition