

Cascaded projection of Gaussian mixture model for emotion recognition in speech and ECG signals
Abstract  A cascaded projection Gaussian mixture model (GMM) algorithm is proposed. First, the marginal distributions of the GMM are computed over different feature dimensions, and several sub-classifiers are constructed from these marginal models, each based on a different feature subset. A cascaded structure is used to fuse the sub-classifiers dynamically, giving the classifier the ability to adapt to individual samples. Second, the effectiveness of the algorithm is verified on electrocardiogram (ECG) and speech emotional signals; emotional data covering fidgetiness, happiness and sadness were collected through induction experiments. Finally, the extraction of emotional features is discussed, including heart rate variability, chaotic ECG features and utterance-level static features, and feature dimensionality reduction methods are studied, including principal component analysis, sequential forward selection, the Fisher discriminant ratio and the maximal information coefficient. The experimental results show that the proposed algorithm effectively improves emotion recognition accuracy in both scenarios.
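The record contains no code, but the abstract describes the two computational steps concretely enough for a rough illustration. Below is a minimal Python sketch, not the authors' implementation: it assumes scikit-learn and SciPy, fits one full-covariance GMM per emotion class on the full feature vector, builds sub-classifiers by marginalising each GMM onto hand-chosen feature index groups, and fuses their class log-likelihoods with fixed weights. The paper's dynamic cascaded fusion is replaced here by a static weighted sum, and all function and variable names (`fit_class_gmms`, `feature_groups`, etc.) are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp
from sklearn.mixture import GaussianMixture


def fit_class_gmms(X, y, n_components=4):
    """Fit one full-covariance GMM per emotion class on the full feature vector."""
    return {c: GaussianMixture(n_components=n_components,
                               covariance_type="full",
                               random_state=0).fit(X[y == c])
            for c in np.unique(y)}


def marginal_loglik(gmm, X_sub, dims):
    """Log-likelihood under the GMM marginalised onto the feature dimensions `dims`.

    For a Gaussian mixture, the marginal keeps the mixture weights and takes the
    corresponding sub-vectors / sub-blocks of each component's mean and covariance.
    """
    comp_ll = np.column_stack([
        multivariate_normal.logpdf(X_sub, mean=mu[dims], cov=cov[np.ix_(dims, dims)])
        for mu, cov in zip(gmm.means_, gmm.covariances_)
    ])
    return logsumexp(comp_ll + np.log(gmm.weights_), axis=1)


def cascade_predict(gmms, X, feature_groups, weights=None):
    """Fuse the sub-classifiers built from each feature group by a weighted sum of
    their class log-likelihoods (a static stand-in for the paper's dynamic cascade)."""
    weights = np.ones(len(feature_groups)) if weights is None else np.asarray(weights)
    classes = sorted(gmms)
    scores = np.zeros((len(X), len(classes)))
    for w, dims in zip(weights, feature_groups):
        X_sub = X[:, dims]
        for j, c in enumerate(classes):
            scores[:, j] += w * marginal_loglik(gmms[c], X_sub, dims)
    return np.asarray(classes)[np.argmax(scores, axis=1)]


# Hypothetical usage: group the feature indices by type (e.g. HRV, chaotic ECG,
# utterance-level statistics) and fuse the resulting sub-classifiers.
# gmms = fit_class_gmms(X_train, y_train)
# y_pred = cascade_predict(gmms, X_test, feature_groups=[[0, 1, 2], [3, 4], [5, 6, 7]])
```

The abstract also lists the Fisher discriminant ratio among the feature reduction methods. A common per-feature form, between-class variance of the class means over the pooled within-class variance, is sketched below as an assumption rather than the paper's exact definition; it can be used to rank features before fitting the GMM sub-classifiers.

```python
import numpy as np


def fisher_discriminant_ratio(X, y):
    """Per-feature Fisher discriminant ratio (one common multi-class form).

    Higher values indicate features that separate the emotion classes better.
    """
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / np.maximum(within, 1e-12)


# Hypothetical usage: keep the 20 highest-ranked features.
# top_k = np.argsort(fisher_discriminant_ratio(X_train, y_train))[::-1][:20]
```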
Source: Journal of Southeast University (English Edition), 2015, No. 3, pp. 320-326 (7 pages). Indexed in EI and CAS.
Funding: National Natural Science Foundation of China (Nos. 61231002, 61273266, 51075068, 61271359); Doctoral Fund of the Ministry of Education of China (No. 20110092130004).
Keywords: Gaussian mixture model; emotion recognition; sample adaptation; emotion inducing
