Multimodal spontaneous affect recognition using neural networks learned with hints

Abstract: A multimodal fusion classifier based on neural networks (NNs) learned with hints is presented for automatic spontaneous affect recognition. Since different channels can provide complementary information, features are utilized from four behavioral cues: frontal-view facial expression, profile-view facial expression, shoulder movement, and vocalization (audio). NNs are used both in single-cue processing and in multimodal fusion. Coarse categories and quadrants of the activation-evaluation dimensional space are used, respectively, as the heuristic information (hints) of the NNs during training, aiming at recognition of basic emotions. With the aid of hints, the NN weights can learn optimal feature groupings, and the subtlety and complexity of spontaneous affective states can be better modeled. The proposed method requires low computational effort and reaches high recognition accuracy even when the training data are insufficient. Experimental results on the Semaine naturalistic dataset demonstrate that the method is effective and promising.
Authors: 张欣, 吕坤
Affiliation: School of Software
Source: Journal of Beijing Institute of Technology (EI, CAS), 2014, No. 1, pp. 117-125 (9 pages)
Funding: Supported by the National Natural Science Foundation of China (60905006) and the Basic Research Fund of Beijing Institute of Technology (20120842006)
Keywords: affect recognition; multimodal fusion; neural network learned with hints; spontaneous affect
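
The abstract describes training the fusion NN with "hints", i.e. coarse categories and quadrants of the activation-evaluation space supplied as heuristic information during training. The sketch below is only an illustration of one common way to realize hint-based learning, namely a shared network with an auxiliary quadrant output head whose loss is added to the main emotion loss; the layer sizes, feature dimension, and loss weighting are placeholders and are not taken from the paper.

```python
# Illustrative sketch (not the authors' exact formulation): a fusion network whose
# training objective combines the main basic-emotion target with an auxiliary
# "hint" target, the quadrant of the activation-evaluation space.
import torch
import torch.nn as nn

class FusionNetWithHints(nn.Module):
    def __init__(self, feat_dim, n_emotions=6, n_quadrants=4, hidden=64):
        super().__init__()
        # Shared layer over the concatenated multimodal feature vector.
        self.shared = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh())
        self.emotion_head = nn.Linear(hidden, n_emotions)    # main task
        self.quadrant_head = nn.Linear(hidden, n_quadrants)  # hint task

    def forward(self, x):
        h = self.shared(x)
        return self.emotion_head(h), self.quadrant_head(h)

def train_step(model, optimizer, x, y_emotion, y_quadrant, hint_weight=0.5):
    """One training step: main emotion loss plus a weighted hint loss."""
    ce = nn.CrossEntropyLoss()
    emo_logits, quad_logits = model(x)
    loss = ce(emo_logits, y_emotion) + hint_weight * ce(quad_logits, y_quadrant)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Random placeholder data standing in for fused audio-visual features.
    model = FusionNetWithHints(feat_dim=120)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(32, 120)                 # batch of fused feature vectors
    y_emotion = torch.randint(0, 6, (32,))   # basic-emotion labels
    y_quadrant = torch.randint(0, 4, (32,))  # activation-evaluation quadrant hints
    print(train_step(model, opt, x, y_emotion, y_quadrant))
```

In such a setup the hint weight controls how strongly the auxiliary quadrant target shapes the shared representation; when training data are scarce, this extra supervision can act as a regularizer, which is consistent with the abstract's claim that the method copes with insufficient training data.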
