
Continuous Emotion Recognition Based on Facial Expressions and EEG (cited by 4)
Abstract: Continuous emotion recognition based on multimodal physiological data plays an important role in many fields. However, owing to the scarcity of subject data and the subjectivity of emotion, training emotion recognition models still requires more physiological data, and performance depends heavily on homologous subject data. In this study, we propose multiple continuous emotion recognition methods based on facial expressions and EEG. For the facial image modality, we propose a multi-task convolutional neural network trained by transfer learning to avoid the over-fitting induced by small facial image datasets. For the EEG modality, we propose two emotion recognition models. The first is a subject-dependent model based on a support vector machine, which achieves high accuracy when the test and training data are homogeneous. The second is a cross-subject model designed to reduce the impact of the individual variation and non-stationarity of EEG; it is based on a long short-term memory (LSTM) network and performs stably even when the test and training data are heterogeneous. To improve recognition accuracy on homogeneous data, we propose two methods for decision-level fusion of multimodal emotion predictions: weight enumeration and adaptive boosting. Experiments show that when the test and training data are homogeneous, the bimodal emotion recognition model reaches average accuracies of 74.23% and 80.30% in the arousal and valence dimensions, respectively, in the best case; when the test and training data are heterogeneous, the LSTM cross-subject model reaches accuracies of 58.65% and 51.70% in the arousal and valence dimensions, respectively.
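The weight-enumeration fusion described in the abstract can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' code: it assumes each modality outputs a per-sample probability of the "high" class for one emotion dimension, labels are binarized into high/low, and the fused score is a convex combination w*p_face + (1-w)*p_eeg with w swept over a grid and chosen by validation accuracy.

```python
import numpy as np

def fuse_by_weight_enumeration(p_face, p_eeg, labels, step=0.01):
    """Enumerate fusion weights w in [0, 1] on a grid of the given step.

    The fused score is w * p_face + (1 - w) * p_eeg, thresholded at 0.5
    to give a binary high/low decision. Returns the weight with the
    highest accuracy on the provided validation labels, and that accuracy.
    """
    best_w, best_acc = 0.0, -1.0
    for w in np.arange(0.0, 1.0 + 1e-9, step):
        pred = (w * p_face + (1 - w) * p_eeg) >= 0.5  # fused binary decision
        acc = float(np.mean(pred == labels))
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy usage with hypothetical per-sample probabilities from the two modalities.
p_face = np.array([0.9, 0.2, 0.7, 0.4])
p_eeg = np.array([0.6, 0.3, 0.8, 0.1])
labels = np.array([1, 0, 1, 0])
w, acc = fuse_by_weight_enumeration(p_face, p_eeg, labels)
```

The grid search is cheap because only one scalar is enumerated; the adaptive-boosting variant mentioned in the abstract would instead learn the combination from weak per-modality decisions.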
Authors: 李瑞新 蔡兆信 王冰冰 潘家辉 (LI Rui-Xin; CAI Zhao-Xin; WANG Bing-Bing; PAN Jia-Hui, School of Software, South China Normal University, Foshan 528225, China)
Source: 《计算机系统应用》 (Computer Systems & Applications), 2021, No. 2, pp. 1-11.
Funding: Guangzhou Science and Technology Plan, Key Field R&D Program (202007030005); Guangdong Natural Science Foundation, General Program (2019A1515011375); Special Fund for Science and Technology Innovation Cultivation of Guangdong College Students ("Climbing Plan" Special Fund) (pdjh2020a0145).
Keywords: continuous emotion recognition; transfer learning; multi-task convolutional neural network; cross-subject model; Long Short-Term Memory (LSTM) network; decision-level fusion
