Abstract
Objective: Emotion recognition is an important part of intelligent human-machine interaction. Current studies commonly use speech, facial expression, and electroencephalogram (EEG) signals to recognize emotions with machine learning algorithms. However, speech conveys only the outward emotional state and can be feigned, while EEG signals are highly complex and very weak. To address these problems, this paper explores a new method of emotion recognition based on multi-source data fusion. Methods: Speech and EEG signals are fused both at the feature level and at the algorithm-design level, and SVM and multi-kernel SVM classifiers are used to classify four emotion categories. The recognition accuracy obtained from a single data source is compared with that obtained from multi-source data fusion. Results: The classification accuracy achieved by fusing the two data types with the multi-kernel SVM is higher than that achieved by the feature-fusion method, by up to 22.47%. Conclusion: Multi-source heterogeneous data improve emotion recognition performance, and multi-kernel classifiers are well suited to classifying heterogeneous data. These results promote the application of emotion recognition in natural environments.
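The abstract contrasts feature-level fusion with multi-kernel SVM classification. The sketch below illustrates both strategies under simplifying assumptions: synthetic arrays stand in for the real speech and EEG features, scikit-learn is used as the SVM implementation, and the multi-kernel combination uses fixed equal weights rather than weights tuned or learned as in the paper.

```python
# Minimal sketch of the two fusion strategies named in the abstract.
# The feature arrays and the equal kernel weights are illustrative assumptions,
# not the authors' actual features or configuration.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
n_train, n_test = 120, 40
X_speech_tr = rng.normal(size=(n_train, 64))   # hypothetical speech features
X_eeg_tr    = rng.normal(size=(n_train, 160))  # hypothetical EEG features
X_speech_te = rng.normal(size=(n_test, 64))
X_eeg_te    = rng.normal(size=(n_test, 160))
y_tr = rng.integers(0, 4, size=n_train)        # four emotion classes

# 1) Feature-level fusion: concatenate the two feature vectors, train one SVM.
svm_feat = SVC(kernel="rbf")
svm_feat.fit(np.hstack([X_speech_tr, X_eeg_tr]), y_tr)
pred_feat = svm_feat.predict(np.hstack([X_speech_te, X_eeg_te]))

# 2) Kernel-level (multi-kernel) fusion: compute one kernel per modality and
#    combine them; a fixed 0.5/0.5 weighting is assumed here for simplicity.
K_tr = 0.5 * rbf_kernel(X_speech_tr) + 0.5 * rbf_kernel(X_eeg_tr)
K_te = 0.5 * rbf_kernel(X_speech_te, X_speech_tr) + 0.5 * rbf_kernel(X_eeg_te, X_eeg_tr)
svm_mk = SVC(kernel="precomputed")
svm_mk.fit(K_tr, y_tr)
pred_mk = svm_mk.predict(K_te)
```

Computing a separate kernel per modality lets each signal keep its own similarity structure before the combination step, which matches the paper's conclusion that multi-kernel classifiers suit heterogeneous data.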
Authors
LI Lin; KAO Xi-bin; WAN Hong (China Ordnance Industry Group-Machine-Environment Key Laboratory, Health Research Institute of Ordnance Industry, Xi'an 710065, China)
Source
Chinese Journal of Ergonomics (《人类工效学》), 2021, No. 5, pp. 44-47 (4 pages)
Keywords
dialogue robot
artificial intelligence
human-machine interaction
emotion recognition
speech
EEG
machine learning
multi-source data fusion