Abstract
As a key link in human-computer interaction, emotion recognition enables robots to correctly perceive user emotions and provide dynamic, adjustable services according to the emotional needs of different users, which is key to improving the cognitive level of robot services. Emotion recognition based on facial expression and electrocardiogram has numerous industrial applications. First, a three-dimensional convolutional neural network (3D CNN) deep learning architecture is used to extract spatial and temporal features from facial expression video data and electrocardiogram (ECG) data, and emotion classification is carried out. Then the two modalities are fused at the data level and the decision level, respectively, and the corresponding emotion recognition results are given. Finally, the single-modality and multi-modality emotion recognition results are compared and analyzed. The comparative analysis of the single-modality and multi-modality experimental results under the two fusion methods shows that the accuracy of multi-modal emotion recognition is greatly improved over that of single-modal emotion recognition, and that decision-level fusion is easier to implement and more effective than data-level fusion.
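To make the described pipeline concrete, the following is a minimal sketch (not the authors' implementation) of a small 3D CNN that extracts spatio-temporal features from a clip of input frames, together with decision-level fusion that averages the class probabilities predicted by the facial-expression and ECG branches. The layer sizes, the number of emotion classes, and the treatment of ECG data as a 1-channel frame sequence are illustrative assumptions.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 7  # assumed number of emotion categories


class Simple3DCNN(nn.Module):
    """3D convolutions over an input of shape (channels, frames, height, width)."""

    def __init__(self, in_channels: int, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),   # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)      # unnormalized class scores (logits)


def decision_level_fusion(face_logits: torch.Tensor,
                          ecg_logits: torch.Tensor) -> torch.Tensor:
    """Fuse the two modalities by averaging their per-class probabilities."""
    probs = (torch.softmax(face_logits, dim=1) +
             torch.softmax(ecg_logits, dim=1)) / 2
    return probs.argmax(dim=1)         # fused emotion prediction


if __name__ == "__main__":
    face_net = Simple3DCNN(in_channels=3)       # RGB facial-expression clips
    ecg_net = Simple3DCNN(in_channels=1)        # ECG rendered as 1-channel frames (assumption)
    face_clip = torch.randn(2, 3, 16, 64, 64)   # (batch, C, T, H, W)
    ecg_clip = torch.randn(2, 1, 16, 64, 64)
    pred = decision_level_fusion(face_net(face_clip), ecg_net(ecg_clip))
    print(pred)
```

Decision-level fusion of this kind only requires combining each branch's output probabilities, which is one reason it is simpler to operate than data-level fusion, where the raw or feature-level inputs of both modalities must first be aligned and concatenated.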
Source
《包装工程》
CAS
Peking University Core Journals
2022, No. 4, pp. 71-79 (9 pages)
Packaging Engineering
Funding
Supported by the Open Funding Project of the National Key Laboratory of Human Factors Engineering (Grant No. 6142222190309).