Abstract: Due to the lack of large-scale emotion databases, it is hard to obtain improvements in multimodal emotion recognition with deep neural networks comparable to those deep learning has achieved in other areas. We use transfer learning to improve performance with models pretrained on large-scale data. Audio is encoded using deep speech recognition networks trained on 500 hours of speech, and video is encoded using convolutional neural networks trained on over 110,000 images. The extracted audio and visual features are fed into Long Short-Term Memory networks to train a model for each modality. Logistic regression and an ensemble method are applied for decision-level fusion. The experimental results indicate that 1) audio features extracted from deep speech recognition networks achieve better performance than handcrafted audio features; 2) visual emotion recognition outperforms audio emotion recognition; 3) the ensemble method outperforms logistic regression, and prior knowledge from the micro-F1 value further improves performance and robustness, achieving accuracies of 67.00% for "happy", 54.90% for "angry", and 51.69% for "sad".
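The decision-level fusion described above can be sketched minimally as follows. This is a hedged illustration, not the paper's implementation: the paper learns the fusion with logistic regression and an ensemble over per-modality predictions, whereas this sketch uses a fixed weighted average of each modality's class probabilities; the function name, weights, and example probabilities are all hypothetical. The visual modality is given the larger weight here, reflecting the reported finding that visual recognition outperforms audio.

```python
import numpy as np

def late_fuse(audio_probs, visual_probs, w_audio=0.4, w_visual=0.6):
    """Decision-level (late) fusion: weighted average of per-modality
    class probabilities, renormalized to sum to 1.

    Illustrative weights only -- the paper instead learns the fusion
    with logistic regression and an ensemble method."""
    fused = w_audio * np.asarray(audio_probs) + w_visual * np.asarray(visual_probs)
    return fused / fused.sum(axis=-1, keepdims=True)

# Hypothetical per-utterance probabilities over (happy, angry, sad)
audio_probs = [0.5, 0.3, 0.2]   # audio-LSTM output (made-up numbers)
visual_probs = [0.2, 0.6, 0.2]  # visual-LSTM output (made-up numbers)

fused = late_fuse(audio_probs, visual_probs)
predicted_class = int(fused.argmax())  # → 1, i.e. "angry" in this toy example
```

In a learned variant, the concatenated per-modality probabilities would instead be the input features to a logistic regression classifier trained on held-out data, which is closer to what the abstract describes.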