Abstract
A multimodal fusion classifier based on neural networks (NNs) learned with hints is presented for automatic spontaneous affect recognition. Because different channels can provide complementary information, features are utilized from four behavioral cues: frontal-view facial expression, profile-view facial expression, shoulder movement, and vocalization (audio). NNs are used both for single-cue processing and for multimodal fusion. Coarse categories and quadrants in the activation-evaluation dimensional space are used, respectively, as the heuristic information (hints) for the NNs during training, with the aim of recognizing basic emotions. With the aid of hints, the NN weights can learn optimal feature groupings, and the subtlety and complexity of spontaneous affective states can be better modeled. The proposed method requires little computational effort and reaches high recognition accuracy even when training data are insufficient. Experimental results on the Semaine naturalistic dataset demonstrate that the method is effective and promising.
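The abstract's core idea — training an NN on emotion labels while supplying coarser labels (e.g. activation-evaluation quadrants) as hints — can be sketched as multi-task learning with a shared hidden layer and an auxiliary hint head. The sketch below is an assumption-laden illustration, not the paper's implementation: the feature dimensions, the emotion-to-quadrant mapping, the synthetic data, and the hint loss weight are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 4 basic-emotion classes and 4 activation-evaluation
# quadrants used as auxiliary "hint" targets. The true hint encoding and
# network sizes in the paper are not given in the abstract.
n_features, n_hidden, n_emotions, n_quadrants = 10, 16, 4, 4

# Synthetic data standing in for fused multimodal features: one cluster
# per emotion class, plus noise.
means = rng.normal(scale=3.0, size=(n_emotions, n_features))
y_emotion = rng.integers(0, n_emotions, size=200)
X = means[y_emotion] + rng.normal(size=(200, n_features))
y_hint = y_emotion % n_quadrants  # toy assumption: each emotion lies in one quadrant

def one_hot(y, k):
    out = np.zeros((len(y), k))
    out[np.arange(len(y)), y] = 1.0
    return out

Y, H = one_hot(y_emotion, n_emotions), one_hot(y_hint, n_quadrants)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Shared hidden layer with two softmax heads: main (emotion) and hint (quadrant).
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
W_main = rng.normal(scale=0.1, size=(n_hidden, n_emotions))
W_hint = rng.normal(scale=0.1, size=(n_hidden, n_quadrants))

lr, hint_weight = 0.1, 0.5  # hint loss weight is an arbitrary choice here
for _ in range(500):
    hid = np.tanh(X @ W1)
    p_main, p_hint = softmax(hid @ W_main), softmax(hid @ W_hint)
    # Cross-entropy gradients w.r.t. the two heads' logits; the hint term
    # regularizes the shared weights toward the coarse structure.
    g_main = (p_main - Y) / len(X)
    g_hint = hint_weight * (p_hint - H) / len(X)
    g_hid = (g_main @ W_main.T + g_hint @ W_hint.T) * (1.0 - hid ** 2)
    W_main -= lr * hid.T @ g_main
    W_hint -= lr * hid.T @ g_hint
    W1 -= lr * X.T @ g_hid
```

Because the hint head shares the hidden layer with the main head, its gradient shapes the learned feature groupings, which is one plausible reading of how hints help when emotion-level training data is scarce.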
Funding
Supported by the National Natural Science Foundation of China (60905006)
and the Basic Research Fund of Beijing Institute of Technology (20120842006)