A personalized emotion space is proposed to bridge the "affective gap" in video affective content understanding. To unify the discrete and dimensional emotion models, the fuzzy C-means (FCM) clustering algorithm is adopted to partition the emotion space, and a Gaussian mixture model (GMM) is used to determine the membership functions of the typical affective subspaces. At every step of modeling the space, the inputs rely entirely on the affective experiences recorded by the audiences. The advantages of the improved V-A (Valence-Arousal) emotion model are its personalization, its ability to define typical affective-state areas in the V-A emotion space, and the convenience of explicitly expressing the intensity of each affective state. The experimental results validate the model and show that it can be used as a personalized emotion space for video affective content representation.
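To illustrate the clustering step described above, the following is a minimal sketch of fuzzy C-means applied to 2-D valence-arousal ratings. The ratings, the number of clusters, and the fuzzifier value m are hypothetical placeholders, not the paper's actual data or settings; the membership matrix it returns plays the role of the soft assignment that the paper's GMM membership functions would refine.

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Partition 2-D valence-arousal ratings into fuzzy clusters.

    Returns (centers, memberships), where memberships[i, j] is the
    degree (in [0, 1], rows summing to 1) to which point i belongs
    to cluster j.
    """
    rng = np.random.default_rng(seed)
    n = len(points)
    # Random initial membership matrix with rows summing to 1.
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        um = u ** m
        # Cluster centers: membership-weighted means of the points.
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-10)  # guard against division by zero
        # Standard FCM membership update: u_ij ∝ d_ij^(-2/(m-1)).
        inv = dist ** (-2.0 / (m - 1))
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# Toy valence-arousal ratings in [-1, 1]^2 (hypothetical audience data):
# one happy/excited group and one sad/calm group.
pts = np.array([[0.8, 0.7], [0.7, 0.8], [-0.8, -0.6],
                [-0.7, -0.7], [0.75, 0.65], [-0.75, -0.65]])
centers, u = fuzzy_c_means(pts, n_clusters=2)
```

Because FCM yields graded rather than crisp memberships, a rating lying between two typical affective subspaces keeps a nonzero degree of belonging to both, which is what lets the model express the intensity of each affective state explicitly.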
Funding: Supported by the National Natural Science Foundation of China (60703049), the "Chenguang" Foundation for Young Scientists (200850731353), and the National Postdoctoral Foundation of China (20060400847).