The image emotion classification task aims to automatically predict the emotional response that people have when they see an image. Studies have shown that certain local regions are more likely to evoke an emotional response than the image as a whole. However, existing methods perform poorly in predicting the details of emotional regions and are prone to overfitting during training because of the small size of available datasets. Therefore, this study proposes an image emotion classification network based on multilayer attentional interaction and adaptive feature aggregation. To perform more accurate emotional region prediction, this study designs a multilayer attentional interaction module. The module calculates spatial attention maps for higher-layer semantic features and fusion features through a multilayer shuffle attention module. Through layer-by-layer up-sampling and gating operations, the higher-layer features guide the learning of the lower-layer features, eventually achieving sentiment region prediction at the optimal scale. To recover important information lost during layer-by-layer fusion, this study not only adds intra-layer fusion to the multilayer attention interaction module but also designs an adaptive feature aggregation module. The module uses global average pooling to compress spatial information and concatenates the channel information from all layers. The module then adaptively generates a set of aggregation weights through two fully connected layers to augment the original features of each layer. Eventually, the semantics and details of the different layers are aggregated through gating operations and residual connections to compensate for the lost information. To reduce overfitting on small datasets, the network is pre-trained on the FI dataset, and the weights are then fine-tuned on the small dataset. Experimental results on the FI, Twitter I, and EmotionROI (Region of Interest) datasets show that the proposed network outperforms existing image emotion classification methods, with accuracies of 90.27%, 84.66%, and 84.96%, respectively.
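The adaptive feature aggregation step described in the abstract (global average pooling per layer, concatenation of the channel descriptors, and two fully connected layers that emit one aggregation weight per layer) can be sketched in plain Python. This is a minimal illustrative sketch, not the paper's implementation: the tiny feature maps, the weight matrices, and all function names below are assumptions.

```python
import math

def global_avg_pool(feature_map):
    # Average each channel of a C x H x W feature map into one descriptor value.
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature_map]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def adaptive_aggregation_weights(descriptors, w1, w2):
    # Concatenate the channel descriptors of all layers, then pass them through
    # two fully connected layers (ReLU, then sigmoid) to obtain one aggregation
    # weight per layer.
    x = [d for layer in descriptors for d in layer]
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]

# Two layers, each with two channels on a 2x2 spatial grid (toy values).
layer_a = [[[1.0, 3.0], [5.0, 7.0]], [[2.0, 2.0], [2.0, 2.0]]]
layer_b = [[[0.0, 4.0], [4.0, 0.0]], [[1.0, 1.0], [1.0, 1.0]]]
descriptors = [global_avg_pool(layer_a), global_avg_pool(layer_b)]

# Hypothetical "learned" weights: 2 hidden units over 4 inputs, 2 outputs.
w1 = [[0.25, 0.0, 0.0, 0.0], [0.0, 0.5, 0.0, 0.0]]
w2 = [[1.0, 0.0], [0.0, 1.0]]
weights = adaptive_aggregation_weights(descriptors, w1, w2)
```

Each resulting weight lies in (0, 1) because of the sigmoid, so it can scale a layer's original features before the gating and residual aggregation the abstract describes.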
The symptoms of autism spectrum disorder (ASD) have been hypothesized to be caused by changes in brain connectivity. From the clinical perspective, the "disconnectivity" hypothesis has been used to explain characteristic impairments in "socio-emotional" function. Therefore, in this study we compared facial emotion recognition (FER) performance and the integrity of social-emotional-related white-matter tracts between children and adolescents with high-functioning ASD (HFA) and their typically developing (TD) counterparts. The correlation between the two factors was explored to determine whether impairment of the white-matter tracts is the neural basis of social-emotional disorders. Compared with the TD group, FER was significantly impaired and the fractional anisotropy value of the right cingulate fasciculus was increased in the HFA group (P < 0.01). In conclusion, the FER function of children and adolescents with HFA was impaired, and the microstructure of the cingulate fasciculus showed abnormalities.
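The fractional anisotropy (FA) value compared between the HFA and TD groups above is a standard scalar computed from the three eigenvalues of the diffusion tensor. The following is a minimal sketch of the textbook formula; the function name and toy eigenvalues are illustrative, not taken from the study:

```python
import math

def fractional_anisotropy(l1, l2, l3):
    # Standard DTI formula: FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||,
    # where l1, l2, l3 are the diffusion-tensor eigenvalues.
    mean = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den if den > 0.0 else 0.0

# Isotropic diffusion (equal eigenvalues) gives FA = 0;
# diffusion restricted to a single axis gives FA close to 1.
fa_iso = fractional_anisotropy(1.0, 1.0, 1.0)
fa_axis = fractional_anisotropy(1.0, 0.0, 0.0)
```

FA ranges from 0 (perfectly isotropic diffusion) to 1 (diffusion along one axis only), which is why it is used as a proxy for white-matter tract integrity.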
Funding: This study was supported, in part, by the National Natural Science Foundation of China under Grant 62272236, and in part by the Natural Science Foundation of Jiangsu Province under Grants BK20201136 and BK20191401.
Funding: Supported by the National Key Research and Development Program of China (2016YFC1306200), the National Natural Science Foundation of China (91132750), Major Projects of the National Social Science Foundation of China (14ZDB161), and the Key Research and Development Program of Jiangsu Province, China (BE2016616).