
Image Emotion Classification Network Based on Multilayer Attentional Interaction, Adaptive Feature Aggregation

Abstract: The image emotion classification task aims to use a model to automatically predict the emotional response people have when they see an image. Studies have shown that certain local regions are more likely to inspire an emotional response than the whole image. However, existing methods perform poorly in predicting the details of emotional regions and are prone to overfitting during training due to the small size of the datasets. Therefore, this study proposes an image emotion classification network based on multilayer attentional interaction and adaptive feature aggregation. To perform more accurate emotional region prediction, this study designs a multilayer attentional interaction module. The module calculates spatial attention maps for higher-layer semantic features and fusion features through a multilayer shuffle attention module. Through layer-by-layer up-sampling and gating operations, the higher-layer features guide the lower-layer features to learn, eventually achieving sentiment region prediction at the optimal scale. To complement the important information lost by layer-by-layer fusion, this study not only adds an intra-layer fusion to the multilayer attentional interaction module but also designs an adaptive feature aggregation module. The module uses global average pooling to compress spatial information and connects channel information from all layers. Then, the module adaptively generates a set of aggregation weights through two fully connected layers to augment the original features of each layer. Eventually, the semantics and details of the different layers are aggregated through gating operations and residual connections to complement the lost information. To reduce overfitting on small datasets, the network is pre-trained on the FI dataset, and further weight fine-tuning is performed on the small datasets. Experimental results on the FI, Twitter I, and Emotion ROI (Region of Interest) datasets show that the proposed network exceeds existing image emotion classification methods, with accuracies of 90.27%, 84.66%, and 84.96%, respectively.
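The top-down guidance described above (up-sample the deeper feature, derive a spatial attention map, gate the shallower feature, then merge) can be sketched roughly as follows. This is a minimal numpy illustration, not the paper's implementation: the function names, the channel-mean-plus-sigmoid attention, nearest-neighbour up-sampling, and the assumption that all layers share the same channel count are simplifications of my own; the paper's actual module uses shuffle attention.

```python
import numpy as np

def upsample2x(feat):
    # Nearest-neighbour up-sampling: (C, H, W) -> (C, 2H, 2W).
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def spatial_attention(feat):
    # Squash channels into one spatial map, then sigmoid -> values in (0, 1).
    m = feat.mean(axis=0, keepdims=True)          # (1, H, W)
    return 1.0 / (1.0 + np.exp(-m))

def top_down_gated_fusion(features):
    """features: deepest (smallest) layer first, each (C, H_i, W_i),
    with spatial size doubling at every step. The deeper fused feature
    is up-sampled, its spatial attention map gates the shallower layer,
    and the gated layer and the up-sampled feature are summed."""
    fused = features[0]
    for lower in features[1:]:
        up = upsample2x(fused)                    # match lower's resolution
        attn = spatial_attention(up)              # guidance from above
        fused = lower * attn + up                 # gate, then merge
    return fused
```

With three layers of shapes (8, 4, 4), (8, 8, 8), and (8, 16, 16), the result has the finest resolution, (8, 16, 16), matching the abstract's idea of predicting emotional regions at the most detailed scale.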
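The adaptive feature aggregation step (global average pooling, concatenated channel descriptors, two fully connected layers producing per-layer weights, residual augmentation) can likewise be sketched. Everything below is a hedged toy version: the hidden width, the ReLU and sigmoid choices, and the random weights (which would be learned in the real network) are assumptions, not details from the paper.

```python
import numpy as np

def adaptive_feature_aggregation(features, rng=None):
    """Sketch of the adaptive aggregation idea: global average pooling
    compresses each layer's spatial information into a channel
    descriptor; the concatenated descriptors pass through two fully
    connected layers (randomly initialised here, learned in practice)
    to produce one aggregation weight per layer; each layer is then
    augmented through a gated residual connection."""
    rng = rng or np.random.default_rng(0)
    descs = [f.mean(axis=(1, 2)) for f in features]        # GAP per layer
    z = np.concatenate(descs)                              # joint descriptor
    hidden = 16
    W1 = rng.standard_normal((hidden, z.size)) * 0.1
    W2 = rng.standard_normal((len(features), hidden)) * 0.1
    h = np.maximum(0.0, W1 @ z)                            # FC1 + ReLU
    w = 1.0 / (1.0 + np.exp(-(W2 @ h)))                    # FC2 + sigmoid gate
    return [f + wi * f for f, wi in zip(features, w)]      # residual augment
```

Because the weights gate a residual copy, each output layer keeps its original shape and content while being scaled up where the aggregated channel statistics deem it important.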
Source: Computers, Materials & Continua (SCIE, EI), 2023, No. 5, pp. 4273-4291 (19 pages).
Funding: This study was supported in part by the National Natural Science Foundation of China under Grant 62272236, and in part by the Natural Science Foundation of Jiangsu Province under Grants BK20201136 and BK20191401.