Abstract
Existing deep-learning-based image captioning algorithms with attention mechanisms tend to focus excessively on the major objects in the input image, which causes the generated captions to miss details and repeat words. In this paper, a visual self-attention mechanism is adopted to prevent the model from repeatedly attending to the same content at different time steps. The system first applies the object detection algorithm Faster R-CNN to obtain the bounding boxes of entities, then extracts feature vectors from each detected region and from the whole image, and processes these feature vectors with the visual self-attention mechanism to obtain the image feature representation. Finally, the image features are fed into a language model composed of two stacked LSTM layers, which outputs a natural-language description of the image. The validity of the designed system is verified on Microsoft COCO, the largest dataset in the field of image captioning. Experimental results show that the image captioning system based on the visual self-attention mechanism can effectively capture image details and generate fluent captions.
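The core step the abstract describes — applying self-attention over the per-region feature vectors before they reach the language model — can be sketched as follows. This is a minimal illustration, not the authors' code: the feature dimensions, the plain scaled dot-product formulation, and all function names are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def visual_self_attention(regions):
    """Scaled dot-product self-attention over region features.

    regions: (k, d) array, one row per detected region
             (e.g. a Faster R-CNN bounding-box feature).
    Returns a (k, d) array in which each region's feature is a
    weighted mix of all regions, so the decoder sees context-aware
    features instead of isolated object crops.
    """
    k, d = regions.shape
    scores = regions @ regions.T / np.sqrt(d)  # (k, k) pairwise similarities
    weights = softmax(scores, axis=-1)         # each row sums to 1
    return weights @ regions                   # re-weighted region features

# Toy example: 5 detected regions with 8-dimensional features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 8))
attended = visual_self_attention(feats)
print(attended.shape)
```

In a full system, the attended features would then be pooled or attended over again at each decoding step of the two-layer LSTM; here only the self-attention transform itself is shown.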
Author
Hu Jinzhao (School of Electronic Science and Applied Physics, Hefei University of Technology, Hefei Anhui 230009, China)
Source
Information & Computer (《信息与电脑》), 2020, No. 17, pp. 77-79 (3 pages)
Keywords
deep learning
attention mechanism
image captioning
visual self-attention mechanism
object detection
language model