Abstract
Automatic image captioning, a research topic at the intersection of natural language processing and computer vision, requires a computer to understand the semantic content of an image and express it in natural language. To address the generally low quality of current Chinese image captions, this study first uses FastText to generate word vectors and a convolutional neural network to extract global image features; it then encodes sentence-image pairs 〈S, I〉 and fuses them into a multimodal feature matrix; finally, a multi-layer long short-term memory (LSTM) network decodes the feature matrix, and the decoded result is obtained by computing cosine similarity. Comparative experiments show that the proposed model outperforms other models on the BiLingual Evaluation Understudy (BLEU) metric, and the generated Chinese captions accurately summarize the semantic information of the images.
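The cosine-similarity decoding step described above can be sketched as follows: the decoder's output vector is matched against the embedding of every vocabulary word, and the most similar word is emitted. This is a minimal illustrative sketch, not the paper's implementation; the function names, the toy 3-dimensional "embeddings", and the vocabulary here are all assumptions for demonstration.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two vectors: dot(a, b) / (|a| * |b|).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def decode_to_word(output_vec, vocab_vectors):
    # Pick the vocabulary word whose (FastText-style) embedding is most
    # similar to the decoder's output vector. `vocab_vectors` maps
    # word -> embedding; this brute-force scan is for illustration only.
    best_word, best_score = None, -1.0
    for word, vec in vocab_vectors.items():
        score = cosine_similarity(output_vec, vec)
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# Toy example with hypothetical 3-dimensional embeddings.
vocab = {
    "dog": np.array([1.0, 0.1, 0.0]),
    "cat": np.array([0.0, 1.0, 0.1]),
}
print(decode_to_word(np.array([0.9, 0.2, 0.0]), vocab))  # → dog
```

In a real system the vocabulary embeddings would come from the trained FastText model, and the nearest-neighbor search would typically be vectorized as a single matrix product over normalized embeddings rather than a Python loop.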
Author
CHEN Xing (College of Computer and Information, Hohai University, Nanjing 211100, China)
Source
《计算机系统应用》 (Computer Systems & Applications), 2020, No. 9, pp. 191-197 (7 pages)