Abstract
Poor text-reading ability and inadequate visual reasoning are the main reasons existing visual question answering (VQA) models perform poorly. To address these problems, this paper designs a multi-modal reasoning graph neural network (MRGNN) model. The model exploits multiple forms of information in an image to help understand scene-text content: it preprocesses each scene-text image into a visual object graph and a text graph, and filters out redundant information with a question self-attention module. An attention-based aggregator then refines node features across the two subgraphs, fusing information from the different modalities; the updated nodes carry cross-modal context that supplies better features to the answering module. Experiments on the ST-VQA and TextVQA datasets verify the model's effectiveness, and the results show that MRGNN improves markedly over other models for this task.
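The paper does not publish its implementation, so the sketch below is only a minimal PyTorch illustration of the two mechanisms the abstract names: a question self-attention module that down-weights irrelevant question words, and an attention-based aggregator that updates the nodes of one subgraph (e.g., the text graph of OCR tokens) with context attended from the other (the visual object graph). All module names, dimensions, and the additive way the question is injected are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch of MRGNN's two attention mechanisms (names and wiring
# are our own assumptions; the paper does not specify its implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class QuestionSelfAttention(nn.Module):
    """Scores question words against a learned vector and pools them into one
    query embedding, so redundant words contribute less (the 'filtering' step)."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, q_words: torch.Tensor) -> torch.Tensor:
        # q_words: (batch, num_words, dim)
        weights = F.softmax(self.score(q_words), dim=1)  # (batch, num_words, 1)
        return (weights * q_words).sum(dim=1)            # (batch, dim)


class CrossGraphAggregator(nn.Module):
    """Question-guided attention from the nodes of one subgraph over the nodes
    of the other; the attended context is concatenated back so each node
    carries cross-modal information."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj_q = nn.Linear(dim, dim)
        self.proj_k = nn.Linear(dim, dim)
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, nodes_a, nodes_b, question):
        # nodes_a: (batch, Na, dim); nodes_b: (batch, Nb, dim); question: (batch, dim)
        query = self.proj_q(nodes_a + question.unsqueeze(1))   # (batch, Na, dim)
        keys = self.proj_k(nodes_b)                            # (batch, Nb, dim)
        attn = F.softmax(query @ keys.transpose(1, 2) / keys.size(-1) ** 0.5, dim=-1)
        context = attn @ nodes_b                               # (batch, Na, dim)
        return torch.relu(self.update(torch.cat([nodes_a, context], dim=-1)))


if __name__ == "__main__":
    dim = 64
    question = QuestionSelfAttention(dim)(torch.randn(2, 12, dim))  # pooled question
    ocr_nodes = torch.randn(2, 10, dim)   # text-graph nodes (OCR tokens)
    obj_nodes = torch.randn(2, 36, dim)   # visual-object-graph nodes
    updated = CrossGraphAggregator(dim)(ocr_nodes, obj_nodes, question)
    print(updated.shape)  # torch.Size([2, 10, 64]): OCR nodes with visual context
```

In the full model, an update of this kind would presumably be applied in both directions (objects attending to OCR tokens and vice versa) before the answering module reads the fused node features.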
Authors
Zhang Haitao; Guo Xinyu (School of Software, Liaoning Technical University, Huludao, Liaoning 125105, China)
Source
Application Research of Computers (《计算机应用研究》)
CSCD
Peking University Core Journal (北大核心)
2022, Issue 1, pp. 280-284 and 302 (6 pages)
Funding
General Program of the Natural Science Foundation of Liaoning Province
Equipment Pre-Research Fund of the PLA General Armament Department
Keywords
visual question answering
graph neural network
multi-modal reasoning
question self-attention