1 article found
Coreference resolution helps visual dialogs to focus
Authors: Tianwei Yue, Wenping Wang, Chen Liang, Dachi Chen, Congrui Hetang, Xuewei Wang. High-Confidence Computing (EI indexed), 2024, Issue 2, pp. 129-135 (7 pages)
Abstract: Visual Dialog is a multi-modal task involving both computer vision and dialog systems. The goal is to answer multiple questions in conversation style, given an image as the context. Neural networks with attention modules are widely used for this task, because of their effectiveness in reasoning the relevance between the texts and images. In this work, we study how to further improve the quality of such reasoning, which is an open challenge. Our baseline is the Recursive Visual Attention (RVA) model, which refines the vision-text attention by iteratively visiting the dialog history. Building on top of that, we propose to improve the attention mechanism with contrastive learning. We train a Matching-Aware Attention Kernel (MAAK) by aligning the deep feature embeddings of an image and its caption, to provide better attention scores. Experiments show consistent improvements from MAAK. In addition, we study the effect of using Multimodal Compact Bilinear (MCB) pooling as a three-way feature fusion for the visual, textual and dialog history embeddings. We analyze the performance of both methods in the discussion section, and propose further ideas to resolve current limitations.
Keywords: Multi-modal machine learning; Visual dialog; Co-reference resolution
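
The abstract describes MAAK as aligning the deep feature embeddings of an image and its caption with contrastive learning so that matched image-caption pairs score higher than mismatched ones. Below is a minimal illustrative sketch of such an image-caption contrastive alignment objective in PyTorch; it is not the authors' code, and the function name, tensor shapes, and temperature value are assumptions made only for illustration.

import torch
import torch.nn.functional as F

def matching_aware_contrastive_loss(img_emb, cap_emb, temperature=0.07):
    # Hypothetical helper: img_emb and cap_emb are (batch, dim) deep features
    # of images and their corresponding captions.
    img_emb = F.normalize(img_emb, dim=-1)  # cosine-normalize image features
    cap_emb = F.normalize(cap_emb, dim=-1)  # cosine-normalize caption features
    # Pairwise similarity matrix; the diagonal holds the matched pairs.
    logits = img_emb @ cap_emb.t() / temperature
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    loss_i2c = F.cross_entropy(logits, targets)      # image-to-caption direction
    loss_c2i = F.cross_entropy(logits.t(), targets)  # caption-to-image direction
    return 0.5 * (loss_i2c + loss_c2i)

In the setting the abstract outlines, similarity scores from an embedding space aligned this way would be used to sharpen the vision-text attention; how exactly this plugs into the RVA baseline and the three-way MCB fusion is detailed in the paper itself.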