Visual Dialog is a multi-modal task involving both computer vision and dialog systems. The goal is to answer multiple questions in a conversational style, given an image as the context. Neural networks with attention modules are widely used for this task because of their effectiveness in reasoning about the relevance between the text and the image. In this work, we study how to further improve the quality of such reasoning, which remains an open challenge. Our baseline is the Recursive Visual Attention (RVA) model, which refines the vision-text attention by iteratively revisiting the dialog history. Building on top of it, we propose to improve the attention mechanism with contrastive learning. We train a Matching-Aware Attention Kernel (MAAK) by aligning the deep feature embeddings of an image and its caption, which provides better attention scores. Experiments show consistent improvements from MAAK. In addition, we study the effect of using Multimodal Compact Bilinear (MCB) pooling as a three-way feature fusion for the visual, textual, and dialog-history embeddings. We analyze the performance of both methods in the discussion section and propose further ideas to resolve current limitations.
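The abstract does not give the exact training objective for MAAK, but aligning image and caption embeddings via contrastive learning is commonly done with a symmetric InfoNCE-style loss, where matched image-caption pairs in a batch are pulled together and mismatched pairs pushed apart. The sketch below illustrates that generic recipe; the function name, batch layout, and temperature value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def contrastive_alignment_loss(img_emb, cap_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss over a batch of image/caption pairs.

    img_emb, cap_emb: (batch, dim) arrays; row i of each holds a matching pair.
    Returns a scalar: lower values mean matched pairs are more similar than
    mismatched ones under cosine similarity.
    """
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    cap = cap_emb / np.linalg.norm(cap_emb, axis=1, keepdims=True)

    # (batch, batch) similarity matrix; the diagonal holds the positive pairs
    logits = img @ cap.T / temperature
    n = logits.shape[0]

    def cross_entropy_diag(l):
        # softmax cross-entropy with the diagonal entries as the targets
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # average the image-to-caption and caption-to-image directions
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

A kernel trained this way can then score how well an image region matches a piece of dialog text, which is the role the attention scores play in the model described above.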