
Multi-channel multi-step integration model for generative visual dialogue
Abstract: Visual dialogue research has made significant progress in multimodal information fusion and inference. However, mainstream models remain limited when answering questions that involve relatively explicit semantic attributes and spatial relationships. Few mainstream models can explicitly provide semantically rich, fine-grained descriptions of the image content before generating a response, so a necessary bridge over the semantic gap between visual feature representations and textual semantics such as the dialogue history and the current question is missing. Therefore, a visual dialogue model based on Multi-Channel and Multi-step Integration (MCMI) was proposed. The model explicitly provides a set of fine-grained semantic descriptions of the visual content; through the interaction and multi-step fusion among vision, semantics, and dialogue history, it enriches the semantic representation of the question and achieves more accurate answer decoding. On the VisDial v0.9 and VisDial v1.0 datasets, compared with the baseline Dual-channel Multi-hop Reasoning Model (DMRM), the MCMI model improved Mean Reciprocal Rank (MRR) by 1.95 and 2.12 percentage points respectively, recall rate (R@1) by 2.62 and 3.09 percentage points respectively, and the mean rank of the correct answer (Mean) by 0.88 and 0.99 respectively. On the VisDial v1.0 dataset, compared with the recent Unified Transformer Contrastive learning model (UTC), MCMI improved MRR, R@1, and Mean by 0.06 percentage points, 0.68 percentage points, and 1.47 respectively. To further evaluate the quality of the generated dialogue, two human evaluation metrics were proposed: M1, the proportion of responses passing a Turing-like test, and M2, a dialogue quality score on a five-point scale. On the VisDial v0.9 dataset, compared with the baseline DMRM, the MCMI model improved M1 by 9.00 percentage points and M2 by 0.70.
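The abstract does not give the architectural details, but the core idea it describes, refining the question representation through repeated interaction with three channels (visual features, fine-grained semantic descriptions, and dialogue history), can be sketched. Below is a minimal PyTorch sketch; the use of cross-attention, the dimensions, the step count, and all module names are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class MultiStepFusion(nn.Module):
    """Hypothetical sketch of the multi-channel multi-step idea: the question
    vector is refined over several steps by attending, in turn, to visual
    features, fine-grained semantic descriptions, and dialogue history.
    All names, shapes, and hyperparameters are illustrative assumptions."""

    def __init__(self, dim=512, num_steps=3, num_heads=8):
        super().__init__()
        self.num_steps = num_steps
        # One cross-attention block per channel: vision, semantics, history.
        self.vis_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.sem_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.his_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, question, vision, semantics, history):
        # question: (B, 1, D); vision/semantics/history: (B, N, D) each.
        q = question
        for _ in range(self.num_steps):
            v, _ = self.vis_attn(q, vision, vision)        # attend to image regions
            s, _ = self.sem_attn(q, semantics, semantics)  # attend to fine-grained descriptions
            h, _ = self.his_attn(q, history, history)      # attend to dialogue history
            # Multi-channel fusion enriches the question before the next step.
            q = q + torch.tanh(self.fuse(torch.cat([v, s, h], dim=-1)))
        return q  # enriched question representation, fed to the answer decoder

# Example usage with random features (batch of 2, 512-d embeddings).
model = MultiStepFusion(dim=512)
q = torch.randn(2, 1, 512)
out = model(q, torch.randn(2, 36, 512), torch.randn(2, 10, 512), torch.randn(2, 8, 512))
```

The residual update after each step lets later steps build on earlier channel interactions, which is one plausible reading of "multi-step integration"; the paper itself may fuse the channels differently.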
Authors: CHEN Sihang, JIANG Aiwen, CUI Zhaoyang, WANG Mingwen (School of Computer and Information Engineering, Jiangxi Normal University, Nanchang, Jiangxi 330022, China)
Source: Journal of Computer Applications (计算机应用), CSCD / Peking University core journal, 2024, Issue 1, pp. 39-46 (8 pages)
Funding: National Natural Science Foundation of China (61966018).
Keywords: visual dialogue; generative task; visual semantic description; multi-step integration; multi-channel fusion
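The retrieval metrics reported in the abstract follow the standard VisDial evaluation protocol, in which each answer is ranked among 100 candidate answers and the 1-based rank of the ground-truth answer is recorded per dialogue round. A minimal sketch of how MRR, R@1, and Mean are computed from those ranks (the function name and example ranks are illustrative):

```python
import numpy as np

def visdial_metrics(gt_ranks):
    """Compute the three ranking metrics from the 1-based rank of the
    ground-truth answer in each round's list of 100 candidates."""
    ranks = np.asarray(gt_ranks, dtype=np.float64)
    return {
        "MRR": float(np.mean(1.0 / ranks)),  # mean reciprocal rank (higher is better)
        "R@1": float(np.mean(ranks <= 1)),   # fraction of rounds where the answer ranked first
        "Mean": float(np.mean(ranks)),       # mean rank of the correct answer (lower is better)
    }

# Example: ground-truth ranks for five dialogue rounds.
print(visdial_metrics([1, 3, 1, 20, 2]))
```

Note that MRR and R@1 improve as they increase, while Mean improves as it decreases, which is why the abstract reports the Mean gains as absolute rank reductions rather than percentage points.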