
Dense Video Description Method Based on Multi-modal Fusion in Transformer Network
Abstract: Most current dense video description models rely on two-stage methods, which are inefficient, ignore audio and semantic information, and produce incomplete descriptions. To address these problems, a dense video description method based on multi-modal and semantic information fusion in a Transformer network is proposed. An adaptive R(2+1)D network extracts visual features, a purpose-built semantic detector generates semantic information, and audio features are added as a complement. A multi-scale deformable attention module is established, and a parallel prediction head is applied to accelerate model convergence and improve accuracy. Experimental results show that the model performs well on two benchmark datasets, reaching 2.17 on the BLEU4 metric.
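The paper's implementation is not reproduced on this page, but the fusion idea sketched in the abstract can be illustrated with a minimal PyTorch example: features from each modality are projected into a shared embedding space and fused with attention. All dimensions and module names below are illustrative assumptions, and standard multi-head attention stands in for the paper's multi-scale deformable attention module; this is a sketch of the general technique, not the authors' code.

import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    """Hypothetical sketch: fuse visual, audio, and semantic features."""

    def __init__(self, d_visual=512, d_audio=128, d_semantic=300,
                 d_model=512, n_heads=8):
        super().__init__()
        # Project each modality into the shared d_model space.
        self.proj_visual = nn.Linear(d_visual, d_model)
        self.proj_audio = nn.Linear(d_audio, d_model)
        self.proj_semantic = nn.Linear(d_semantic, d_model)
        # Plain multi-head self-attention as a stand-in for the
        # paper's multi-scale deformable attention module.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, visual, audio, semantic):
        # visual:   (B, Tv, d_visual)  e.g. R(2+1)D clip features
        # audio:    (B, Ta, d_audio)   supplementary audio features
        # semantic: (B, Ts, d_semantic) semantic-detector outputs
        tokens = torch.cat([
            self.proj_visual(visual),
            self.proj_audio(audio),
            self.proj_semantic(semantic),
        ], dim=1)                            # (B, Tv+Ta+Ts, d_model)
        fused, _ = self.attn(tokens, tokens, tokens)
        return self.norm(tokens + fused)     # residual + layer norm

if __name__ == "__main__":
    model = MultiModalFusion()
    v = torch.randn(2, 32, 512)   # visual tokens
    a = torch.randn(2, 16, 128)   # audio tokens
    s = torch.randn(2, 8, 300)    # semantic tokens
    print(model(v, a, s).shape)   # torch.Size([2, 56, 512])

In the paper itself, the fused sequence would then feed the Transformer decoder and the parallel prediction head that localizes events and generates captions in one stage.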
Authors: Li Xiang (李想); Sang Haifeng (桑海峰) — School of Information Science and Engineering, Shenyang University of Technology, Shenyang 110870, China
Source: Journal of System Simulation (系统仿真学报), 2024, No. 5, pp. 1061-1071 (11 pages); indexed by CAS, CSCD, and the Peking University Core Journals list
Funding: National Natural Science Foundation of China (62173078); Natural Science Foundation of Liaoning Province (2022-MS-268)
Keywords: dense event description; Transformer network; semantic information; multi-modal fusion; deformable attention