Journal Articles
2 articles found
Cross-Modal Entity Resolution for Image and Text Integrating Global and Fine-Grained Joint Attention Mechanism
1
Authors: 曾志贤, 曹建军, 翁年凤, 袁震, 余旭 《Journal of Shanghai Jiaotong University (Science)》 EI, 2023, No. 6, pp. 728-737 (10 pages)
To address the problem that existing cross-modal entity resolution methods tend to ignore the high-level semantic correlations between cross-modal data, we propose a novel cross-modal entity resolution method for image and text that integrates a global and fine-grained joint attention mechanism. First, we map the cross-modal data into a common embedding space using a feature extraction network. Then, we integrate a global joint attention mechanism with a fine-grained joint attention mechanism, enabling the model to learn both the global semantic characteristics and the local fine-grained semantic characteristics of the cross-modal data; this fully exploits the cross-modal semantic correlation and boosts the performance of cross-modal entity resolution. Experiments on the Flickr-30K and MS-COCO datasets show that the overall R@sum score exceeds five state-of-the-art methods by 4.30% and 4.54%, respectively, demonstrating the superiority of the proposed method.
Keywords: cross-modal entity resolution; joint attention mechanism; deep learning; feature extraction; semantic correlation
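The abstract describes the architecture only at a high level: a feature extraction network projects both modalities into a common embedding space, after which global and fine-grained joint attention are combined. As a minimal sketch, not the authors' released implementation, the PyTorch-style code below illustrates one way such a pipeline could be wired up; the module names, embedding dimension, pooling choices, and scoring rule are all illustrative assumptions.

```python
# Hedged sketch of a global + fine-grained joint-attention matcher for image-text pairs.
# All dimensions and module choices are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointAttentionMatcher(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, embed_dim=512, num_heads=8):
        super().__init__()
        # Feature projection into the common embedding space
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.txt_proj = nn.Linear(txt_dim, embed_dim)
        # Fine-grained joint attention: text tokens attend over image regions
        self.fine_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Global joint attention: pooled modality vectors gate each other
        self.global_gate = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, img_regions, txt_tokens):
        # img_regions: (B, R, img_dim); txt_tokens: (B, T, txt_dim)
        v = F.normalize(self.img_proj(img_regions), dim=-1)   # (B, R, D)
        t = F.normalize(self.txt_proj(txt_tokens), dim=-1)    # (B, T, D)

        # Fine-grained correlation: each word queries the image regions
        fine_t, _ = self.fine_attn(query=t, key=v, value=v)   # (B, T, D)

        # Global correlation: mean-pool each modality, fuse, and derive a gate
        g_v, g_t = v.mean(dim=1), t.mean(dim=1)               # (B, D)
        gate = torch.sigmoid(self.global_gate(torch.cat([g_v, g_t], dim=-1)))

        # Matching score combines global and fine-grained similarities
        global_sim = F.cosine_similarity(g_v * gate, g_t * gate, dim=-1)
        fine_sim = F.cosine_similarity(fine_t.mean(dim=1), g_t, dim=-1)
        return global_sim + fine_sim
```

In a typical setup of this kind, such a score would be trained with a ranking (triplet) loss over matched and mismatched image-text pairs and evaluated with recall@K metrics, which is what the R@sum figure in the abstract aggregates.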
Imaginary Conversations
2
Authors: Mark Turner 《Journal of Foreign Languages and Cultures》 2017, No. 1, pp. 93-99 (7 pages)
A range of multimodal form-meaning pairs has arisen to prompt for the generic integration template called blended classic joint attention (BCJA). This article presents examples and principles.
Keywords: blending; conceptual integration; multimodal constructions; joint attention