10 articles found
1. TECMH: Transformer-Based Cross-Modal Hashing for Fine-Grained Image-Text Retrieval
Authors: Qiqi Li, Longfei Ma, Zheng Jiang, Mingyong Li, Bo Jin. Computers, Materials & Continua (SCIE, EI), 2023, No. 5: 3713-3728 (16 pages)
In recent years, cross-modal hash retrieval has become a popular research field because of its advantages of high efficiency and low storage. Cross-modal retrieval technology can be applied to search engines, cross-modal medical processing, etc. The existing mainstream approach uses a multi-label matching paradigm to complete retrieval tasks. However, such methods do not use the fine-grained information in multi-modal data, which may lead to suboptimal results. To avoid cross-modal matching degenerating into label matching, this paper proposes an end-to-end fine-grained cross-modal hash retrieval method that focuses on the fine-grained semantic information of multi-modal data. First, the method refines the image features and, instead of representing text features with multiple labels, processes the text with BERT. Second, it uses the inference capabilities of the transformer encoder to generate global fine-grained features. Finally, to better judge the effect of the fine-grained model, this paper uses datasets from the image-text matching field instead of the traditional label-matching datasets. Experiments on the Microsoft COCO (MS-COCO) and Flickr30K datasets, compared against previous classical methods, show that this method obtains more advanced results in the cross-modal hash retrieval field.
Keywords: deep learning, cross-modal retrieval, hash learning, transformer
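The core mechanic this abstract relies on, binarizing continuous embeddings into compact codes and retrieving by Hamming distance, can be sketched as follows. This is a minimal numpy illustration, not the paper's actual model; the features are random stand-ins for learned image/text embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in continuous embeddings (e.g., encoder outputs); 64-bit hash codes.
image_feats = rng.standard_normal((100, 64))
text_feats = image_feats + 0.1 * rng.standard_normal((100, 64))  # paired, slightly perturbed

def to_hash(feats):
    """Binarize real-valued features with the sign function into {0, 1} codes."""
    return (feats > 0).astype(np.uint8)

def hamming(a, b):
    """Pairwise Hamming distances between two sets of binary codes."""
    return (a[:, None, :] != b[None, :, :]).sum(axis=2)

img_codes = to_hash(image_feats)
txt_codes = to_hash(text_feats)

dist = hamming(txt_codes, img_codes)              # text -> image retrieval
ranks = dist.argsort(axis=1)                      # nearest codes first
r_at_1 = (ranks[:, 0] == np.arange(100)).mean()   # ground truth is the diagonal
print(f"R@1 = {r_at_1:.2f}")
```

Because matched pairs differ only by small perturbations, their codes agree on almost every bit, so Hamming ranking recovers the true pairing; storage per item is 64 bits instead of 64 floats, which is the efficiency argument the abstract makes.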
2. ViT2CMH: Vision Transformer Cross-Modal Hashing for Fine-Grained Vision-Text Retrieval
Authors: Mingyong Li, Qiqi Li, Zheng Jiang, Yan Ma. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 8: 1401-1414 (14 pages)
In recent years, the development of deep learning has further improved hash retrieval technology. Most existing hashing methods use Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to process image and text information, respectively. This subjects images and texts to local constraints, and inherent label matching cannot capture fine-grained information, often leading to suboptimal results. Driven by the development of the transformer model, we propose a framework called ViT2CMH, based mainly on the Vision Transformer rather than CNNs or RNNs, to handle deep cross-modal hashing tasks. Specifically, we use a BERT network to extract text features and the Vision Transformer as the image network of the model. Finally, the features are transformed into hash codes for efficient and fast retrieval. We conduct extensive experiments on Microsoft COCO (MS-COCO) and Flickr30K, comparing against baseline hashing methods and image-text matching methods, and show that our method performs better.
Keywords: hash learning, cross-modal retrieval, fine-grained matching, transformer
3. Adequate alignment and interaction for cross-modal retrieval
Authors: Mingkang Wang, Min Meng, Jigang Liu, Jigang Wu. Virtual Reality & Intelligent Hardware (EI), 2023, No. 6: 509-522 (14 pages)
Background: Cross-modal retrieval has attracted widespread attention in many cross-media similarity search applications, particularly image-text retrieval in the fields of computer vision and natural language processing. Recently, visual and semantic embedding (VSE) learning has shown promising improvements in image-text retrieval tasks. Most existing VSE models employ two unrelated encoders to extract features and then use complex methods to contextualize and aggregate these features into holistic embeddings. Despite recent advances, existing approaches still suffer from two limitations: (1) without considering intermediate interactions and adequate alignment between different modalities, these models cannot guarantee the discriminative ability of representations; and (2) existing feature aggregators are susceptible to certain noisy regions, which may lead to unreasonable pooling coefficients and affect the quality of the final aggregated features. Methods: To address these challenges, we propose a novel cross-modal retrieval model containing a well-designed alignment module and a novel multimodal fusion encoder, which aims to learn adequate alignment and interaction of aggregated features to effectively bridge the modality gap. Results: Experiments on the Microsoft COCO and Flickr30K datasets demonstrated the superiority of our model over state-of-the-art methods.
Keywords: cross-modal retrieval, visual semantic embedding, feature aggregation, transformer
4. Cross-Modal Hashing Retrieval Based on Deep Residual Network
Authors: Zhiyi Li, Xiaomian Xu, Du Zhang, Peng Zhang. Computer Systems Science & Engineering (SCIE, EI), 2021, No. 2: 383-405 (23 pages)
In the era of big data, rich in We-Media, single-mode retrieval systems can no longer meet people's demand for information retrieval. This paper proposes a new solution to the problem of feature extraction and unified mapping of different modes: a Cross-Modal Hashing retrieval algorithm based on a Deep Residual Network (CMHR-DRN). Model construction is divided into two stages. The first stage extracts features from the data of each modality: a Deep Residual Network (DRN) extracts the image features, and TF-IDF combined with a fully connected network extracts the text features; the resulting image and text features serve as input to the second stage. In the second stage, the image and text features are mapped into hash functions by supervised learning, projecting both into a common binary Hamming space. During mapping, the distance measurements of the original space and of the common feature space are kept as consistent as possible to improve cross-modal retrieval accuracy. In training, adaptive moment estimation (Adam) computes an adaptive learning rate for each parameter, and stochastic gradient descent (SGD) minimizes the loss function. The whole training process is completed in the Caffe deep learning framework. Experiments show that the proposed CMHR-DRN algorithm achieves better retrieval performance and stronger advantages than the other cross-modal algorithms CMFH, CMDN, and CMSSH.
Keywords: deep residual network, cross-modal retrieval, hashing, cross-modal hashing retrieval based on deep residual network (CMHR-DRN)
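The TF-IDF text-feature step mentioned in this abstract can be sketched in a few lines. This is a generic toy illustration with an invented three-document corpus, not the paper's pipeline (which feeds such vectors into a fully connected network); the IDF formula here is the plain log(N/df) variant.

```python
import math
from collections import Counter

# Toy corpus standing in for the text modality.
docs = [
    "grilled chicken with rice",
    "chicken soup with noodles",
    "chocolate cake with cream",
]

vocab = sorted({w for d in docs for w in d.split()})
df = {w: sum(w in d.split() for d in docs) for w in vocab}  # document frequency
N = len(docs)

def tfidf(doc):
    """TF-IDF vector for one document: term frequency times log inverse document frequency."""
    words = doc.split()
    tf = Counter(words)
    return [tf[w] / len(words) * math.log(N / df[w]) for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

vecs = [tfidf(d) for d in docs]
```

Words shared by every document (here "with") get zero weight, so similarity is driven by distinctive terms: the two chicken dishes end up closer to each other than to the dessert.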
5. Robust cross-modal retrieval with alignment refurbishment
Authors: Jinyi Guo, Jieyu Ding. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2023, No. 10: 1403-1415 (13 pages)
Cross-modal retrieval tries to achieve mutual retrieval between modalities by establishing consistent alignment for different modal data. Many cross-modal retrieval methods have been proposed and have achieved excellent results; however, they are trained with clean cross-modal pairs, which are semantically matched but costly, compared with easily available data with noisy alignment (i.e., paired but mismatched in semantics). When these methods are trained with noise-aligned data, performance degrades dramatically. We therefore propose robust cross-modal retrieval with alignment refurbishment (RCAR), which significantly reduces the impact of noise on the model. Specifically, RCAR first conducts multi-task learning to slow down overfitting to the noise, making the data separable. Then RCAR uses a two-component beta-mixture model to divide pairs into clean and noisy alignments, and refurbishes the labels according to the posterior probability of the noise-alignment component. In addition, we define partial and complete noise in the noise-alignment paradigm. Experimental results show that, compared with popular cross-modal retrieval methods, RCAR achieves more robust performance under both types of noise.
Keywords: cross-modal retrieval, robust learning, alignment correction, beta-mixture model
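The two-component beta-mixture step can be sketched with a small EM loop. This is an assumption-laden toy, not RCAR itself: per-pair losses are synthesized from two beta distributions (clean pairs low, noisy pairs high), and the M-step uses a weighted method-of-moments update rather than exact maximum likelihood.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)

# Synthetic per-pair losses in (0, 1): 700 clean (low) + 300 noise-aligned (high).
losses = np.concatenate([rng.beta(2, 8, 700), rng.beta(8, 2, 300)])
losses = np.clip(losses, 1e-4, 1 - 1e-4)

def beta_pdf(x, a, b):
    return gamma(a + b) / (gamma(a) * gamma(b)) * x**(a - 1) * (1 - x)**(b - 1)

def fit_moments(x, w):
    """Weighted method-of-moments estimate of Beta(a, b) parameters."""
    m = np.average(x, weights=w)
    v = np.average((x - m) ** 2, weights=w)
    common = m * (1 - m) / v - 1
    return max(m * common, 1e-2), max((1 - m) * common, 1e-2)

# EM for a two-component beta mixture: component 1 = clean, component 2 = noise.
a1, b1, a2, b2, pi = 2.0, 5.0, 5.0, 2.0, 0.5
for _ in range(30):
    p_clean = (1 - pi) * beta_pdf(losses, a1, b1)
    p_noise = pi * beta_pdf(losses, a2, b2)
    resp = p_noise / (p_clean + p_noise)   # posterior prob. of the noise component
    pi = resp.mean()
    a1, b1 = fit_moments(losses, 1 - resp)
    a2, b2 = fit_moments(losses, resp)

is_noise = resp > 0.5                      # refurbish/downweight pairs flagged as noise
truth = np.concatenate([np.zeros(700, bool), np.ones(300, bool)])
acc = (is_noise == truth).mean()
print(f"separation accuracy = {acc:.2f}")
```

The posterior `resp` is exactly the quantity the abstract says drives label refurbishment: pairs with high posterior under the high-loss component are treated as mismatched.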
6. Cross-modal recipe retrieval method based on modality semantic enhancement
Authors: Li Ming, Zhou Dong, Lei Fang, Cao Buqing. Application Research of Computers (CSCD, Peking University Core), 2024, No. 4: 1131-1137 (7 pages)
In cross-modal recipe retrieval, how to represent the features of each modality effectively is an active research question. Current methods generally use two independent neural networks to obtain image and recipe features separately and achieve retrieval through cross-modal alignment. However, these methods focus mainly on intra-modal feature information and ignore inter-modal feature interaction, so part of the useful modal information is lost. To address this problem, this paper proposes a cross-modal recipe retrieval method that enhances modality semantics with a multimodal encoder. It first extracts initial semantic features of images and recipes with pretrained models and narrows the gap between modalities with an adversarial loss. It then uses paired cross-modal attention so that features from one modality repeatedly reinforce features of the other, extracting further useful information. Next, a self-attention mechanism models the internal features of each modality to capture rich modality-specific semantics and latent relational knowledge. Finally, a triplet loss minimizes the distance between samples of the same class to realize cross-modal retrieval learning. Experimental results on the Recipe 1M dataset show that the method outperforms current mainstream methods on median rank (MedR) and recall at top K (R@K), providing a strong solution for cross-modal retrieval tasks.
Keywords: cross-modal recipe retrieval, feature extraction, modality semantic enhancement, multimodal encoder
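The triplet loss mentioned in this abstract has a compact standard form: pull a matched image-recipe pair together while pushing a mismatched recipe at least a margin away. A minimal numpy sketch with synthetic embeddings (not the paper's trained features):

```python
import numpy as np

rng = np.random.default_rng(2)

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Hinge-style triplet loss on Euclidean distances:
    loss = max(0, d(anchor, positive) - d(anchor, negative) + margin), averaged."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

# Toy embeddings: image anchors, their own recipes (close), random recipes (far).
img = rng.standard_normal((8, 16))
recipe_pos = img + 0.05 * rng.standard_normal((8, 16))
recipe_neg = rng.standard_normal((8, 16))

good = triplet_loss(img, recipe_pos, recipe_neg)      # well-separated triplets
swapped = triplet_loss(img, recipe_neg, recipe_pos)   # violated triplets
print(f"good = {good:.3f}, swapped = {swapped:.3f}")
```

When the positive is already much closer than the negative, the hinge clamps the loss to zero; swapping the roles produces a large loss, which is the gradient signal that reshapes the embedding space during training.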
7. DI-VTR: Dual inter-modal interaction model for video-text retrieval
Authors: Jie Guo, Mengying Wang, Wenwei Wang, Yan Zhou, Bin Song. Journal of Information and Intelligence, 2024, No. 5: 388-403 (16 pages)
Video-text retrieval is a challenging task in multimodal information processing because of the semantic gap between modalities. Most existing methods do not fully mine intra-modal interactions, such as the temporal correlation of video frames, which results in poor matching performance. Additionally, the imbalanced semantic information between videos and texts makes alignment of the two modalities difficult. To this end, we propose a dual inter-modal interaction network for video-text retrieval, DI-VTR. To learn the intra-modal interaction of video frames, we design a context-aware video encoder that obtains more fine-grained, content-oriented video representations. We also propose a dual inter-modal interaction module that achieves accurate multilingual alignment between the video and text modalities by introducing multilingual text to improve the representation ability of text semantic features. Extensive experimental results on commonly used video-text retrieval datasets, including MSR-VTT, MSVD, and VATEX, show that the proposed method significantly outperforms state-of-the-art methods.
Keywords: video-text retrieval, multilingual text, dual interaction, contrastive language-image pretraining (CLIP), cross-modal retrieval
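The inter-modal interaction described here typically boils down to cross-attention: tokens of one modality query the features of the other. A minimal scaled dot-product sketch in numpy (a generic mechanism illustration with random features, not DI-VTR's actual module):

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: one modality (queries) attends to another (keys/values)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over key positions
    return weights @ values, weights

rng = np.random.default_rng(3)
text_tokens = rng.standard_normal((5, 32))     # 5 text-token features
video_frames = rng.standard_normal((12, 32))   # 12 frame features

attended, weights = cross_attention(text_tokens, video_frames, video_frames)
```

Each text token ends up as a weighted mixture of frame features; a "dual" module runs the same operation in both directions (text attends to frames and frames attend to text) so both representations are conditioned on the other modality.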
8. Cross-modal recipe retrieval method incorporating a self-attention mechanism (Cited by: 4)
Authors: Lin Yang, Chu Xu, Wang Yasha, Mao Weijia, Zhao Junfeng. Journal of Frontiers of Computer Science and Technology (CSCD, Peking University Core), 2020, No. 9: 1471-1481 (11 pages)
Diet recording is a key step in diet management. To simplify the recording process, researchers have proposed recipe retrieval from food photos: a photograph is used to retrieve the corresponding recipe, from which nutrition information is generated, making recording more convenient. Recipe retrieval is a typical cross-modal retrieval problem, but compared with the general case its main difficulty is that a recipe describes a series of transformations from raw ingredients to a finished dish rather than directly visible features, so the model must deeply understand how the ingredients are processed. Current recipe retrieval work processes the text linearly, which weakens its ability to capture long-range dependencies across the steps of a recipe. To address this problem, this paper designs a cross-modal recipe retrieval model based on the self-attention mechanism. Borrowing the self-attention mechanism of the Transformer model, it captures long-range dependencies within recipes and improves on the attention mechanism of traditional methods to better mine recipe semantics. Experimental results show that the model improves recall on the recipe retrieval task by 22% over the baseline methods.
Keywords: diet recording, recipe retrieval, self-attention mechanism, cross-modal, deep neural network
9. Multi-Task Visual Semantic Embedding Network for Image-Text Retrieval
Authors: Xue-Yang Qin, Li-Shuang Li, Jing-Yao Tang, Fei Hao, Mei-Ling Ge, Guang-Yao Pang. Journal of Computer Science & Technology (SCIE, EI), 2024, No. 4: 811-826 (16 pages)
Image-text retrieval aims to capture the semantic correspondence between images and texts, which serves as a foundation and crucial component in multi-modal recommendations, search systems, and online shopping. Existing mainstream methods primarily focus on modeling the association of image-text pairs while neglecting the advantageous impact of multi-task learning on image-text retrieval. To this end, a multi-task visual semantic embedding network (MVSEN) is proposed for image-text retrieval. Specifically, we design two auxiliary tasks, text-text matching and multi-label classification, as semantic constraints to improve the generalization and robustness of visual semantic embedding from a training perspective. Besides, we present an intra- and inter-modality interaction scheme to learn discriminative visual and textual feature representations by facilitating information flow within and between modalities. Subsequently, we utilize multi-layer graph convolutional networks in a cascading manner to infer the correlation of image-text pairs. Experimental results show that MVSEN outperforms state-of-the-art methods on two publicly available datasets, Flickr30K and MSCOCO, with rSum improvements of 8.2% and 3.0%, respectively.
Keywords: image-text retrieval, cross-modal retrieval, multi-task learning, graph convolutional network
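The rSum figure quoted in this abstract is the standard retrieval summary metric: R@1 + R@5 + R@10 (in percent) summed over both retrieval directions, so 600 is a perfect score. A minimal numpy sketch on a synthetic similarity matrix (random stand-in embeddings, not MVSEN's outputs):

```python
import numpy as np

def recall_at_k(sim, k):
    """Fraction of queries whose ground-truth item (the diagonal) appears in the top-k."""
    ranks = np.argsort(-sim, axis=1)                 # best match first
    gt = np.arange(sim.shape[0])[:, None]
    return (ranks[:, :k] == gt).any(axis=1).mean()

rng = np.random.default_rng(4)
img = rng.standard_normal((50, 32))
txt = img + 0.3 * rng.standard_normal((50, 32))      # paired texts, perturbed
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt /= np.linalg.norm(txt, axis=1, keepdims=True)
sim = img @ txt.T                                    # cosine similarity matrix

# rSum: R@1 + R@5 + R@10 over both directions (image->text and text->image), in percent.
rsum = sum(100 * recall_at_k(s, k) for s in (sim, sim.T) for k in (1, 5, 10))
print(f"rSum = {rsum:.1f}")
```

Transposing the similarity matrix swaps the query and gallery roles, which is all that distinguishes the two retrieval directions.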
10. Multimodal Pretraining from Monolingual to Multilingual (Cited by: 1)
Authors: Liang Zhang, Ludan Ruan, Anwen Hu, Qin Jin. Machine Intelligence Research (EI, CSCD), 2023, No. 2: 220-232 (13 pages)
Multimodal pretraining has made convincing achievements in various downstream tasks in recent years. However, since the majority of existing works build models on English, their applications are limited by language. In this work, we address this issue by developing models with multimodal and multilingual capabilities. We explore two types of methods to extend a multimodal pretraining model from monolingual to multilingual. Specifically, we propose a pretraining-based model named multilingual multimodal pretraining (MLMM), and two generalization-based models named multilingual CLIP (M-CLIP) and multilingual acquisition (MLA). In addition, we further extend the generalization-based models to incorporate the audio modality and develop multilingual CLIP for vision, language, and audio (CLIP4VLA). Our models achieve state-of-the-art performance on multilingual vision-text retrieval, visual question answering, and image captioning benchmarks. Based on the experimental results, we discuss the pros and cons of the two types of models and their potential practical applications.
Keywords: multilingual pretraining, multimodal pretraining, cross-lingual transfer, multilingual generation, cross-modal retrieval