Journal Articles
581 articles found
1. TECMH: Transformer-Based Cross-Modal Hashing for Fine-Grained Image-Text Retrieval
Authors: Qiqi Li, Longfei Ma, Zheng Jiang, Mingyong Li, Bo Jin. Computers, Materials & Continua (SCIE, EI), 2023, Issue 5, pp. 3713-3728 (16 pages)
In recent years, cross-modal hash retrieval has become a popular research field because of its advantages of high efficiency and low storage. Cross-modal retrieval technology can be applied to search engines, cross-modal medical processing, etc. The existing main method is to use a multi-label matching paradigm to finish the retrieval tasks. However, such methods do not use fine-grained information in the multi-modal data, which may lead to suboptimal results. To avoid cross-modal matching turning into label matching, this paper proposes an end-to-end fine-grained cross-modal hash retrieval method, which can focus more on the fine-grained semantic information of multi-modal data. First, the method refines the image features and no longer uses multiple labels to represent text features, but uses BERT for processing. Second, this method uses the inference capabilities of the transformer encoder to generate global fine-grained features. Finally, in order to better judge the effect of the fine-grained model, this paper uses datasets from the image-text matching field instead of the traditional label-matching datasets. The experiments are conducted on the Microsoft COCO (MS-COCO) and Flickr30K datasets and compared with previous classical methods. The experimental results show that this method can obtain more advanced results in the cross-modal hash retrieval field.
Keywords: deep learning, cross-modal retrieval, hash learning, transformer
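A hedged sketch of the hashing step such transformer-based methods share: continuous encoder features are squashed with tanh during training and binarized with sign() at retrieval time. Dimensions and names are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class HashHead(nn.Module):
    """Maps continuous features (e.g., transformer [CLS] outputs) to hash codes."""
    def __init__(self, feat_dim=768, hash_bits=64):   # dimensions are assumptions
        super().__init__()
        self.fc = nn.Linear(feat_dim, hash_bits)

    def forward(self, features):
        # tanh keeps the relaxation differentiable for training
        return torch.tanh(self.fc(features))

    @torch.no_grad()
    def binarize(self, features):
        # discrete codes in {-1, +1} used for Hamming-space retrieval
        return torch.sign(self.forward(features))

head = HashHead()
img_feat = torch.randn(4, 768)       # stand-in for encoder outputs
codes = head.binarize(img_feat)      # shape (4, 64), entries in {-1, +1}
```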
2. ViT2CMH: Vision Transformer Cross-Modal Hashing for Fine-Grained Vision-Text Retrieval
Authors: Mingyong Li, Qiqi Li, Zheng Jiang, Yan Ma. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 8, pp. 1401-1414 (14 pages)
In recent years, the development of deep learning has further improved hash retrieval technology. Most existing hashing methods use Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to process image and text information, respectively. This subjects images or texts to local constraints, and inherent label matching cannot capture fine-grained information, often leading to suboptimal results. Driven by the development of the transformer model, we propose a framework called ViT2CMH, based mainly on the Vision Transformer rather than CNNs or RNNs, to handle deep cross-modal hashing tasks. Specifically, we use a BERT network to extract text features and use the Vision Transformer as the image network of the model. Finally, the features are transformed into hash codes for efficient and fast retrieval. We conduct extensive experiments on Microsoft COCO (MS-COCO) and Flickr30K, comparing with baselines of some hashing methods and image-text matching methods, showing that our method has better performance.
Keywords: hash learning, cross-modal retrieval, fine-grained matching, transformer
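Once both modalities are hashed, ranking is a Hamming-distance scan; for codes in {-1, +1} the distance reduces to a dot product. A hedged NumPy sketch of this retrieval step, common to the hashing papers listed here:

```python
import numpy as np

def hamming_rank(query_codes, db_codes):
    """query_codes: (q, bits), db_codes: (n, bits), entries in {-1, +1}."""
    bits = db_codes.shape[1]
    # inner product s relates to Hamming distance d via d = (bits - s) / 2
    dist = (bits - query_codes @ db_codes.T) / 2
    return np.argsort(dist, axis=1)          # nearest database items first

rng = np.random.default_rng(0)
db = np.sign(rng.standard_normal((1000, 64)))   # synthetic database codes
q = np.sign(rng.standard_normal((5, 64)))       # synthetic query codes
print(hamming_rank(q, db)[:, :10])              # top-10 indices per query
```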
3. Adequate alignment and interaction for cross-modal retrieval
Authors: Mingkang WANG, Min MENG, Jigang LIU, Jigang WU. Virtual Reality & Intelligent Hardware (EI), 2023, Issue 6, pp. 509-522 (14 pages)
Background: Cross-modal retrieval has attracted widespread attention in many cross-media similarity search applications, particularly image-text retrieval in the fields of computer vision and natural language processing. Recently, visual and semantic embedding (VSE) learning has shown promising improvements in image-text retrieval tasks. Most existing VSE models employ two unrelated encoders to extract features and then use complex methods to contextualize and aggregate these features into holistic embeddings. Despite recent advances, existing approaches still suffer from two limitations: (1) without considering intermediate interactions and adequate alignment between different modalities, these models cannot guarantee the discriminative ability of representations; and (2) existing feature aggregators are susceptible to certain noisy regions, which may lead to unreasonable pooling coefficients and affect the quality of the final aggregated features. Methods: To address these challenges, we propose a novel cross-modal retrieval model containing a well-designed alignment module and a novel multimodal fusion encoder that aims to learn the adequate alignment and interaction of aggregated features to effectively bridge the modality gap. Results: Experiments on the Microsoft COCO and Flickr30k datasets demonstrated the superiority of our model over state-of-the-art methods.
Keywords: cross-modal retrieval, visual semantic embedding, feature aggregation, transformer
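The abstract does not state the training objective, but VSE models are commonly trained with a hinge-based triplet loss over in-batch similarities, often with the hardest negative (the VSE++ formulation). A hedged PyTorch sketch of that standard loss, not necessarily this paper's:

```python
import torch
import torch.nn.functional as F

def triplet_loss_hard(img_emb, txt_emb, margin=0.2):
    """Hinge triplet loss with hardest in-batch negatives (VSE++-style)."""
    img = F.normalize(img_emb, dim=1)
    txt = F.normalize(txt_emb, dim=1)
    sim = img @ txt.T                          # (n, n); diagonal = positive pairs
    pos = sim.diag().view(-1, 1)
    mask = torch.eye(sim.size(0), dtype=torch.bool)
    # image->text: penalize negatives that score within `margin` of the positive
    cost_t = (margin + sim - pos).clamp(min=0).masked_fill(mask, 0)
    # text->image: same, per column
    cost_i = (margin + sim - pos.T).clamp(min=0).masked_fill(mask, 0)
    return cost_t.max(1)[0].mean() + cost_i.max(0)[0].mean()

loss = triplet_loss_hard(torch.randn(8, 512, requires_grad=True),
                         torch.randn(8, 512, requires_grad=True))
loss.backward()
```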
4. A Sentence Retrieval Generation Network Guided Video Captioning
Authors: Ou Ye, Mimi Wang, Zhenhua Yu, Yan Fu, Shun Yi, Jun Deng. Computers, Materials & Continua (SCIE, EI), 2023, Issue 6, pp. 5675-5696 (22 pages)
Currently, video captioning models based on an encoder-decoder mainly rely on a single video input source. The contents of video captioning are limited, since few studies have employed external corpus information to guide the generation of video captions, which is not conducive to the accurate description and understanding of video content. To address this issue, a novel video captioning method guided by a sentence retrieval generation network (ED-SRG) is proposed in this paper. First, a ResNeXt network model, an efficient convolutional network for online video understanding (ECO) model, and a long short-term memory (LSTM) network model are integrated to construct an encoder-decoder, which is utilized to extract the 2D features, 3D features, and object features of video data, respectively. These features are decoded to generate textual sentences that conform to the video content for sentence retrieval. Then, a sentence-transformer network model is employed to retrieve sentences in an external corpus that are semantically similar to the above textual sentences. The candidate sentences are screened out through similarity measurement. Finally, a novel GPT-2 network model is constructed based on the GPT-2 network structure. The model introduces a designed random selector to randomly select predicted words with a high probability in the corpus, which is used to guide and generate textual sentences that are more in line with natural human language expressions. The proposed method is compared with several existing works by experiments. The results show that the indicators BLEU-4, CIDEr, ROUGE_L, and METEOR are improved by 3.1%, 1.3%, 0.3%, and 1.5% on the public dataset MSVD, and by 1.3%, 0.5%, 0.2%, and 1.9% on the public dataset MSR-VTT, respectively. It can be seen that the proposed method can generate video captions with richer semantics than several state-of-the-art approaches.
Keywords: video captioning, encoder-decoder, sentence retrieval, external corpus, RS, GPT-2 network model
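The corpus-retrieval stage described above can be sketched with the sentence-transformers library; the model name, corpus, and similarity threshold below are illustrative assumptions, not values from the paper.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")    # assumed model choice
generated = "a man is riding a horse on the beach" # caption from the encoder-decoder
corpus = [
    "someone rides a horse along the shoreline",
    "a chef is cutting vegetables",
    "two dogs play in the park",
]
emb_q = model.encode(generated, convert_to_tensor=True)
emb_c = model.encode(corpus, convert_to_tensor=True)
scores = util.cos_sim(emb_q, emb_c)[0]
# screen candidates by a similarity threshold before guiding generation
candidates = [s for s, sc in zip(corpus, scores) if sc > 0.5]
print(candidates)
```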
5. Cross-Modal Hashing Retrieval Based on Deep Residual Network
Authors: Zhiyi Li, Xiaomian Xu, Du Zhang, Peng Zhang. Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 2, pp. 383-405 (23 pages)
In the era of big data rich in We-Media, single-mode retrieval systems can no longer meet people's demand for information retrieval. This paper proposes a new solution to the problem of feature extraction and unified mapping of different modes: a Cross-Modal Hashing retrieval algorithm based on a Deep Residual Network (CMHR-DRN). The model construction is divided into two stages. The first stage is the feature extraction of different modal data, including the use of a Deep Residual Network (DRN) to extract image features and a method combining TF-IDF with a fully connected network to extract text features; the obtained image and text features are used as the input of the second stage. In the second stage, the image and text features are mapped into hash functions by supervised learning and mapped to a common binary Hamming space. During mapping, the distance measurements of the original space and the common feature space are kept as unchanged as possible to improve the accuracy of cross-modal retrieval. In training the model, adaptive moment estimation (Adam) is used to calculate the adaptive learning rate of each parameter, and stochastic gradient descent (SGD) is used to obtain the minimum loss function. The whole training process is completed on the Caffe deep learning framework. Experiments show that the proposed CMHR-DRN algorithm has better retrieval performance and stronger advantages than other cross-modal algorithms such as CMFH, CMDN, and CMSSH.
Keywords: deep residual network, cross-modal retrieval, hashing, cross-modal hashing retrieval based on deep residual network
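The text branch's first stage, TF-IDF features later fed to a fully connected network, might look like this scikit-learn sketch; the corpus and vocabulary cap are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "a brown dog runs across the field",
    "a man rides a red motorcycle",
    "the dog chases the motorcycle",
]
vectorizer = TfidfVectorizer(max_features=1024)  # vocabulary cap is an assumption
tfidf = vectorizer.fit_transform(docs)           # sparse (3, vocab) matrix
print(tfidf.shape, tfidf.toarray()[0][:10])      # dense row would feed the FC net
```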
6. Real-time and Automatic Close-up Retrieval from Compressed Videos
Authors: Ying Weng, Jianmin Jiang. International Journal of Automation and Computing (EI), 2008, Issue 2, pp. 198-201 (4 pages)
This paper proposes a thorough scheme, by virtue of a camera zooming descriptor with a two-level threshold, to automatically retrieve close-ups directly from Moving Picture Experts Group (MPEG) compressed videos based on camera motion analysis. A new algorithm for fast camera motion estimation in the compressed domain is presented. In the retrieval process, camera-motion-based semantic retrieval is built. To improve the coverage of the proposed scheme, close-up retrieval in all kinds of videos is investigated. Extensive experiments illustrate that the proposed scheme provides promising retrieval results under real-time and automatic application scenarios.
Keywords: camera motion analysis, close-up retrieval, Moving Picture Experts Group (MPEG), compressed videos
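The paper's zoom descriptor works on MPEG motion vectors; a common heuristic is that a zoom-in produces a vector field diverging from the frame center. The NumPy sketch below implements that generic divergence test, not the paper's exact two-level-threshold descriptor.

```python
import numpy as np

def zoom_score(mv_x, mv_y):
    """mv_x, mv_y: (H, W) macroblock motion components from the compressed stream."""
    h, w = mv_x.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # vectors pointing outward from the frame center
    rx, ry = xs - (w - 1) / 2.0, ys - (h - 1) / 2.0
    norm = np.hypot(rx, ry) + 1e-6
    # mean projection of motion onto the outward direction:
    # strongly positive suggests zoom-in, strongly negative zoom-out
    return np.mean((mv_x * rx + mv_y * ry) / norm)

# synthetic expanding field (zoom-in): vectors point away from the center
h, w = 18, 22
ys, xs = np.mgrid[0:h, 0:w]
mvx, mvy = xs - (w - 1) / 2.0, ys - (h - 1) / 2.0
print(zoom_score(mvx, mvy) > 0)   # True
```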
7. Automated neurosurgical video segmentation and retrieval system
Authors: Engin Mendi, Songul Cecen, Emre Ermisoglu, Coskun Bayrak. Journal of Biomedical Science and Engineering, 2010, Issue 6, pp. 618-624 (7 pages)
Medical video repositories play important roles in many health-related areas such as medical imaging, medical research and education, medical diagnostics, and the training of medical professionals. Due to the increasing availability of digital video data, indexing, annotating, and retrieving the information are crucial. Since these processes are both computationally expensive and time-consuming, automated systems are needed. In this paper, we present a medical video segmentation and retrieval research initiative. We describe the key components of the system, including the video segmentation engine, image retrieval engine, and image quality assessment module. The aim of this research is to provide an online tool for indexing, browsing, and retrieving neurosurgical videotapes. This tool will allow people to retrieve the specific information they are interested in from a long videotape instead of looking through the entire content.
Keywords: video processing, video summarization, video segmentation, image retrieval, image quality assessment
8. Semantic-Based Video Retrieval Survey
Authors: Shaimaa Toriah Mohamed Toriah, Atef Zaki Ghalwash, Aliaa A. A. Youssif. Journal of Computer and Communications, 2018, Issue 8, pp. 28-44 (17 pages)
There is tremendous growth of digital data due to the stunning progress of the digital devices that facilitate capturing it. Digital data include images, text, and video. Video represents a rich source of information; thus, there is an urgent need to retrieve, organize, and automate videos. Video retrieval is a vital process in multimedia applications such as video search engines, digital museums, and video-on-demand broadcasting. In this paper, the different approaches to video retrieval are outlined and briefly categorized. Moreover, the different methods that bridge the semantic gap in video retrieval are discussed in more detail.
Keywords: semantic video retrieval, concept detectors, context-based concept fusion, semantic gap
9. Similar Video Retrieval via Order-Aware Exemplars and Alignment
Authors: Teruki Horie, Masato Uchida, Yasuo Matsuyama. Journal of Signal and Information Processing, 2018, Issue 2, pp. 73-91 (19 pages)
In this paper, we present machine learning algorithms and systems for similar-video retrieval, where the query is itself a video. For the similarity measurement, exemplars, or representative frames in each video, are extracted by unsupervised learning. For this learning, we chose order-aware competitive learning. After obtaining a set of exemplars for each video, the similarity is computed. Because the numbers and positions of the exemplars differ in each video, we use a similarity computing method called M-distance, which generalizes existing global and local alignment methods using followers to the exemplars. To represent each frame in the video, this paper emphasizes the Frame Signature of the ISO/IEC standard so that the total system, along with its graphical user interface, becomes practical. Experiments on the detection of inserted plagiaristic scenes showed excellent precision-recall curves, with precision values very close to 1. Thus, the proposed system can work as a plagiarism detector for videos. In addition, this method can be regarded as the structuring of unstructured data via numerical labeling by exemplars. Finally, further sophistication of this labeling is discussed.
Keywords: similar video retrieval, exemplar learning, M-distance, sequence alignment, data structuring
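The M-distance generalizes global and local alignment over exemplar sequences. As a hedged illustration of the underlying idea, here is a plain Needleman-Wunsch-style global alignment cost over two exemplar feature sequences; the paper's follower mechanism is not reproduced, and the gap penalty is an assumption.

```python
import numpy as np

def align_cost(A, B, gap=1.0):
    """A: (m, d), B: (n, d) exemplar feature sequences; lower cost = more similar."""
    m, n = len(A), len(B)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, :] = np.arange(n + 1) * gap
    D[:, 0] = np.arange(m + 1) * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = np.linalg.norm(A[i - 1] - B[j - 1])
            D[i, j] = min(D[i - 1, j - 1] + match,   # align two exemplars
                          D[i - 1, j] + gap,         # skip an exemplar in A
                          D[i, j - 1] + gap)         # skip an exemplar in B
    return D[m, n]

rng = np.random.default_rng(1)
v1 = rng.standard_normal((6, 16))
v2 = v1[[0, 1, 1, 2, 3, 4, 5]] + 0.01 * rng.standard_normal((7, 16))  # near-copy
print(align_cost(v1, v2) < align_cost(v1, rng.standard_normal((7, 16))))  # True
```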
10. Dynamic Hyperlinker: Innovative Solution for 3D Video Content Search and Retrieval
Authors: Mohammad Rafiq Swash, Amar Aggoun, Obaidullah Abdul Fatah, Bei Li. Journal of Computer and Communications, 2016, Issue 6, pp. 10-23 (14 pages)
Recently, 3D display technology and content creation tools have undergone rigorous development, and as a result they have been widely adopted by home and professional users. 3D digital repositories are increasing and becoming ubiquitously available. However, searching and visualizing 3D content remains a great challenge. In this paper, we propose and present the development of a novel approach for creating hypervideos, which eases 3D content search and retrieval. It is called the dynamic hyperlinker for the 3D content search and retrieval process. It advances 3D multimedia navigability and searchability by creating dynamic links for selectable and clickable objects in the video scene while the user consumes the 3D video clip. The proposed system involves 3D video processing, such as detecting/tracking clickable objects, annotating objects, and metadata engineering, including a 3D content descriptive protocol. Such a system attracts attention from both home and professional users, more specifically broadcasters and digital content providers. The experiments are conducted on full-parallax holoscopic 3D videos, also known as integral images.
Keywords: holoscopic 3D image, integral image, 3D video, 3D display, video search and retrieval, hyperlinker, hypervideo
11. Sign Language Video Retrieval Based on Trajectory
Authors: Shilin Zhang, Mei Gu. Journal of Communication and Computer, 2010, Issue 9, pp. 32-35 (4 pages)
Keywords: content-based video retrieval, sign language, edit distance, distance algorithm, color histogram, string, correction method, memory space
12. Sign Video Retrieval under Complex Background
Authors: Shilin Zhang, Mei Gu. Journal of Communication and Computer, 2010, Issue 8, pp. 14-19 (6 pages)
Keywords: video retrieval system, complex background, hidden Markov model, HMM model, sign language recognition, search problem, dynamic characteristics, motion features
13. Video Retrieval Using Color and Spatial Information of Human Appearance
Authors: Sofina Yakhu, Nikom Suvonvorn. Journal of Communication and Computer, 2012, Issue 6, pp. 636-643 (8 pages)
Keywords: content-based video retrieval, appearance color, spatial information, human-centered design, video surveillance system, target search, video data, VR system
14. Robust cross-modal retrieval with alignment refurbishment
Authors: Jinyi GUO, Jieyu DING. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2023, Issue 10, pp. 1403-1415 (13 pages)
Cross-modal retrieval tries to achieve mutual retrieval between modalities by establishing consistent alignment for different modal data. Currently, many cross-modal retrieval methods have been proposed and have achieved excellent results; however, these are trained with clean cross-modal pairs, which are semantically matched but costly, compared with easily available data with noisy alignment (i.e., paired but mismatched in semantics). When these methods are trained with noise-aligned data, their performance degrades dramatically. Therefore, we propose robust cross-modal retrieval with alignment refurbishment (RCAR), which significantly reduces the impact of noise on the model. Specifically, RCAR first conducts multi-task learning to slow down the overfitting to the noise and make the data separable. Then, RCAR uses a two-component beta-mixture model to divide the data into clean and noisy alignments and refurbishes the labels according to the posterior probability of the noise-alignment component. In addition, we define partial and complete noise in the noise-alignment paradigm. Experimental results show that, compared with popular cross-modal retrieval methods, RCAR achieves more robust performance with both types of noise.
Keywords: cross-modal retrieval, robust learning, alignment correction, beta-mixture model
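The noise-separation step can be illustrated with a two-component beta mixture fitted by EM over per-pair loss values in [0, 1]. This hedged sketch uses weighted method-of-moments updates and a simplified initialization, so it approximates the idea rather than reproducing RCAR's implementation.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def fit_beta_mixture(x, iters=30):
    """x: values in (0, 1), e.g., normalized per-pair losses."""
    x = np.clip(x, 1e-4, 1 - 1e-4)
    # init: low-loss component 0 (clean), high-loss component 1 (noisy)
    a, b, pi = np.array([2.0, 2.0]), np.array([5.0, 1.5]), np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        pdf = np.stack([pi[k] * beta_dist.pdf(x, a[k], b[k]) for k in range(2)])
        resp = pdf / pdf.sum(0)
        # M-step: weighted method of moments per component
        for k in range(2):
            w = resp[k] / resp[k].sum()
            mu = np.sum(w * x)
            var = np.sum(w * (x - mu) ** 2) + 1e-8
            common = max(mu * (1 - mu) / var - 1, 1e-3)
            a[k], b[k] = mu * common, (1 - mu) * common
        pi = resp.mean(axis=1)
    return a, b, pi, resp

rng = np.random.default_rng(0)
losses = np.concatenate([rng.beta(2, 8, 800), rng.beta(8, 2, 200)])  # clean + noisy
a, b, pi, resp = fit_beta_mixture(losses)
is_noisy = resp[1] > 0.5      # posterior of the noise component drives refurbishment
print(pi, is_noisy.mean())
```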
15. Inference and retrieval of soccer event
Authors: SUN Xing-hua, YANG Jing-yu. Journal of Communication and Computer, 2007, Issue 3, pp. 18-32 (15 pages)
Keywords: soccer match, video extraction, context, Bayesian network, user-defined
16. Laser Video Image Retrieval Based on Keyframe Extraction via the Mean-Square Deviation of Mutual Information
Authors: 胡秀, 王书爱. Laser Journal (CAS, PKU Core), 2024, Issue 3, pp. 145-149 (5 pages)
To ensure that laser video image retrieval results contain no duplicate redundant images, a laser video image retrieval method based on keyframe extraction via the mean-square deviation of mutual information is proposed. The keyframe extraction method sets the criterion for the cluster centers of laser video keyframes by maximizing the mean-square deviation of the mutual information of laser video image colors, and clusters accordingly to extract non-duplicate keyframes. A keyframe-based retrieval method then takes the extracted keyframes as the core matching content and retrieves the laser video images whose keyframes are most similar to those of the query, completing the retrieval. Experimental results show that with this method, the redundancy of the extracted keyframes is only 0.01, the MAP of the retrieval results reaches 0.98, and no duplicate redundant images appear in the results.
Keywords: mutual information, mean-square deviation, keyframe extraction, laser video, image retrieval, clustering algorithm
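The criterion above builds on the mutual information of frame colors. As a hedged building block (the paper's mean-square-deviation clustering is not reproduced), mutual information between two frames can be computed from their joint gray-level histogram:

```python
import numpy as np

def frame_mutual_information(f1, f2, bins=32):
    """f1, f2: uint8 grayscale frames of equal shape; returns MI in nats."""
    joint, _, _ = np.histogram2d(f1.ravel(), f2.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0                      # avoid log(0) on empty histogram cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (120, 160), dtype=np.uint8)
b = rng.integers(0, 256, (120, 160), dtype=np.uint8)
print(frame_mutual_information(a, a))   # high: identical frames
print(frame_mutual_information(a, b))   # near zero: independent frames
```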
17. NewsVideoCAR: A Content-Based TV News Program Browsing and Retrieval System (Cited 3 times)
Authors: 熊华, 老松杨, 吴玲琦, 李恒峰, 吴玲达, 李国辉. Computer Engineering (CAS, CSCD, PKU Core), 2000, Issue 11, pp. 73-75 (3 pages)
This paper introduces the architecture of the NewsVideoCAR system, the basic ideas behind its core techniques, and the key points of its browsing interface design.
Keywords: NewsVideoCAR, TV news programs, program browsing and retrieval system
18. Query-Aware Dual Contrastive Learning Network for Cross-Modal Retrieval
Authors: 尹梦冉, 梁美玉, 于洋, 曹晓雯, 杜军平, 薛哲. Journal of Software (EI, CSCD, PKU Core), 2024, Issue 5, pp. 2120-2132 (13 pages)
Recently, a new task, video corpus moment retrieval (VCMR), has been proposed: retrieving from an unsegmented video corpus the short video moment corresponding to a query sentence. The key to existing cross-modal video-text retrieval work lies in aligning and fusing features of different modalities; however, naively performing cross-modal alignment and fusion neither ensures that semantically similar data from the same modality stay close in the joint feature space nor considers the semantics of the query sentence. To address these problems, this paper proposes a query-aware cross-modal dual contrastive learning network (QACLN) for multimodal video moment retrieval, which obtains unified semantic representations of different modalities by combining inter-modal and intra-modal contrastive learning. Specifically, a query-aware cross-modal semantic fusion strategy is proposed that adaptively fuses multimodal features, such as the video's visual-modality and subtitle-modality features, according to the perceived query semantics, yielding a query-aware multimodal joint representation of the video. Furthermore, an inter-modal and intra-modal dual contrastive learning mechanism for videos and query sentences is proposed to strengthen semantic alignment and fusion across modalities, improving the discriminability and semantic consistency of the data representations of different modalities. Finally, 1D convolutional boundary regression and cross-modal semantic similarity computation are adopted to accomplish moment localization and video retrieval. Extensive experiments demonstrate that the proposed QACLN outperforms the baseline methods.
Keywords: cross-modal semantic fusion, cross-modal retrieval, video moment localization, contrastive learning
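The inter-modal half of a dual contrastive objective like the one described is typically a symmetric InfoNCE loss. A hedged PyTorch sketch; the temperature and dimensions are illustrative, not values from the paper.

```python
import torch
import torch.nn.functional as F

def infonce_symmetric(video_emb, query_emb, tau=0.07):
    v = F.normalize(video_emb, dim=1)
    q = F.normalize(query_emb, dim=1)
    logits = v @ q.T / tau                 # (n, n); diagonal holds matched pairs
    targets = torch.arange(v.size(0))
    # pull matched video/query pairs together and push apart in-batch
    # negatives, symmetrically in both retrieval directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

video = torch.randn(8, 256, requires_grad=True)   # stand-ins for fused features
query = torch.randn(8, 256, requires_grad=True)
infonce_symmetric(video, query).backward()
```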
19. Pano Video: Camera Motion Modeling and a Method for Estimating Camera Motion Parameters from Video (Cited 6 times)
Authors: 张茂军, 胡晓峰, 库锡树. Journal of Image and Graphics (Series A) (CSCD), 1997, Issue 8, pp. 623-628 (6 pages)
By modeling camera motions such as translation, rotation, and zoom, and combining the motion model with a method based on pixel intensity changes, the camera motion parameters are estimated. With the obtained motion parameters, the video can be composed into a panorama, which has wide applications in video compression and retrieval. Experiments show that the method can be successfully applied to video compression and video retrieval in video conferencing systems.
Keywords: camera, motion estimation, panorama, video compression, multimedia technology
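Camera zoom, rotation, and translation between frames are often modeled jointly as a 2D similarity transform. The paper estimates parameters from pixel intensity changes; the hedged least-squares sketch below instead fits the same motion model to point correspondences, a deliberately simplified stand-in.

```python
import numpy as np

def estimate_similarity(p, q):
    """p, q: (n, 2) matched points with q ~ s * R(theta) @ p + t.
    Solves linearly for (a, b, tx, ty) where a = s*cos(theta), b = s*sin(theta)."""
    n = len(p)
    A = np.zeros((2 * n, 4))
    y = q.reshape(-1)                       # interleaved (qx, qy) targets
    A[0::2, 0], A[0::2, 1], A[0::2, 2] = p[:, 0], -p[:, 1], 1   # qx rows
    A[1::2, 0], A[1::2, 1], A[1::2, 3] = p[:, 1],  p[:, 0], 1   # qy rows
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), np.array([tx, ty])  # scale, angle, t

rng = np.random.default_rng(2)
pts = rng.standard_normal((50, 2))
s, th = 1.2, 0.1                            # synthetic zoom and rotation
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
obs = s * pts @ R.T + np.array([3.0, -1.0])
print(estimate_similarity(pts, obs))        # ~ (1.2, 0.1, [3, -1])
```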
20. Design of a TV Video Retrieval System Fusing Multimodal Information
Authors: 张玉艳. Video Engineering, 2024, Issue 4, pp. 40-42 (3 pages)
With the rapid development of the Internet and digital technology, video data has become increasingly important on the network, and user demand for video retrieval has grown accordingly. To address the limitations of traditional video retrieval methods in mining video content, a TV video retrieval system fusing multimodal information is designed. Deep learning techniques are used to extract image and text information from videos, a retrieval model is built to fuse the information, and a storage server is set up for the fused information, achieving more accurate and comprehensive video retrieval.
Keywords: multimodal information, TV video, retrieval system