Journal Articles
2 articles found
1. Design and Implementation of a Text-to-Image Matching System (cited by: 1)
Authors: 张明西, 乐水波, 李学民, 董一鹏. 《包装工程》 (Packaging Engineering), CAS, Peking University Core, 2020, No. 19, pp. 252-258 (7 pages)
Objective: To design and develop a text-to-image matching system that automatically pairs input text with images online. Methods: A bipartite "image-tag" network is built from the descriptive relations between images and text, and relevance between images and tags is computed over this network with a random-walk-with-restart model. The TextRank model extracts keywords from the input text; the keyword set serves as the query, with each keyword treated as a tag. At query time, the offline image-tag relevance scores are aggregated to obtain the relevance between the text and each image; images are ranked by relevance in descending order and the top k most relevant images are returned. Results: Experiments show that the MAP of the top five returned results reaches 0.839, accurately returning the images users expect. Conclusion: The system performs accurate image matching for input text.
Keywords: TF-IDF model; text-to-image matching; random walk with restart; TextRank model
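The offline relevance computation described in the abstract can be sketched with a standard random-walk-with-restart iteration on the image-tag bipartite adjacency matrix. This is a minimal illustration, not the paper's implementation: the toy network, node layout, and restart probability below are all assumptions.

```python
import numpy as np

def random_walk_with_restart(adj, seed, restart_prob=0.15, tol=1e-8, max_iter=1000):
    """Iterate p = (1 - c) * W @ p + c * e until convergence, where W is the
    column-normalized adjacency matrix and e is the restart vector at `seed`."""
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0  # avoid division by zero for isolated nodes
    W = adj / col_sums             # column-stochastic transition matrix
    e = np.zeros(adj.shape[0])
    e[seed] = 1.0
    p = e.copy()
    for _ in range(max_iter):
        p_next = (1 - restart_prob) * W @ p + restart_prob * e
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p

# Hypothetical "image-tag" bipartite network: nodes 0-2 are images, 3-5 are tags.
# An edge means the tag describes the image.
adj = np.array([
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
], dtype=float)

scores = random_walk_with_restart(adj, seed=3)  # relevance of all nodes to tag 3
ranked_images = np.argsort(-scores[:3])         # images ranked by relevance to tag 3
```

At query time, per the abstract, such per-tag score vectors would be aggregated over the TextRank-extracted keywords and the top-k images returned.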
2. Region-Aware Fashion Contrastive Learning for Unified Attribute Recognition and Composed Retrieval
Authors: WANG Kangping, ZHAO Mingbo. Journal of Donghua University (English Edition), CAS, 2024, No. 4, pp. 405-415 (11 pages)
Clothing attribute recognition has become an essential technology that enables users to automatically identify the characteristics of clothes and search for clothing images with similar attributes. However, existing methods cannot recognize newly added attributes and may fail to capture region-level visual features. To address these issues, a region-aware fashion contrastive language-image pre-training (RaF-CLIP) model was proposed. This model aligns cropped and segmented images with category and fine-grained attribute texts, matching fashion regions to their corresponding texts through contrastive learning. Clothing retrieval finds suitable clothing based on user-specified categories and attributes; to further improve retrieval accuracy, an attribute-guided composed network (AGCN) was introduced as an additional component on RaF-CLIP, designed specifically for composed image retrieval. This task modifies a reference image according to a textual expression to retrieve the expected target. By adopting a transformer-based bidirectional attention and gating mechanism, AGCN fuses and selects among image features and attribute text features. Experimental results show that the proposed model achieves a mean precision of 0.6633 on the attribute recognition task and a recall@10 (recall@k is the percentage of correct samples appearing in the top k retrieval results) of 39.18 on the composed image retrieval task, satisfying users' needs to search freely for clothing through images and texts.
Keywords: attribute recognition; image retrieval; contrastive language-image pre-training (CLIP); image-text matching; transformer
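The contrastive pre-training stage the abstract builds on can be illustrated with the standard symmetric InfoNCE objective used by CLIP, where matched (region image, attribute text) embedding pairs sit on the diagonal of a similarity matrix. This is a generic sketch, not RaF-CLIP itself: the embedding dimensions, batch, and temperature below are assumptions, and the AGCN fusion component is not shown.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched (image, text) pairs."""
    # L2-normalize so the dot product is cosine similarity.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature  # (batch, batch) similarities
    labels = np.arange(len(logits))                # pair i matches text i

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)       # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
txt = img + 0.01 * rng.normal(size=(4, 8))  # nearly matched pairs -> low loss
loss = clip_contrastive_loss(img, txt)
```

Pulling matched region-text pairs together this way is what lets such a model score attributes it was not explicitly trained to classify.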