Text-based Person Search via Virtual Attribute Learning
Abstract: Text-based person search aims to find images of a target person matching a given text description in a person database, and has recently attracted wide attention from both academia and industry. The task faces two challenges at once: fine-grained retrieval and the heterogeneous gap between images and texts. Some methods use supervised attribute learning to extract attribute-related features and build fine-grained associations between texts and images. Attribute annotations, however, are hard to obtain, which leads to poor performance of these methods in practice. How to extract attribute-related features without attribute annotations and establish fine-grained cross-modal semantic associations thus becomes a key problem. To address it, this study incorporates pre-training technology and proposes a text-based person search method via virtual attribute learning, which builds fine-grained cross-modal semantic associations between images and texts through unsupervised attribute learning. First, based on the invariance and cross-modal semantic consistency of pedestrian attributes, a semantics-guided attribute decoupling method is proposed, which uses pedestrian identity labels as the supervision signal to guide the model to decouple attribute-related features. Second, a feature learning module based on semantic reasoning is presented, which builds a semantic graph from the relations between attributes and exchanges information among attributes through the graph model to enhance the cross-modal discriminative ability of the features. The proposed approach is compared with existing methods on the public text-based person search dataset CUHK-PEDES and the cross-modal retrieval dataset Flickr30k, and the experimental results verify its effectiveness.
Authors: WANG Cheng-Ji; SU Jia-Wei; LUO Zhi-Ming; CAO Dong-Lin; LIN Yao-Jin; LI Shao-Zi (School of Informatics, Xiamen University, Xiamen 361005, China; School of Computer Science, Minnan Normal University, Zhangzhou 363000, China; Key Laboratory of Data Science and Intelligence Application (Minnan Normal University), Zhangzhou 363000, China)
Source: Journal of Software (软件学报), 2023, No. 5, pp. 2035-2050 (16 pages). Indexed in EI and CSCD; Peking University Core Journal.
Funding: National Natural Science Foundation of China (61876159, 62076210, 62076116).
Keywords: person search; cross-modality; attribute learning; pre-training
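
The abstract describes two components: identity-supervised decoupling of attribute-related features, and graph-based message passing among attributes. The PyTorch sketch below illustrates one plausible reading of those two steps. All layer sizes, the number of virtual attributes, and the learnable dense adjacency are assumptions for illustration only; the abstract does not specify the paper's actual architecture.

```python
# A minimal sketch of virtual attribute decoupling + graph reasoning (assumptions noted).
import torch
import torch.nn as nn
import torch.nn.functional as F


class VirtualAttributeDecoupler(nn.Module):
    """Decouples a global feature into K 'virtual' attribute features.

    Hypothetical design: one linear head per virtual attribute, so no
    attribute annotations are needed, matching the abstract's premise.
    """

    def __init__(self, feat_dim: int = 2048, num_attrs: int = 8, attr_dim: int = 256):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, attr_dim) for _ in range(num_attrs)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, feat_dim)
        # Stack per-attribute features: (B, K, attr_dim).
        return torch.stack([head(x) for head in self.heads], dim=1)


class AttributeGraphReasoning(nn.Module):
    """One round of message passing over a graph whose nodes are attributes.

    A learnable dense adjacency stands in for the semantic graph the paper
    builds from relations between attributes.
    """

    def __init__(self, num_attrs: int = 8, attr_dim: int = 256):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(num_attrs))  # learned attribute relations
        self.proj = nn.Linear(attr_dim, attr_dim)

    def forward(self, attrs: torch.Tensor) -> torch.Tensor:  # attrs: (B, K, attr_dim)
        # Row-normalize so each attribute aggregates a convex combination of all attributes.
        weights = F.softmax(self.adj, dim=-1)
        messages = torch.einsum("kj,bjd->bkd", weights, attrs)
        return attrs + F.relu(self.proj(messages))  # residual update


# Toy usage: identity classification supervises the decoupled features,
# as the abstract states identity labels are the supervision signal.
if __name__ == "__main__":
    backbone_feat = torch.randn(4, 2048)        # stand-in for an image/text encoder output
    decoupler = VirtualAttributeDecoupler()
    reasoner = AttributeGraphReasoning()
    id_head = nn.Linear(8 * 256, 100)           # 100 hypothetical pedestrian identities

    attrs = reasoner(decoupler(backbone_feat))  # (4, 8, 256)
    logits = id_head(attrs.flatten(1))
    loss = F.cross_entropy(logits, torch.randint(0, 100, (4,)))
    loss.backward()
```

In the paper's setting, an analogous pipeline would presumably run on both the image and text branches so the virtual attributes align across modalities; here a single random feature stands in for either encoder's output.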
