
Deep Priority Local Aggregated Hashing (cited by: 2)
Abstract: Existing deep supervised hashing methods fail to make effective use of the extracted convolutional features and ignore the role that the distribution of similarity information between data pairs plays in the hash network, which leaves the learned hash codes insufficiently discriminative. To address this problem, a novel deep supervised hashing method called Deep Priority Local Aggregated Hashing (DPLAH) is proposed. DPLAH embeds the vector of locally aggregated descriptors (VLAD) into the hash network to improve the network's ability to represent data of the same class, and it reduces the impact of similarity-distribution skew on the hash network by imposing different weights on the data pairs. The DPLAH experiments are carried out with the PyTorch deep learning framework: the convolutional features output by a ResNet18 backbone are aggregated by a NetVLAD layer, and hash codes are learned from the aggregated features (a minimal sketch of this pipeline is given after the keywords below). Image retrieval experiments on the CIFAR-10 and NUS-WIDE datasets show that the mean average precision (MAP) of DPLAH is 11 percentage points higher than the best results of non-deep hashing algorithms based on hand-crafted features and convolutional neural network features, and 2 percentage points higher than that of the asymmetric deep supervised hashing method.
Authors: LONG Xianzhong (龙显忠), CHENG Cheng (程成), LI Yun (李云) (School of Computer Science & Technology, Nanjing University of Posts and Telecommunications, Nanjing 210023, China; Key Laboratory of Jiangsu Big Data Security and Intelligent Processing, Nanjing 210023, China)
Source: Journal of Hunan University (Natural Sciences), 2021, No. 6, pp. 58-66 (9 pages); indexed by EI, CAS, CSCD, and the Peking University Core list
Funding: National Natural Science Foundation of China (61906098, 61772284); National Key Research and Development Program of China (2018YFB1003702)
Keywords: deep hash learning; convolutional neural network; image retrieval; vector of locally aggregated descriptors (VLAD)
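
The pipeline described in the abstract (ResNet18 convolutional features, aggregated by a NetVLAD layer, followed by a hash layer trained with a weighted pairwise similarity loss) can be sketched roughly as follows in PyTorch. This is a minimal illustration only: the simplified NetVLAD layer, the number of clusters, the code length, and the class-balanced pair weighting used here are assumptions for the sketch, not the configuration or loss actually used in the paper.

# Minimal sketch of a DPLAH-style pipeline (assumptions noted above):
# ResNet18 conv features -> NetVLAD aggregation -> relaxed hash codes,
# trained with a weighted pairwise similarity loss.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class NetVLAD(nn.Module):
    """Simplified NetVLAD layer: soft-assigns local features to learned
    centroids and aggregates the residuals (intra- and L2-normalized)."""

    def __init__(self, num_clusters=8, dim=512):
        super().__init__()
        self.assign = nn.Conv2d(dim, num_clusters, kernel_size=1)  # soft-assignment logits
        self.centroids = nn.Parameter(torch.randn(num_clusters, dim))

    def forward(self, x):                                    # x: (B, C, H, W)
        soft = F.softmax(self.assign(x).flatten(2), dim=1)   # (B, K, H*W)
        feats = x.flatten(2)                                 # (B, C, H*W)
        # residuals of every local feature to every centroid: (B, K, C, H*W)
        resid = feats.unsqueeze(1) - self.centroids[None, :, :, None]
        vlad = (resid * soft.unsqueeze(2)).sum(-1)           # (B, K, C)
        vlad = F.normalize(vlad, dim=2).flatten(1)           # intra-normalization
        return F.normalize(vlad, dim=1)                      # (B, K*C)


class DPLAHNet(nn.Module):
    """ResNet18 backbone -> NetVLAD -> linear hash layer with tanh relaxation."""

    def __init__(self, code_length=48, num_clusters=8):
        super().__init__()
        backbone = torchvision.models.resnet18()             # pretrained weights optional
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # keep conv maps
        self.vlad = NetVLAD(num_clusters=num_clusters, dim=512)
        self.hash_layer = nn.Linear(num_clusters * 512, code_length)

    def forward(self, x):
        return torch.tanh(self.hash_layer(self.vlad(self.features(x))))


def weighted_pairwise_loss(codes, labels):
    """Pairwise loss whose per-pair weights up-weight the rarer kind of pair
    (similar vs. dissimilar); a generic stand-in for the weighting the
    abstract describes for countering similarity-distribution skew."""
    sim = (labels.float() @ labels.float().t() > 0).float()  # 1 if a pair shares a label
    inner = codes @ codes.t() / codes.shape[1]               # normalized inner product in [-1, 1]
    n_pos, n_neg = sim.sum(), (1 - sim).sum()
    total = n_pos + n_neg
    weight = torch.where(sim > 0,
                         total / (2 * n_pos.clamp(min=1)),
                         total / (2 * n_neg.clamp(min=1)))
    return (weight * (inner - (2 * sim - 1)) ** 2).mean()


if __name__ == "__main__":
    net = DPLAHNet(code_length=48)
    images = torch.randn(4, 3, 224, 224)                     # toy batch
    labels = F.one_hot(torch.tensor([0, 0, 1, 2]), 10)       # toy single-label targets
    codes = net(images)                                      # (4, 48), values in (-1, 1)
    loss = weighted_pairwise_loss(codes, labels)
    print(codes.shape, loss.item())
    binary = torch.sign(codes)                               # binary codes used at retrieval time

At retrieval time the relaxed codes are binarized with the sign function, as in the last line above; how the paper balances the relaxation against quantization error is not stated in the abstract.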

