Abstract
The conventional bag-of-visual-words (BoVW) approach suffers from low time efficiency, large memory consumption, and the synonymy and polysemy of visual words; moreover, it may fail to return satisfactory results when the object region is inaccurately specified or when the information it contains is insufficient to express the user's retrieval intent. To address these problems, this paper proposes an object retrieval method based on randomized visual vocabularies and contextual semantic information. First, Exact Euclidean Locality Sensitive Hashing (E2LSH) is applied to cluster local feature points, generating a group of dynamically extensible randomized visual vocabularies. Then, an object model incorporating contextual semantic information is constructed from the query object and the visual elements surrounding it. Finally, the Kullback-Leibler (K-L) divergence is introduced as the similarity measure to complete object retrieval. Experimental results show that the proposed method improves the distinguishability of target objects and effectively enhances retrieval performance.
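The first step described above, clustering local features with E2LSH to obtain a dynamically extensible randomized vocabulary, can be illustrated with a short sketch. This is not the authors' implementation: the p-stable hash h(v) = floor((a·v + b)/w), the 128-dimensional SIFT-like descriptors, and the parameters (k = 8 projections, bucket width w = 4.0) are assumptions for illustration, and only a single hash table is shown, whereas the paper builds a group of such randomized dictionaries.

```python
# Minimal sketch (not the paper's code) of an E2LSH-style quantizer whose
# hash buckets act as visual words in a randomized, dynamically growing
# vocabulary. Descriptor dimension and LSH parameters are illustrative.
import numpy as np

class RandomizedVocabulary:
    def __init__(self, dim=128, k=8, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=(k, dim))     # Gaussian (p-stable) projections
        self.b = rng.uniform(0.0, w, size=k)   # random offsets in [0, w)
        self.w = w
        self.words = {}                        # bucket key -> visual-word index

    def _bucket(self, v):
        # E2LSH hash: h(v) = floor((a . v + b) / w), one value per projection.
        return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))

    def word_id(self, v):
        # Unseen buckets become new words, so the dictionary can grow online.
        key = self._bucket(v)
        if key not in self.words:
            self.words[key] = len(self.words)
        return self.words[key]

    def histogram(self, descriptors):
        # Bag-of-words histogram over the current vocabulary, L1-normalized.
        ids = [self.word_id(v) for v in descriptors]
        counts = np.bincount(ids, minlength=len(self.words)).astype(float)
        return counts / counts.sum()

if __name__ == "__main__":
    vocab = RandomizedVocabulary()
    descriptors = np.random.default_rng(1).normal(size=(500, 128))  # stand-in for SIFT
    hist = vocab.histogram(descriptors)
    print(len(vocab.words), "visual words; histogram sums to", hist.sum())
```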
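The remaining steps, building a context-aware object model from the query region and its surrounding visual elements and comparing models with the Kullback-Leibler divergence D_KL(P‖Q) = Σ_i p_i log(p_i / q_i), can be sketched as follows. The linear mixing weight for context words and the smoothing constant are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of the matching side: a query model that blends the word
# histogram of the object region with that of its surrounding context, and
# a smoothed K-L divergence used as the (asymmetric) similarity score.
# The mixing weight alpha and the smoothing eps are illustrative choices,
# not taken from the paper.
import numpy as np

def context_model(object_hist, context_hist, alpha=0.7):
    """Blend the object's histogram with its surrounding context histogram."""
    p = alpha * np.asarray(object_hist) + (1.0 - alpha) * np.asarray(context_hist)
    return p / p.sum()

def kl_divergence(p, q, eps=1e-10):
    """D_KL(P || Q) = sum_i p_i * log(p_i / q_i), smoothed to avoid log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def rank_database(query_model, database_hists):
    """Smaller divergence means more similar; return indices sorted best-first."""
    scores = [kl_divergence(query_model, h) for h in database_hists]
    return np.argsort(scores)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    obj, ctx = rng.random(1000), rng.random(1000)          # toy word histograms
    query = context_model(obj / obj.sum(), ctx / ctx.sum())
    db = [h / h.sum() for h in (rng.random(1000) for _ in range(5))]
    print("ranking:", rank_database(query, db))
```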
Source
《电子学报》
EI
CAS
CSCD
Peking University Core Journal (北大核心)
2012, No. 12, pp. 2472-2480 (9 pages)
Acta Electronica Sinica
Funding
National Natural Science Foundation of China (No. 60872142)
All-Army Military Science Postgraduate Research Funding Project
Keywords
object retrieval
contextual semantic information
exact Euclidean locality sensitive hashing
randomized visual vocabularies
Kullback-Leibler divergence