
Remote sensing image retrieval combining discriminant correlation analysis and feature fusion (结合判别相关分析与特征融合的遥感图像检索). Cited by: 4
Abstract  Objective: In high-resolution remote sensing image retrieval, a single feature can hardly describe the complex content of remote sensing images accurately. To make full use of the learned parameters of different convolutional neural networks (CNNs) and improve the feature representation of remote sensing images, a method based on discriminant correlation analysis (DCA) is proposed to fuse the high-level features of different CNNs. Method: The high-level features are treated as special convolutional-layer features. To better preserve the original spatial information of the images, the high-level features are extracted at the original input size, and max pooling is then applied to them to obtain salient features. The between-class scatter matrices of the high-level features are computed, and DCA is applied to strengthen the relationship among features of the same class and emphasize the differences between features of different classes, thereby improving feature discriminability. Concatenation and summation are adopted to fuse the different features, and the fused features are used to retrieve high-resolution remote sensing images. Result: Experiments on the UC-Merced, RSSCN7, and WHU-RS19 datasets show that, compared with single high-level features, the retrieval accuracy and retrieval time of most fused features are effectively improved; the mean average precision (mAP) on the three datasets increases by 10.4%-14.1%, 5.7%-9.9%, and 5.9%-17.6%, respectively. The improvement is more pronounced when features with similar retrieval capabilities are fused. On the UC-Merced dataset, the average normalized modified retrieval rank (ANMRR) and mAP of the fused features reach 13.21% and 84.06%, which compares favorably with several recent remote sensing image retrieval methods. Conclusion: The proposed DCA-based feature fusion method effectively combines the salient information of different CNN high-level features; it reduces feature redundancy while enhancing feature representation, thereby improving the retrieval performance of remote sensing images.

Objective  With the rapid development of remote sensing technology, numerous high-resolution remote sensing images have become available. As a result, the effective retrieval of remote sensing images has become a challenging research topic. Feature extraction is key to determining the retrieval performance of high-resolution remote sensing image retrieval tasks. Traditional feature extraction methods are mainly based on handcrafted features, whereas such shallow features are easily affected by artificial intervention. Convolutional neural networks (CNNs) can learn feature representations automatically and are thus well suited to high-resolution remote sensing images with complex content. However, the parameters of CNNs are difficult to train fully because of the small scale of currently available public remote sensing datasets. Consequently, transfer learning with CNNs has attracted much attention. CNNs pretrained on large-scale datasets have good generalization ability, and their parameters can be transferred effectively to small-scale data. Therefore, extracting CNN features on the basis of transfer learning has become an effective method in the field of remote sensing image retrieval. Given the abundant and complex visual content of high-resolution remote sensing images, it is difficult to express the content of remote sensing images accurately with a single feature. Thus, feature fusion is a useful way to improve the feature representation of remote sensing images. To make full use of the learned parameters of different CNNs in representing the content of remote sensing images, a method based on discriminant correlation analysis (DCA) is proposed to fuse the high-level features of different CNNs.

Method  First, CNN parameters from VGGM (visual geometry group, medium), VGG16 (visual geometry group, 16 layers), GoogLeNet, and ResNet50 are transferred to high-resolution remote sensing images, and the high-level features are treated as special convolutional features. To preserve the original spatial information of the image, the high-level features are extracted at the original input image size, and the three-dimensional tensor output is retained. Max pooling is then applied to the high-level features to extract salient features. Second, DCA is adopted to enhance the feature representation. DCA was the first method to incorporate class structure into feature-level fusion and has low computational complexity. To maximize the correlation of corresponding features across the two feature sets and, at the same time, decorrelate features that belong to different classes within each feature set, the between-class scatter matrices of the two sets of high-level features are calculated, and matrix diagonalization and singular value decomposition are applied to transform the features. The transformation matrix contains the important eigenvectors of the between-class scatter matrix, and the dimension of the transformed features is reduced accordingly. Thus, the transformed feature vectors have strong discriminative power and low dimensionality. Lastly, concatenation and summation are used to fuse the transformed feature vectors, and the fused features are normalized via Gaussian normalization. The similarities between the query and the dataset features are computed with the Euclidean distance, and the retrieval results are returned according to the ranking of similarities.

Result  Experimental results on the UC-Merced, RSSCN7, and WHU-RS19 datasets show that the retrieval accuracy and retrieval time of most fused features are effectively improved compared with single high-level features; the mean average precision (mAP) of the fused features is improved by 10.4%-14.1%, 5.7%-9.9%, and 5.9%-17.6%, respectively. The retrieval results of the fused features obtained with concatenation are better than those obtained with summation. Multifeature fusion experiments show that the best result on the UC-Merced dataset is obtained by fusing four features, whereas the best results on the RSSCN7 and WHU-RS19 datasets are obtained by fusing three features. This finding indicates that a larger number of fused features does not translate into better performance; selecting appropriate features is crucial for feature fusion. In particular, when the individual features have good representation ability and similar retrieval capabilities, their fusion can achieve good retrieval performance. Compared with other state-of-the-art approaches, the average normalized modified retrieval rank (ANMRR) and mAP of the proposed fused feature on the UC-Merced dataset reach 0.1321 and 84.06%, respectively, demonstrating that our method outperforms these approaches.

Conclusion  The proposed feature fusion method based on discriminant correlation analysis combines the salient information of different high-level CNN features. It reduces feature redundancy while improving feature discrimination. Features with comparable retrieval capabilities can be fused well by the proposed method, thereby effectively improving the retrieval performance of high-resolution remote sensing images.
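For reference, the DCA formulation that the Method paragraph relies on can be summarized as follows; the notation ($\Phi_{bx}$, $W_{bx}$, $X^{*}$) is chosen here for illustration and is not taken from the paper. For a feature set $X$ with $c$ classes ($n_i$ samples in class $i$, class means $\bar{x}_i$, overall mean $\bar{x}$), the between-class scatter matrix is

$$S_{bx} = \sum_{i=1}^{c} n_i (\bar{x}_i - \bar{x})(\bar{x}_i - \bar{x})^{\mathrm{T}} = \Phi_{bx}\Phi_{bx}^{\mathrm{T}}, \qquad \Phi_{bx} = \bigl[\sqrt{n_1}(\bar{x}_1 - \bar{x}), \ldots, \sqrt{n_c}(\bar{x}_c - \bar{x})\bigr].$$

A transform $W_{bx}$ is obtained by diagonalizing the small $c \times c$ matrix $\Phi_{bx}^{\mathrm{T}}\Phi_{bx}$ and keeping only its significant eigenvectors (which is what reduces the feature dimension), such that $W_{bx}^{\mathrm{T}} S_{bx} W_{bx} = I$; both feature sets are then projected, $X' = W_{bx}^{\mathrm{T}} X$ and $Y' = W_{by}^{\mathrm{T}} Y$. The between-set covariance of the projected features is diagonalized by singular value decomposition,

$$S'_{xy} = X' Y'^{\mathrm{T}} = U \Sigma V^{\mathrm{T}}, \qquad X^{*} = \Sigma^{-1/2} U^{\mathrm{T}} X', \quad Y^{*} = \Sigma^{-1/2} V^{\mathrm{T}} Y',$$

so that corresponding features across the two sets are maximally correlated while features from different classes remain decorrelated. Fusion is then either concatenation, $Z_1 = \bigl[\begin{smallmatrix} X^{*} \\ Y^{*} \end{smallmatrix}\bigr]$, or summation, $Z_2 = X^{*} + Y^{*}$.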
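As a rough, self-contained illustration of how these steps fit together (global max pooling of the high-level feature maps, the DCA transform, concatenation or summation fusion with Gaussian normalization, and Euclidean-distance retrieval), the following NumPy sketch can be read alongside the Method paragraph. The function names, array shapes, and numerical thresholds are assumptions made for illustration, not the authors' implementation.

import numpy as np

def max_pool(feature_maps):
    """(n_images, H, W, C) high-level feature maps -> (n_images, C) salient features."""
    return feature_maps.max(axis=(1, 2))

def dca(X, Y, labels):
    """Discriminant correlation analysis on two feature sets (rows are samples)."""
    def between_class_whiten(F):
        classes = np.unique(labels)
        mu = F.mean(axis=0)
        # Rows of Phi are sqrt(n_i) * (class mean - overall mean), so S_b = Phi^T Phi.
        Phi = np.stack([np.sqrt((labels == c).sum()) * (F[labels == c].mean(axis=0) - mu)
                        for c in classes])
        # Diagonalize S_b through the small (c x c) matrix Phi Phi^T and drop null directions.
        evals, evecs = np.linalg.eigh(Phi @ Phi.T)
        evecs, evals = evecs[:, evals > 1e-10], evals[evals > 1e-10]
        W = Phi.T @ evecs / evals          # W^T S_b W = I (between-class whitening)
        return F @ W
    Xp, Yp = between_class_whiten(X), between_class_whiten(Y)
    # SVD of the between-set covariance: corresponding columns of the transformed
    # sets become maximally correlated across the two feature sets.
    U, s, Vt = np.linalg.svd(Xp.T @ Yp, full_matrices=False)
    U, s, Vt = U[:, s > 1e-10], s[s > 1e-10], Vt[s > 1e-10]
    return Xp @ U / np.sqrt(s), Yp @ Vt.T / np.sqrt(s)

def fuse(Xs, Ys, method="concat"):
    """Fuse DCA-transformed sets by concatenation or element-wise summation."""
    Z = np.concatenate([Xs, Ys], axis=1) if method == "concat" else Xs + Ys
    return (Z - Z.mean(axis=0)) / (Z.std(axis=0) + 1e-12)   # Gaussian normalization

def retrieve(query, database, top_k=10):
    """Indices of the top_k database images closest to the query in Euclidean distance."""
    return np.argsort(np.linalg.norm(database - query, axis=1))[:top_k]

In practice the Gaussian-normalization statistics would be estimated on the database features and reused for query features; they are folded into fuse() here only for brevity.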
Authors: Ge Yun (葛芸), Ma Lin (马琳), Chu Jun (储珺) (School of Software, Nanchang Hangkong University, Nanchang 330063, China)
Source: Journal of Image and Graphics (《中国图象图形学报》; CSCD; Peking University core journal), 2020, No. 12, pp. 2665-2676 (12 pages)
Funding: National Natural Science Foundation of China (41801288, 61663031, 41261091, 61762067, 61866028); Natural Science Foundation of Jiangxi Province (20202BAB212011); Doctoral Research Start-up Fund of Nanchang Hangkong University (EA201920276).
Keywords: remote sensing image retrieval; convolutional neural network (CNN); high-level feature fusion; discriminant correlation analysis (DCA); max pooling
