Remote Sensing Image Super-resolution by Using Sparse Dictionary and Residual Dictionary
(一种基于稀疏字典和残余字典的遥感图像超分辨重建算法)

Cited by: 5
Abstract To address the super-resolution reconstruction of low-resolution remote sensing images, a super-resolution (SR) method based on a sparse dictionary and structural self-similarity is proposed. First, sparse dictionary learning is introduced to improve the structure of the dictionary; the resulting dictionaries exhibit good regularity and flexibility. In addition, to better reconstruct the high-resolution (HR) image, two dictionary pairs are learned from the training database: a primitive sparse dictionary pair and a residual sparse dictionary pair. The primitive sparse dictionary pair is used to reconstruct an initial HR remote sensing image from a single low-resolution (LR) input. Because the initial HR image loses some detail relative to the corresponding original HR image, the residual sparse dictionary pair is then learned to reconstruct that residual information. Finally, since structural self-similarity is widespread in remote sensing images, this property is exploited to correct the reconstructed image with the non-local means (NLM) method. Experimental results show that, compared with conventional algorithms, the proposed algorithm improves image quality both subjectively and objectively, reaching a PSNR of 24.6905 and an SSIM of 0.7363.
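The two-stage reconstruction described in the abstract (sparse-code the LR patch against the primitive LR dictionary, synthesize an initial HR patch with the coupled HR dictionary, then code the leftover LR residual with the residual dictionary pair) can be sketched as below. This is a minimal illustration, not the authors' implementation: the greedy OMP solver, the dictionary shapes, and all function names are assumptions, and the final NLM correction step is omitted.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with at most k atoms of D."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit coefficients on the selected support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    alpha = np.zeros(D.shape[1])
    alpha[support] = coef
    return alpha

def two_stage_sr(y_lr, Dl0, Dh0, Dl_res, Dh_res, k=3):
    """Reconstruct an HR patch from an LR patch via primitive + residual dictionary pairs."""
    # Stage 1: primitive pair -> initial HR estimate
    a0 = omp(Dl0, y_lr, k)
    x0 = Dh0 @ a0
    # Stage 2: code the LR residual with the residual pair to recover lost detail
    r = y_lr - Dl0 @ a0
    a1 = omp(Dl_res, r, k)
    return x0 + Dh_res @ a1
```

In practice both dictionary pairs would be learned jointly from co-registered LR/HR patch databases (e.g. by K-SVD-style training), and the patch estimates would be aggregated and then refined with the NLM correction.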
Source: Journal of Sichuan University (Engineering Science Edition) (《四川大学学报(工程科学版)》; EI, CAS, CSCD, Peking University Core), 2015, No. 3, pp. 71-76 (6 pages).
Funding: Open Fund of the Key Laboratory of Digital Earth Science, Chinese Academy of Sciences (2012LDE016); Open Fund of the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University (12R03); Specialized Research Fund for the Doctoral Program of Higher Education (20130181120005); National Natural Science Foundation of China (61271330, 6141101009); Sichuan Science and Technology Support Program (2014GZ0005); China Postdoctoral Science Foundation (2014M552357); Open Fund of the Jiangsu Key Laboratory of Image Processing and Image Communication, Nanjing University of Posts and Telecommunications (LBEK2013001).
Keywords: remote sensing images; super-resolution; dictionary learning; sparse representation