
Multi-focus image fusion based on block matching in 3D transform domain (Cited by: 5)

Abstract: Fusion methods based on multi-scale transforms have become the mainstream of pixel-level image fusion. However, most of these methods cannot fully exploit the spatial-domain information of the source images, which leads to image degradation. This paper presents a fusion framework based on block matching and the 3D (BM3D) multi-scale transform. The algorithm first divides the image into blocks and groups these 2D image blocks into 3D arrays according to their similarity. It then applies a 3D transform, consisting of a 2D multi-scale transform and a 1D transform, to convert the arrays into transform coefficients, and the resulting low- and high-frequency coefficients are fused by different fusion rules. The final fused image is obtained from the series of fused 3D image-block groups after the inverse transform, using an aggregation process. In the experimental part, we comparatively analyze several existing algorithms and the use of different transforms, e.g. the non-subsampled contourlet transform (NSCT) and the non-subsampled shearlet transform (NSST), in the 3D transform step. Experimental results show that the proposed fusion framework not only improves the subjective visual effect but also achieves better objective evaluation criteria than state-of-the-art methods.
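The abstract above describes the full pipeline: block matching, grouping of similar 2D blocks into 3D arrays, a 2D multi-scale plus 1D transform, coefficient fusion, inverse transform, and aggregation. The sketch below illustrates this flow on two registered grayscale source images. It is a minimal illustration under stated assumptions, not the authors' implementation: the 2D multi-scale transform (NSCT/NSST in the paper) is replaced by a plain 2D DCT, the 1D transform along the grouping dimension is a 1D DCT, and the fusion rules (max-absolute for high-frequency coefficients, averaging for the lowest coefficient) are simple stand-ins for the rules used in the paper.

```python
# Minimal sketch of the BM3D-style fusion pipeline described in the abstract.
# Assumptions (not from the paper): plain DCTs replace the multi-scale transforms,
# and simple max-abs / averaging rules replace the paper's fusion rules.
import numpy as np
from scipy.fft import dctn, idctn, dct, idct

BLOCK, STEP, GROUP = 8, 4, 8   # block size, sliding step, blocks per group

def extract_blocks(img):
    """Slide a BLOCK x BLOCK window over a 2D image; return blocks and positions."""
    h, w = img.shape
    blocks, pos = [], []
    for i in range(0, h - BLOCK + 1, STEP):
        for j in range(0, w - BLOCK + 1, STEP):
            blocks.append(img[i:i+BLOCK, j:j+BLOCK])
            pos.append((i, j))
    return np.array(blocks), pos

def group_similar(blocks, ref_idx):
    """Indices of the GROUP blocks most similar (L2 distance) to a reference block."""
    d = np.sum((blocks - blocks[ref_idx]) ** 2, axis=(1, 2))
    return np.argsort(d)[:GROUP]

def fuse_groups(ga, gb):
    """3D-transform both block groups, fuse coefficients, inverse transform."""
    # 2D transform on each block, then 1D transform across the group dimension
    ta = dct(dctn(ga, axes=(1, 2), norm='ortho'), axis=0, norm='ortho')
    tb = dct(dctn(gb, axes=(1, 2), norm='ortho'), axis=0, norm='ortho')
    fused = np.where(np.abs(ta) >= np.abs(tb), ta, tb)   # high frequencies: max-abs
    fused[0, 0, 0] = 0.5 * (ta[0, 0, 0] + tb[0, 0, 0])   # lowest coefficient: average
    return idctn(idct(fused, axis=0, norm='ortho'), axes=(1, 2), norm='ortho')

def fuse(img_a, img_b):
    """Fuse two registered, same-size, single-channel multi-focus images."""
    img_a = np.asarray(img_a, dtype=float)
    img_b = np.asarray(img_b, dtype=float)
    blocks_a, pos = extract_blocks(img_a)
    blocks_b, _ = extract_blocks(img_b)
    acc = np.zeros_like(img_a)
    weight = np.zeros_like(img_a)
    for ref in range(len(pos)):
        idx = group_similar(blocks_a, ref)            # match blocks on one source image
        fused_group = fuse_groups(blocks_a[idx], blocks_b[idx])
        for k, b in zip(idx, fused_group):            # aggregate overlapping estimates
            i, j = pos[k]
            acc[i:i+BLOCK, j:j+BLOCK] += b
            weight[i:i+BLOCK, j:j+BLOCK] += 1.0
    return acc / np.maximum(weight, 1e-8)
```

Usage would be a single call such as `fused = fuse(img_near_focus, img_far_focus)` on two pre-registered grayscale arrays of equal size; the aggregation step averages the overlapping block estimates, mirroring the weighted aggregation used in BM3D-style processing.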
Source: Journal of Systems Engineering and Electronics (系统工程与电子技术(英文版)), SCIE, EI, CSCD, 2018, No. 2, pp. 415-428 (14 pages)
Funding: supported by the National Natural Science Foundation of China (61572063, 61401308); the Fundamental Research Funds for the Central Universities (2016YJS039); the Natural Science Foundation of Hebei Province (F2016201142, F2016201187); the Natural Social Foundation of Hebei Province (HB15TQ015); the Science Research Project of Hebei Province (QN2016085, ZC2016040); the Natural Science Foundation of Hebei University (2014-303)
Keywords: image fusion; block matching; 3D transform; block-matching and 3D (BM3D); non-subsampled shearlet transform (NSST)
Related literature: references: 1; secondary references: 6; co-citing literature: 36; co-cited literature: 23; citing literature: 5; secondary citing literature: 7
