Abstract
To improve detail preservation and target-information integrity in multi-sensor fused images, an image fusion method for heterogeneous sensors based on the non-subsampled contourlet transform (NSCT) and the GoogLeNet neural network model is proposed. First, the source images from the different sensors, i.e., infrared and visible images, are each decomposed by NSCT into one low-frequency sub-band and a series of high-frequency sub-bands. Then, the high-frequency sub-bands are fused with a max-regional-energy selection strategy, while the low-frequency sub-bands are fed into the GoogLeNet model to extract feature maps, from which the fusion weight matrices are computed adaptively. Next, the fused low-frequency sub-band is obtained by weighted summation. Finally, the fused image is reconstructed by the inverse NSCT. Experimental results demonstrate that the proposed method improves the visual quality of the fused image and achieves better performance in both edge retention and mutual information.
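The max-regional-energy rule for the high-frequency sub-bands can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes a square sliding window (size 3 here) over the squared coefficients, and selects, per pixel, the coefficient from whichever source sub-band has the larger regional energy.

```python
import numpy as np

def regional_energy(coeffs: np.ndarray, window: int = 3) -> np.ndarray:
    """Sum of squared coefficients over a sliding window (regional energy)."""
    pad = window // 2
    sq = np.pad(coeffs.astype(np.float64) ** 2, pad, mode="reflect")
    # Box filter via explicit accumulation over the window offsets.
    energy = np.zeros(coeffs.shape, dtype=np.float64)
    for dy in range(window):
        for dx in range(window):
            energy += sq[dy:dy + coeffs.shape[0], dx:dx + coeffs.shape[1]]
    return energy

def fuse_highfreq_max_energy(hf_a: np.ndarray, hf_b: np.ndarray,
                             window: int = 3) -> np.ndarray:
    """Per pixel, keep the coefficient from the sub-band with larger regional energy."""
    mask = regional_energy(hf_a, window) >= regional_energy(hf_b, window)
    return np.where(mask, hf_a, hf_b)
```

In a full pipeline this rule would be applied to every directional high-frequency sub-band produced by the NSCT decomposition of the two source images.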
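The abstract states that fusion weight matrices are computed adaptively from GoogLeNet feature maps but does not specify the mapping. A common choice in deep-feature fusion work, shown here as a hedged sketch, is to take the l1-norm across feature channels as a per-pixel activity measure and normalize the two activity maps into weights; the feature-map arrays here are placeholders standing in for GoogLeNet activations.

```python
import numpy as np

def adaptive_weights(feat_a: np.ndarray, feat_b: np.ndarray,
                     eps: float = 1e-12):
    """Per-pixel activity via the channel-wise l1-norm, normalized to weights.

    feat_a, feat_b: arrays of shape (C, H, W) standing in for network features.
    """
    act_a = np.abs(feat_a).sum(axis=0)          # (H, W) activity map
    act_b = np.abs(feat_b).sum(axis=0)
    w_a = act_a / (act_a + act_b + eps)         # weights sum to ~1 per pixel
    return w_a, 1.0 - w_a

def fuse_lowfreq(lf_a: np.ndarray, lf_b: np.ndarray,
                 feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Weighted summation of the two low-frequency sub-bands."""
    w_a, w_b = adaptive_weights(feat_a, feat_b)
    return w_a * lf_a + w_b * lf_b
```

In the paper's setting, `feat_a` and `feat_b` would come from running each low-frequency sub-band through GoogLeNet; any such choice of layer and norm is an assumption here, not stated in the abstract.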
Funding
Supported by the National Natural Science Foundation of China (No. 61301211) and the China Scholarship Council (No. 201906835017).