Abstract
Existing infrared and visible image fusion methods tend to ignore illumination imbalance and suffer from low contrast and loss of texture detail. To address these problems, this paper proposes a fusion method for infrared and visible images based on illumination perception and a dense network. First, an illumination probability is estimated from the visible image and converted into an illumination-aware weight that guides network training. A feature extraction and information measurement module then computes an adaptive information-retention degree for each source image, which maintains adaptive similarity between the fusion result and the sources. Together, the illumination-aware loss and the similarity-constraint loss enable the model to generate, under all-weather conditions, fused images that contain salient targets and rich texture detail in structure, contrast, and brightness. Subjective and objective evaluations were carried out on two public datasets, TNO and MSRS. The experimental results show that the proposed method compensates for illumination imbalance, effectively retaining more texture detail from the visible image while preserving more infrared targets.
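The abstract names two training losses but gives no implementation details. Below is a minimal sketch of how they might look, assuming a PIAFusion-style formulation, single-channel grayscale tensors of shape (N, 1, H, W), and PyTorch; every function and variable name here is a hypothetical illustration, not the authors' code.

```python
# Hypothetical sketch, not the authors' released implementation.
import torch
import torch.nn.functional as F

def illumination_aware_loss(fused, ir, vi, p_day, p_night, eps=1e-8):
    """Illumination-aware loss: day/night probabilities estimated from
    the visible image weight the intensity terms, so the fused image is
    pulled toward whichever source the lighting makes more reliable."""
    w_vi = p_day / (p_day + p_night + eps)    # daytime: trust visible
    w_ir = p_night / (p_day + p_night + eps)  # nighttime: trust infrared
    l_vi = F.l1_loss(fused, vi, reduction="none").mean(dim=(1, 2, 3))
    l_ir = F.l1_loss(fused, ir, reduction="none").mean(dim=(1, 2, 3))
    return (w_vi * l_vi + w_ir * l_ir).mean()

def _grad_mag(img):
    """Sobel gradient magnitude as a simple information measure."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device, dtype=img.dtype).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    return F.conv2d(img, kx, padding=1).abs() + F.conv2d(img, ky, padding=1).abs()

def similarity_constraint_loss(fused, ir, vi, eps=1e-8):
    """Similarity-constraint loss: each source's gradient term is
    weighted by its measured information content, one way to realize
    the adaptive information retention described in the abstract."""
    g_ir, g_vi, g_f = _grad_mag(ir), _grad_mag(vi), _grad_mag(fused)
    s_ir = g_ir.mean(dim=(1, 2, 3))           # per-sample information score
    s_vi = g_vi.mean(dim=(1, 2, 3))
    w_ir = s_ir / (s_ir + s_vi + eps)         # adaptive retention degrees
    w_vi = 1.0 - w_ir
    l_ir = F.l1_loss(g_f, g_ir, reduction="none").mean(dim=(1, 2, 3))
    l_vi = F.l1_loss(g_f, g_vi, reduction="none").mean(dim=(1, 2, 3))
    return (w_ir * l_ir + w_vi * l_vi).mean()
```

Under this reading, the first loss handles the day/night imbalance at the intensity level, while the second keeps the fused gradients close to whichever source carries more structure; the total training loss would presumably be a weighted sum of the two.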
Authors
ZHANG Jie; XU Guangyu; CHEN Haoyu (College of Computer Science and Engineering, Anhui University of Science and Technology, Huainan 232001, China)
Source
Journal of Hainan Normal University (Natural Science), 2024, No. 1, pp. 37-45 (9 pages)
Funding
National Natural Science Foundation of China (61471004)
Graduate Innovation Fund of Anhui University of Science and Technology (2022CX2125)