This study, grounded in the Waxman fusion method, introduces an algorithm for the fusion of visible and infrared images tailored to a two-level lighting environment, inspired by the mathematical model of the visual receptive field of rattlesnakes and the mechanism of their two-mode cells. The research is organized into three components. In the first, we design a preprocessing module that judges the ambient light intensity and divides the lighting environment into two levels: day and night. The second component proposes two distinct network structures designed specifically for daytime and nighttime images. For daytime images, where visible-light information is predominant, the ON-VIS signal and the IR-enhanced visible signal are fed into the central excitation and surrounding suppression regions, respectively, of the ON-center receptive field in the B channel. Conversely, for nighttime images, where infrared information takes precedence, the ON-IR signal and the visible-enhanced IR signal are input into the central excitation and surrounding suppression regions of the ON-center receptive field in the B channel. The outcome is a pseudo-color fused image. The third component employs five no-reference image quality assessment metrics to evaluate thirteen sets of pseudo-color images produced by fusing infrared and visible information; these images are then compared with those obtained by six other methods cited in the relevant references. The empirical results indicate that the proposed method surpasses the comparative methods in average gradient and spatial frequency. Only one or two sets of fused images underperformed the controls in standard deviation and entropy, and four sets did not perform as well as the comparison in the QAB/F index. In conclusion, the fused images generated by the proposed method show superior performance in scene detail, visual perception, and image sharpness compared with their counterparts from other methods.
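The abstract only outlines the pipeline, so the sketch below is a rough, assumption-laden illustration in Python/NumPy rather than the authors' implementation: the luminance threshold, the difference-of-Gaussians form of the ON-center field, the 0.5 enhancement weight, and the helper names (classify_lighting, on_center_response, fuse_b_channel) are all hypothetical. The average-gradient and spatial-frequency functions at the end follow the standard no-reference definitions commonly used for such comparisons.

```python
# Illustrative sketch only: the paper's exact receptive-field model, thresholds,
# and enhancement steps are not given in the abstract, so every constant below
# is an assumption, not the authors' implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

DAY_NIGHT_THRESHOLD = 0.35  # assumed mean-luminance cutoff for images scaled to [0, 1]

def classify_lighting(vis_gray: np.ndarray) -> str:
    """Segment 1 (sketch): label the scene 'day' or 'night' from mean luminance."""
    return "day" if vis_gray.mean() >= DAY_NIGHT_THRESHOLD else "night"

def on_center_response(center: np.ndarray, surround: np.ndarray,
                       sigma_c: float = 1.0, sigma_s: float = 4.0) -> np.ndarray:
    """ON-center receptive field (sketch): central excitation minus surround
    suppression, modelled here as a difference of Gaussians."""
    excit = gaussian_filter(center, sigma_c)    # small-scale central excitation
    inhib = gaussian_filter(surround, sigma_s)  # larger-scale surround suppression
    return np.clip(excit - inhib, 0.0, 1.0)

def fuse_b_channel(vis_gray: np.ndarray, ir_gray: np.ndarray) -> np.ndarray:
    """Segment 2 (sketch): route signals into the B-channel ON-center field
    according to the day/night decision."""
    if classify_lighting(vis_gray) == "day":
        center = vis_gray                                        # ON-VIS signal
        surround = np.clip(vis_gray + 0.5 * ir_gray, 0.0, 1.0)   # assumed IR-enhanced visible
    else:
        center = ir_gray                                         # ON-IR signal
        surround = np.clip(ir_gray + 0.5 * vis_gray, 0.0, 1.0)   # assumed visible-enhanced IR
    return on_center_response(center, surround)

def average_gradient(img: np.ndarray) -> float:
    """Average gradient: mean magnitude of local intensity change (sharpness proxy)."""
    gy, gx = np.gradient(img.astype(np.float64))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def spatial_frequency(img: np.ndarray) -> float:
    """Spatial frequency: RMS of row-wise and column-wise first differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

Feeding the fused B channel together with the visible and infrared intensities into the remaining color channels would yield a pseudo-color image; how the paper assigns the R and G channels is not stated in the abstract.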
Erratum to: Journal of Bionic Engineering, https://doi.org/10.1007/s42235-024-00496-5. In this article, the author's name Hongmin Zou was incorrectly written as Zongmin Zou. The original article has been corrected.
Funding: Supported by the National Natural Science Foundation of China (NSFC) under grant number 61201368, and by the Jilin Province Science and Technology Department Key Research and Development Project (grant no. 20230201043GX).