Abstract
Aiming at the problem that fire, smoke, and similar conditions at the scene of serious traffic accidents prevent detection equipment from completing the search and rescue of trapped persons, an infrared and visible image fusion method based on a Convolutional Block Attention Module-Improved Loss-Dual-Discriminator Conditional Generative Adversarial Network (CBAM-IL-DDCGAN) is proposed. First, a decoding network with an attention feature fusion module is used to restore and reconstruct the image in both the spatial and channel dimensions. Second, an adaptive weight calculation method based on gradient information is designed. Finally, test experiments on continuous frames of fused images were carried out. The experimental results show that the proposed image fusion algorithm performs well: compared with traditional algorithms and generative adversarial network algorithms, it achieves a significant improvement of more than 7% in PSNR, SSIM, and MSE. These results verify the feasibility and superiority of the fusion algorithm in complex traffic accident rescue.
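For reference, the evaluation metrics named in the abstract (MSE, PSNR, SSIM) can be computed on grayscale images as sketched below. This is a minimal, dependency-free illustration, not the paper's implementation: `global_ssim` uses a single global window (means, variances, and covariance over the whole image) rather than the standard sliding-window SSIM, and the function names and the tiny 2×2 sample images are purely illustrative.

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized grayscale images
    (lists of rows of pixel values); lower means closer to the reference."""
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb)) / n

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * math.log10(peak * peak / e)

def global_ssim(a, b, peak=255.0):
    """Simplified SSIM computed over one global window (no sliding windows),
    using the standard stabilizing constants c1 = (0.01*peak)^2, c2 = (0.03*peak)^2."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    n = len(flat_a)
    mu_a = sum(flat_a) / n
    mu_b = sum(flat_b) / n
    var_a = sum((p - mu_a) ** 2 for p in flat_a) / n
    var_b = sum((p - mu_b) ** 2 for p in flat_b) / n
    cov = sum((pa - mu_a) * (pb - mu_b) for pa, pb in zip(flat_a, flat_b)) / n
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

# Toy 2x2 images: a reference and a slightly perturbed "fused" result.
ref = [[100, 110], [120, 130]]
fused = [[102, 108], [121, 129]]
print(mse(ref, fused))                  # 2.5
print(round(psnr(ref, fused), 2))       # ~44 dB
print(round(global_ssim(ref, fused), 4))
```

Note that PSNR and SSIM reward similarity to a chosen reference image; in fusion evaluation they are typically reported against each source image (infrared and visible) or a ground-truth composite, depending on the protocol.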
Authors
JIANG Sheng
WANG Peng-lang
DENG Zhi-ji
BIE Yi-ming
JIANG Sheng; WANG Peng-lang; DENG Zhi-ji; BIE Yi-ming (College of Physics, Changchun University of Science and Technology, Changchun 130022, China; Zhejiang Dahua Technology Co., Ltd., Hangzhou 310051, China; College of Transportation, Jilin University, Changchun 130012, China)
Source
Journal of Jilin University (Engineering and Technology Edition)
EI
CAS
CSCD
Peking University Core (北大核心)
2023, No. 12, pp. 3472-3480 (9 pages)
Funding
Key R&D Project of the Jilin Province Science and Technology Development Plan (20210203214SF).
Keywords
transportation systems engineering
infrared image
visible light image
image fusion
discriminator
attention mechanism