Journal Articles
380 articles found
1. Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding (cited: 1)
Authors: Chunming Wu, Wukai Liu, Xin Ma. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 1441-1461 (21 pages).
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Keywords: image fusion; Res2Net-Transformer; infrared image; visible image
2. CAEFusion: A New Convolutional Autoencoder-Based Infrared and Visible Light Image Fusion Algorithm (cited: 1)
Authors: Chun-Ming Wu, Mei-Ling Ren, Jin Lei, Zi-Mu Jiang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 2857-2872 (16 pages).
To address the issues of incomplete information, blurred details, loss of details, and insufficient contrast in infrared and visible image fusion, an image fusion algorithm based on a convolutional autoencoder is proposed. The region attention module is meant to extract the background feature map based on the distinct properties of the background feature map and the detail feature map. A multi-scale convolution attention module is suggested to enhance the communication of feature information. At the same time, the feature transformation module is introduced to learn more robust feature representations, aiming to preserve the integrity of image information. This study uses three available datasets from TNO, FLIR, and NIR to perform thorough quantitative and qualitative trials with five additional algorithms. The methods are assessed based on four indicators: information entropy (EN), standard deviation (SD), spatial frequency (SF), and average gradient (AG). Object detection experiments were done on the M3FD dataset to further verify the algorithm's performance in comparison with five other algorithms. The algorithm's accuracy was evaluated using the mean average precision at a threshold of 0.5 (mAP@0.5) index. Comprehensive experimental findings show that CAEFusion performs well in subjective visual and objective evaluation criteria and has promising potential in downstream object detection tasks.
Keywords: image fusion; deep learning; auto-encoder (AE); infrared; visible light
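The four objective indicators named in this abstract (EN, SD, SF, AG) have widely used textbook definitions. A minimal NumPy sketch of those standard formulations (assumed here; the paper may use slightly different variants):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (EN) of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def std_dev(img):
    """Standard deviation (SD): global contrast."""
    return float(np.std(img.astype(np.float64)))

def spatial_frequency(img):
    """Spatial frequency (SF): combined row/column gradient activity."""
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean((f[:, 1:] - f[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((f[1:, :] - f[:-1, :]) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def average_gradient(img):
    """Average gradient (AG): mean local sharpness."""
    f = img.astype(np.float64)
    gx = f[:-1, 1:] - f[:-1, :-1]
    gy = f[1:, :-1] - f[:-1, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))
```

Higher values of all four metrics are conventionally read as better fusion quality (more information, contrast, and detail).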
3. Intelligent Fusion of Infrared and Visible Image Data Based on Convolutional Sparse Representation and Improved Pulse-Coupled Neural Network (cited: 3)
Authors: Jingming Xia, Yi Lu, Ling Tan, Ping Jiang. Computers, Materials & Continua (SCIE, EI), 2021, Issue 4, pp. 613-624 (12 pages).
Multi-source information can be obtained through the fusion of infrared images and visible light images, which have the characteristic of complementary information. However, existing methods of producing fusion images have disadvantages such as blurred edges, low contrast, and loss of details. Based on convolutional sparse representation and an improved pulse-coupled neural network, this paper proposes an image fusion algorithm that decomposes the source images into high-frequency and low-frequency subbands by the non-subsampled shearlet transform (NSST). The low-frequency subbands are fused by convolutional sparse representation (CSR), and the high-frequency subbands are fused by an improved pulse-coupled neural network (IPCNN) algorithm, which effectively addresses the difficulty of setting parameters in the traditional PCNN algorithm and improves the performance of sparse representation with detail injection. The results reveal that the proposed method has more advantages than existing mainstream fusion algorithms in terms of visual effects and objective indicators.
Keywords: image fusion; infrared image; visible light image; non-subsampled shearlet transform; improved PCNN; convolutional sparse representation
4. Fusion of Infrared and Visible Images Using Fuzzy Based Siamese Convolutional Network (cited: 2)
Authors: Kanika Bhalla, Deepika Koundal, Surbhi Bhatia, Mohammad Khalid Imam Rahmani, Muhammad Tahir. Computers, Materials & Continua (SCIE, EI), 2022, Issue 3, pp. 5503-5518 (16 pages).
Traditional techniques based on image fusion are arduous in integrating complementary or heterogeneous infrared (IR)/visible (VS) images. Dissimilarities in various kinds of features in these images are vital to preserve in the single fused image, so simultaneous preservation of both aspects is a challenging task. Most existing methods rely on manual extraction of features, and manually designed fusion rules result in blurry artifacts in the fused image. Therefore, this study proposes a hybrid algorithm for the integration of multiple features from two heterogeneous images. First, the two IR/VS images are fuzzified by feeding them to fuzzy sets, removing the uncertainty present in the background and the object of interest. Second, the images are learned by two parallel branches of a Siamese convolutional neural network (CNN) to extract prominent features as well as high-frequency information, producing focus maps containing source image information. Finally, the focus maps, which contain the detailed integrated information, are directly mapped with the source images via a pixelwise strategy to produce the fused image. Different parameters were used to evaluate the proposed fusion, achieving 1.008 for mutual information (MI), 0.841 for entropy (EG), 0.655 for edge information (EI), 0.652 for human perception (HP), and 0.980 for image structural similarity (ISS). Experimental results show that the proposed technique attains the best qualitative and quantitative results on 78 publicly available images in comparison to the existing discrete cosine transform (DCT), anisotropic diffusion & Karhunen-Loeve (ADKL), guided filter (GF), random walk (RW), principal component analysis (PCA), and convolutional neural network (CNN) methods.
Keywords: convolutional neural network; fuzzy sets; infrared and visible image fusion; deep learning
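The fuzzification step described above can be illustrated with a simple membership function. A sketch assuming a Gaussian membership centred on the image mean (the choice of membership function is hypothetical, not taken from the paper):

```python
import numpy as np

def fuzzify(img, spread=None):
    """Map pixel intensities to fuzzy membership grades in [0, 1]
    using a Gaussian membership function centred on the image mean.
    Pixels near the dominant intensity get grades near 1; outliers
    (uncertain background/foreground transitions) are suppressed."""
    f = img.astype(np.float64)
    mu = f.mean()
    sigma = spread if spread is not None else f.std() + 1e-8
    return np.exp(-((f - mu) ** 2) / (2 * sigma ** 2))
```

In a full pipeline the membership map would gate the source image before it enters each Siamese branch.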
5. An infrared and visible image fusion method based upon multi-scale and top-hat transforms (cited: 1)
Authors: Gui-Qing He, Qi-Qi Zhang, Hai-Xi Zhang, Jia-Qi Ji, Dan-Dan Dong, Jun Wang. Chinese Physics B (SCIE, EI, CAS, CSCD), 2018, Issue 11, pp. 340-348 (9 pages).
The high-frequency components in the traditional multi-scale transform method are approximately sparse, which can represent different information of the details. But in the low-frequency component, the coefficients around the zero value are very few, so the low-frequency image information cannot be sparsely represented. The low-frequency component contains the main energy of the image and depicts its profile, so direct fusion of the low-frequency component is not conducive to a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform can effectively extract the salient features of the low-frequency component. On the other hand, the multi-scale transform can extract high-frequency detailed information at multiple scales and from diverse directions. The combination of the two methods is conducive to the acquisition of more characteristics and more accurate fusion results. For the low-frequency component, a new type of top-hat transform is used to extract low-frequency features, and different fusion rules are applied to fuse the low-frequency features and low-frequency background; for high-frequency components, the product-of-characteristics method is used to integrate the detailed information. Experimental results show that the proposed algorithm obtains more detailed information and clearer infrared-target fusion results than the traditional multi-scale transform methods. Compared with state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and efficacious, and the time consumption is significantly reduced.
Keywords: infrared and visible image fusion; multi-scale transform; mathematical morphology; top-hat transform
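The classical white top-hat transform that this method builds on (image minus its morphological opening) can be sketched in plain NumPy; the paper's "new type of top-hat transform" is a variant of this baseline:

```python
import numpy as np

def _grey_filter(img, size, op):
    """Grayscale min/max filter over a size x size square window."""
    pad = size // 2
    f = np.pad(img.astype(np.float64), pad, mode="edge")
    stack = [f[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(size) for j in range(size)]
    return op(np.stack(stack), axis=0)

def white_top_hat(img, size=3):
    """White top-hat: image minus its morphological opening.
    Keeps small bright structures, i.e. the salient features of
    the low-frequency component."""
    eroded = _grey_filter(img, size, np.min)
    opened = _grey_filter(eroded, size, np.max)  # dilation of the erosion
    return img.astype(np.float64) - opened
```

A bright structure smaller than the structuring element is removed by the opening, so it survives in the top-hat response.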
6. Sub-Regional Infrared-Visible Image Fusion Using Multi-Scale Transformation (cited: 1)
Authors: Yexin Liu, Ben Xu, Mengmeng Zhang, Wei Li, Ran Tao. Journal of Beijing Institute of Technology (EI, CAS), 2022, Issue 6, pp. 535-550 (16 pages).
Infrared-visible image fusion plays an important role in multi-source data fusion, which has the advantage of integrating useful information from multi-source sensors. However, there are still challenges in target enhancement and visual improvement. To deal with these problems, a sub-regional infrared-visible image fusion method (SRF) is proposed. First, morphology and threshold segmentation are applied to extract targets of interest in infrared images. Second, the infrared background is reconstructed based on the extracted targets and the visible image. Finally, target and background regions are fused using a multi-scale transform. Experimental results are obtained using public data for comparison and evaluation, which demonstrate that the proposed SRF has potential benefits over other methods.
Keywords: image fusion; infrared image; visible image; multi-scale transform
7. Multiscale feature learning and attention mechanism for infrared and visible image fusion
Authors: GAO Li, LUO DeLin, WANG Song. Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 408-422 (15 pages).
Current fusion methods for infrared and visible images tend to extract features at a single scale, which results in insufficient detail and incomplete feature preservation. To address these issues, we propose an infrared and visible image fusion network based on multiscale feature learning and an attention mechanism (MsAFusion). A multiscale dilated convolution framework is employed to capture image features across various scales and broaden the perceptual scope. Furthermore, an attention network is introduced to enhance the focus on salient targets in infrared images and detailed textures in visible images. To compensate for information loss during convolution, skip connections are utilized during the image reconstruction phase. The fusion process uses a combined loss function consisting of pixel loss and gradient loss for unsupervised fusion of infrared and visible images. Extensive experiments on a dataset of electricity facilities demonstrate that our proposed method outperforms nine state-of-the-art methods in terms of visual perception and four objective evaluation metrics.
Keywords: infrared and visible images; image fusion; attention mechanism; CNN; feature extraction
8. Multi-sensors Image Fusion via NSCT and GoogLeNet (cited: 4)
Authors: LI Yangyu, WANG Caiyun, YAO Chen. Transactions of Nanjing University of Aeronautics and Astronautics (EI, CSCD), 2020, Issue S01, pp. 88-94 (7 pages).
In order to improve the detail preservation and target information integrity of different sensor fusion images, an image fusion method for different sensors based on the non-subsampled contourlet transform (NSCT) and the GoogLeNet neural network model is proposed. First, the different sensor images, i.e., infrared and visible images, are transformed by NSCT to obtain a low-frequency sub-band and a series of high-frequency sub-bands respectively. Then, the high-frequency sub-bands are fused with a max regional energy selection strategy, the low-frequency sub-bands are input into the GoogLeNet model to extract feature maps, and the fusion weight matrices are adaptively calculated from the feature maps. Next, the fused low-frequency sub-band is obtained by weighted summation. Finally, the fused image is obtained by the inverse NSCT. The experimental results demonstrate that the proposed method improves the visual effect and achieves better performance in both edge retention and mutual information.
Keywords: image fusion; non-subsampled contourlet transform; GoogLeNet neural network; infrared image; visible image
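The max-regional-energy rule used here for the high-frequency sub-bands is straightforward to sketch; the window size and tie-breaking rule below are assumptions:

```python
import numpy as np

def regional_energy(coeffs, win=3):
    """Sum of squared coefficients over a sliding win x win window."""
    pad = win // 2
    sq = np.pad(coeffs.astype(np.float64) ** 2, pad, mode="edge")
    out = np.zeros(coeffs.shape, dtype=np.float64)
    for i in range(win):
        for j in range(win):
            out += sq[i:i + coeffs.shape[0], j:j + coeffs.shape[1]]
    return out

def fuse_max_regional_energy(hf_a, hf_b, win=3):
    """Pick, per position, the high-frequency coefficient whose local
    regional energy is larger (ties go to the first input)."""
    mask = regional_energy(hf_a, win) >= regional_energy(hf_b, win)
    return np.where(mask, hf_a, hf_b)
```

Regional energy favours coefficients belonging to locally active (edge/texture) neighbourhoods rather than isolated noise.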
9. Infrared and Visible Image Fusion Algorithm Based on Target Enhancement and Rat Swarm Optimization
Authors: Hao Shuai, Sun Xizi, Ma Xu, An Beiyi, He Tian, Li Jiahao, Sun Siya. Journal of Northwestern Polytechnical University (EI, CAS, CSCD, PKU Core), 2024, Issue 4, pp. 735-743 (9 pages).
To address the target blurring and information loss in traditional infrared and visible image fusion results, an infrared and visible image fusion method based on target enhancement and rat swarm optimization, termed TERSFuse, is proposed. To reduce the loss of detail information from the source images, an infrared contrast enhancement module and a luminance-aware visible image enhancement module are constructed. The Laplacian pyramid transform is used to decompose the enhanced infrared and visible images at multiple scales, yielding the corresponding high- and low-frequency images. So that the fusion result fully retains the source image information, the infrared and visible high-frequency images are fused with a maximum-absolute-value rule, and the low-frequency images are fused by computing weight coefficients. An image reconstruction module based on rat swarm optimization is designed to adaptively assign the reconstruction weights of the high- and low-frequency images, thereby improving the visual quality of the fused image. To verify the advantages of the proposed algorithm, it is compared with seven classical fusion algorithms; experimental results show that it not only has good visual effects but also preserves the rich edge texture and contrast information of the source images.
Keywords: image fusion; infrared and visible images; multi-scale transform; rat swarm optimization
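The Laplacian-pyramid decomposition with a maximum-absolute-value rule for high frequencies, as described in this abstract, can be sketched as follows (2x2 mean/repeat resampling is a simplification of the usual Gaussian kernel, and the rat-swarm-optimized reconstruction weights are omitted):

```python
import numpy as np

def _down(img):
    """Halve resolution by 2x2 block averaging (dims must be even)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def _up(img):
    """Double resolution by pixel repetition."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def build_pyramid(img, levels=3):
    """Laplacian pyramid: [detail_0, ..., detail_{n-1}, base]."""
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels):
        low = _down(cur)
        pyr.append(cur - _up(low))  # band-pass detail at this scale
        cur = low
    pyr.append(cur)                 # low-frequency base
    return pyr

def fuse_pyramids(pa, pb):
    """Max-absolute-value rule for detail levels, average for the base."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    return fused

def reconstruct(pyr):
    cur = pyr[-1]
    for det in reversed(pyr[:-1]):
        cur = _up(cur) + det
    return cur
```

With this resampling pair the decomposition is exactly invertible, so reconstruction of an unfused pyramid returns the original image.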
10. A Hybrid-Scale Fusion Algorithm for Dual-Modal Night-Vision Images
Authors: Liu Wenqiang, Jiang Mai, Qiao Shunli, Li Hongda. Journal of Ordnance Equipment Engineering (CAS, CSCD, PKU Core), 2024, Issue 5, pp. 291-298 (8 pages).
To address the detail blurring, reduced contrast, and missing background information of traditional infrared and visible image fusion algorithms, a hybrid-scale infrared and visible fusion method is proposed. Latent low-rank representation decomposes the source images into low-rank and salient sub-bands; the non-subsampled contourlet transform then further decomposes the low-rank sub-band into low- and high-frequency components. The salient sub-bands are fused with a method based on convolutional sparse representation; the low-frequency components are fused by combining the strengths of the global mean, regional mean, and energy; and the high-frequency components are fused using a weight decision map. Experimental results on a self-built database and public databases show that, compared with five other fusion algorithms, the proposed algorithm fully inherits the effective information of the source images while giving the fused image a more balanced overall contrast, effectively improving its clarity and retaining richer image detail, achieving better subjective and objective results.
Keywords: image fusion; hybrid scale; convolutional sparse representation; infrared image; visible image
11. A Transformer-Based Multi-Modal Object Tracking Algorithm
Authors: Liu Wanjun, Liang Linlin, Qu Haicheng. Computer Engineering and Applications (CSCD, PKU Core), 2024, Issue 11, pp. 84-94 (11 pages).
Most current object tracking methods make localization decisions by fusing information from different modalities, but they suffer from insufficient information extraction, simplistic fusion, and an inability to track targets accurately in low-light scenes. To this end, a Transformer-based multi-modal tracking algorithm (Trans-RGBT) is proposed. A pseudo-Siamese network extracts features from the visible and infrared images separately and fuses them fully at the feature level. The target information from the first frame is modulated into the feature vector of the frame to be tracked, yielding a tracker specialized to that target. A Transformer then encodes and decodes the targets in the field of view: a spatial-position prediction branch predicts the target's position, and historical information is used to filter out distractor targets, giving an accurate location. Finally, a bounding-box regression network predicts the target's enclosing rectangle, achieving accurate tracking. Experiments on the latest large-scale datasets VTUAV and RGBT234 show that, compared with Siamese-based and filter-based algorithms, Trans-RGBT is more accurate, more robust, and runs at near real-time speed (22 FPS).
Keywords: multi-modal fusion; visible image; infrared image; Transformer; object tracking
12. An Autoencoder-Based Infrared and Visible Image Fusion Algorithm
Authors: Chen Haixiu, Fang Weizhi, Lu Cheng, Lu Kang, He Shanshan, Huang Zijie. Journal of Ordnance Equipment Engineering (CAS, CSCD, PKU Core), 2024, Issue 9, pp. 283-290 (8 pages).
To address insufficient feature extraction, loss of intermediate-layer information, and insufficiently clear detail in current infrared and visible image fusion, an end-to-end fusion network based on an autoencoder is proposed. The network consists of three parts: an encoder, a fusion network, and a decoder. Efficient channel attention and hybrid attention mechanisms are introduced into the encoder and fusion network, and convolutional residual network (CRN) basic blocks extract and fuse the basic features of the infrared and visible images; the fused feature maps are then fed to the decoder to reconstruct the fused image. Five currently representative methods are chosen for subjective and objective comparison. Objectively, the proposed method improves average gradient, spatial frequency, and visual fidelity by 21%, 10.2%, and 7.2% respectively over the second-best method. Subjectively, the fused images show clear targets, prominent details, and distinct contours, in line with human visual perception.
Keywords: infrared image; visible image; image fusion; attention mechanism; encoder-decoder structure
13. An Infrared and Visible Image Fusion Method Based on Feature Similarity
Authors: Qin Wei, Duan Junyang. Laser Journal (CAS, PKU Core), 2024, Issue 2, pp. 119-123 (5 pages).
A single image cannot fully describe a target and therefore has limited practical value. To overcome shortcomings of current infrared and visible image fusion methods, such as poor fusion quality, and to obtain a more satisfactory fusion result, a fusion method based on feature similarity is proposed. The current state of research on infrared and visible image fusion is first analyzed and the limitations of existing methods are identified. The infrared and visible images are then denoised and enhanced, a convolutional neural network extracts their features, and fusion is finally performed according to feature similarity. Tests of the fusion results show that the proposed method improves fusion quality and clearly outperforms other infrared and visible image fusion methods.
Keywords: convolutional neural network; infrared image; visible image; image fusion; image quality
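The abstract does not specify how feature similarity drives the fusion; one plausible reading, sketched here purely as an assumption, is to average feature maps that agree (high cosine similarity) and otherwise keep the stronger response:

```python
import numpy as np

def cosine_similarity(fa, fb, eps=1e-12):
    """Cosine similarity between two flattened feature maps."""
    a = fa.ravel().astype(np.float64)
    b = fb.ravel().astype(np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def fuse_by_similarity(fa, fb, thresh=0.8):
    """Hypothetical rule: average features that agree (similarity above
    thresh); otherwise keep the stronger (max-absolute) response."""
    if cosine_similarity(fa, fb) >= thresh:
        return 0.5 * (fa + fb)
    return np.where(np.abs(fa) >= np.abs(fb), fa, fb)
```

The threshold of 0.8 is illustrative; the paper's actual similarity measure and decision rule are not given in the abstract.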
14. Infrared and Visible Image Fusion Based on Three-Branch Adversarial Learning and Compensated Attention
Authors: Di Jing, Ren Li, Liu Jizhao, Guo Wenqing, Lian Jing. Infrared Technology (CSCD, PKU Core), 2024, Issue 5, pp. 510-521 (12 pages).
Existing deep learning fusion methods rely on convolution to extract features and do not consider the global features of the source images, so their fusion results often suffer from blurred texture and low contrast. This paper proposes an infrared and visible image fusion method based on three-branch adversarial learning and a compensated attention mechanism. First, the generator network uses dense blocks and a compensated attention mechanism to build a local-global three-branch structure for extracting feature information. The compensated attention mechanism, built from channel and spatial feature variations, extracts global information and further captures infrared target and visible detail representations. A focused dual-adversarial discriminator is then designed to determine the distributional similarity between the fusion result and the source images. Finally, experiments on the public TNO and RoadScene datasets against nine representative fusion methods show that the proposed method not only produces fusion results with clearer texture detail and better contrast, but also outperforms the other advanced methods on objective metrics.
Keywords: infrared and visible image fusion; local-global three-branch; local feature extraction; compensated attention mechanism; adversarial learning; focused dual-adversarial discriminator
15. Infrared and Visible Image Fusion Based on Multi-Scale Contrast Enhancement and a Cross-Dimensional Interactive Attention Mechanism
Authors: Di Jing, Liang Chan, Ren Li, Guo Wenqing, Lian Jing. Infrared Technology (CSCD, PKU Core), 2024, Issue 7, pp. 754-764 (11 pages).
To address insufficient feature extraction, inconspicuous target regions, and missing detail information in current infrared and visible image fusion, a fusion method based on multi-scale contrast enhancement and a cross-dimensional interactive attention mechanism is proposed. First, a multi-scale contrast enhancement module is designed to strengthen the intensity information of target regions and aid the fusion of complementary information. Second, dense connection blocks are used for feature extraction, reducing information loss and maximizing information use. Next, a cross-dimensional interactive attention mechanism is designed to help capture key information and thereby improve network performance. Finally, a decomposition network from the fused image back to the source images is designed so that the fused image contains more scene detail and richer texture. Evaluation experiments on the TNO dataset show that the fused images produced by this method have salient target regions and rich texture detail, with better fusion performance and stronger generalization, outperforming the comparison methods in subjective and objective evaluation.
Keywords: infrared and visible image fusion; multi-scale contrast enhancement; cross-modal interactive attention mechanism; decomposition network
16. An Infrared and Visible Image Fusion Algorithm Based on Multi-Layer Convolution
Authors: Chen Haixiu, Fang Weizhi, Lu Kang, Lu Cheng, Huang Zijie, Chen Zi'ang. Electronics Optics & Control (CSCD, PKU Core), 2024, Issue 9, pp. 12-17, 44 (7 pages).
To address the loss of texture detail and poor visual quality of fused images in complex backgrounds, an infrared and visible image fusion algorithm based on multi-layer convolution is proposed. Its network framework is divided into three parts: an encoder, a decoder, and a fusion network. An efficient channel attention mechanism is introduced in the encoder to encode the source images; multi-layer convolution blocks, gradient convolution blocks, down-sampling convolution blocks, and a convolutional spatial-channel attention mechanism are combined into a multi-layer convolutional fusion network (MCFN), which performs feature fusion before the decoder reconstructs and outputs the fused image. Five existing algorithms are compared with the proposed one using eight objective evaluation metrics on two datasets. The results show that the images fused by the proposed algorithm have prominent targets, clear details, and distinct contours, with significantly improved metrics, in line with human visual perception.
Keywords: image fusion; infrared image; visible image; multi-layer convolution; fusion network; attention mechanism
17. An Infrared and Visible Image Fusion Network Based on Pretrained Fixed Parameters and Deep Feature Modulation
Authors: Xu Shaoping, Zhou Changfei, Xiao Jian, Tao Wuyong, Dai Tianyu. Journal of Electronics & Information Technology (EI, CAS, CSCD, PKU Core), 2024, Issue 8, pp. 3305-3313 (9 pages).
To better exploit the complementary information in infrared and visible images and obtain fused images that match the perceptual characteristics of the human eye, this paper adopts a two-stage training strategy to propose a fusion network based on pretrained fixed parameters and deep feature modulation (PDNet). Specifically, in the self-supervised pretraining stage, a large number of clear natural images serve as both the input and output of a U-shaped network (UNet), which is pretrained as an autoencoder: the resulting encoder module effectively extracts multi-scale deep features of the input image, while the decoder module reconstructs an output image with minimal difference from the input. In the unsupervised fusion-training stage, the network parameters of the pretrained encoder and decoder are kept fixed, and a new fusion module containing a Transformer structure is inserted between them. The multi-head self-attention in the Transformer allocates reasonable weights to the deep features the encoder extracts from the infrared and visible images, fusing and modulating them at multiple scales onto the manifold of natural-image deep features, so that the fused image reconstructed by the decoder has good visual perceptual quality. Extensive experiments show that, compared with current mainstream fusion models (algorithms), the proposed PDNet model has significant advantages on multiple objective evaluation metrics, and in subjective visual evaluation it also better matches human visual perception.
Keywords: infrared and visible images; image fusion; self-supervised pretraining; unsupervised fusion training; fixed parameters; deep feature modulation
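The multi-head self-attention at the core of PDNet's fusion module follows the standard scaled dot-product form. A minimal NumPy sketch of that standard operation (not the paper's full module; shapes are heads x tokens x dim):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V.
    Returns the attended values and the attention weight matrix."""
    d = q.shape[-1]
    w = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d))
    return w @ v, w
```

In a fusion setting, tokens from the infrared and visible feature maps attend to each other, and the attention weights realise the adaptive allocation the abstract describes.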
18. Infrared and Visible Image Fusion Based on AGF and CNN
Authors: Yang Yanchun, Yang Wanxuan, Lei Huiyun. Laser & Infrared (CAS, CSCD, PKU Core), 2024, Issue 7, pp. 1141-1148 (8 pages).
To address edge blurring and detail loss in infrared and visible image fusion, this paper proposes a fusion algorithm based on an alternating guided filter (AGF) and a mask-guided convolutional neural network (CNN). First, the source images are decomposed by alternating guided filtering into base and detail layers. Then, the base layers are fused using an energy-attribute fusion rule to obtain the base fused image, while the detail layers are fused by a convolutional neural network under the guidance of a mask-guided loss function to obtain the detail fused image. Finally, the base and detail fused images are added to obtain the final fused image. Experimental results show that the method highlights salient thermal targets while preserving rich background edge-texture information, and achieves better results on objective evaluation metrics than the comparison methods, demonstrating its superiority.
Keywords: image processing; infrared and visible images; alternating guided filtering; convolutional neural network; image fusion
19. Infrared and Visible Image Fusion Based on a Fast Joint Bilateral Filter and Improved PCNN
Authors: Yang Yanchun, Lei Huiyun, Yang Wanxuan. Infrared Technology (CSCD, PKU Core), 2024, Issue 8, pp. 892-901 (10 pages).
To address detail loss, inconspicuous targets, and low contrast in infrared and visible image fusion results, a fusion method combining a fast joint bilateral filter (FJBF) and an improved pulse-coupled neural network (PCNN) is proposed, which effectively improves running efficiency while guaranteeing fusion quality. First, the source images are decomposed with the fast joint bilateral filter. Second, to better extract the salient structures and target information in the images, the base-layer images are fused with a weighted-average rule based on a visual significance map (VSM), and the detail-layer images are fused with an improved pulse-coupled neural network model in which all PCNN parameters adapt to the input band. Finally, the fused base and detail layers are superimposed to reconstruct the fused image. Experimental results show that the method improves the quality of the fused image and effectively preserves important information such as targets, background detail, and edges.
Keywords: image processing; fast joint bilateral filter; pulse-coupled neural network; infrared and visible images; image fusion
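A simplified PCNN iteration conveys the firing dynamics that such methods exploit: stronger stimuli fire earlier and more often, and the cumulative firing map serves as a fusion decision map. The parameter values and 4-neighbour linking below are illustrative assumptions, not the paper's adaptive settings:

```python
import numpy as np

def pcnn_fire_map(stim, iterations=30, beta=0.5, alpha_theta=0.2, v_theta=20.0):
    """Simplified pulse-coupled neural network. Returns the cumulative
    firing map of the stimulus (e.g. detail-layer coefficients)."""
    s = stim.astype(np.float64) / (stim.max() + 1e-12)  # normalised feeding input
    y = np.zeros_like(s)            # pulse output
    theta = np.ones_like(s)         # dynamic threshold
    fires = np.zeros_like(s)
    for _ in range(iterations):
        link = np.zeros_like(s)
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-neighbour linking
            link += np.roll(np.roll(y, di, axis=0), dj, axis=1)
        u = s * (1.0 + beta * link)                # internal activity
        y = (u > theta).astype(np.float64)         # neurons fire above threshold
        theta = theta * np.exp(-alpha_theta) + v_theta * y  # decay, jump on fire
        fires += y
    return fires
```

A typical fusion rule then keeps, per position, the coefficient whose fire count is larger.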
20. Infrared and Visible Image Fusion Combining Multi-Scale Features and Convolutional Attention
Authors: Qi Yanjie, Hou Qinhe. Infrared Technology (CSCD, PKU Core), 2024, Issue 9, pp. 1060-1069 (10 pages).
To address insufficient single-scale feature extraction and the loss of infrared targets and visible texture detail in infrared and visible image fusion, a fusion algorithm combining multi-scale features and convolutional attention is proposed. First, an encoder network combining a multi-scale feature extraction module and a deformable convolutional attention module is designed to extract the important features of the infrared and visible images over multiple receptive fields. Then, a fusion strategy based on dual spatial and channel attention is adopted to further fuse the representative features of the two modalities. Finally, a decoder network composed of three convolutional layers reconstructs the fused image. In addition, a hybrid loss function based on mean squared error, multi-scale structural similarity, and color is designed to constrain network training and further increase the similarity between the fused and source images. Compared with seven image fusion algorithms on public datasets, the proposed algorithm shows better edge preservation, retention of source image information, and fused image quality in both subjective and objective evaluation.
Keywords: infrared and visible images; hybrid loss function; multi-scale feature extraction; attention mechanism; image fusion