Abstract
A new adaptive PCNN image fusion method that incorporates the characteristics of the human visual system is proposed. The local contrast of each pixel is used as the linking strength of the corresponding PCNN neuron. After the PCNN firing process, a fire mapping image is obtained for each image taking part in the fusion, and a compare-selection operator then picks the salient features of each source image to generate the fused image. Apart from a few principal parameters, the remaining parameters, such as the threshold adjusting constant, have little effect on the fusion result, which alleviates the difficulty of tuning the many parameters of PCNN in image processing. Experimental results show that the fusion quality is better than that of the classical wavelet transform and Laplacian pyramid methods.
This paper proposes a new fusion algorithm based on the improved pulse coupled neural network (PCNN) model, the fundamental characteristics of images, and the properties of the human vision system. In the traditional algorithm, the linking strength of every neuron is set to the same value, chosen by experiment; this algorithm instead uses the local contrast of each pixel as that value, so that the linking strength is chosen adaptively. After the PCNN is run with the adaptive linking strengths, a fire mapping image is obtained for each image taking part in the fusion. The clear regions of each original image are decided, pixel by pixel, by a compare-selection operator applied to the fire mapping images, and they are then merged into a new, clear fused image. Furthermore, with this algorithm the remaining parameters, for example the threshold adjusting constant, have only a slight effect on the fused image, so it overcomes the difficulty of adjusting the parameters of PCNN. Experimental results indicate that the method outperforms the traditional approaches in preserving edge information while improving texture information.
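To make the described procedure concrete, the following Python sketch illustrates one way such a pipeline could look: the local contrast of each pixel serves as the linking strength of the corresponding neuron, a simplified discrete PCNN is iterated to accumulate a fire mapping image for each source image, and a pixel-wise compare-selection step keeps the pixel whose neuron fired more often. The function names, the linking kernel, the iteration count, the decay and threshold constants, and the exact contrast definition are illustrative assumptions, not values taken from the paper, and the PCNN equations follow a commonly used simplified form of the model rather than the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter, convolve

def local_contrast(img, size=3, eps=1e-6):
    """Per-pixel local contrast: |I - local mean| / local mean (one common definition)."""
    mean = uniform_filter(img, size=size)
    return np.abs(img - mean) / (mean + eps)

def pcnn_fire_map(img, beta, n_iter=200, alpha_theta=0.2, v_theta=20.0,
                  alpha_l=1.0, v_l=1.0):
    """Run a simplified discrete PCNN and return how often each neuron fired."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])          # linking weights to 8 neighbours
    F = img.astype(float)                          # feeding input: pixel intensity
    L = np.zeros_like(F)                           # linking input
    Y = np.zeros_like(F)                           # pulse output
    theta = np.ones_like(F)                        # dynamic threshold
    fire_map = np.zeros_like(F)
    for _ in range(n_iter):
        L = np.exp(-alpha_l) * L + v_l * convolve(Y, kernel, mode='constant')
        U = F * (1.0 + beta * L)                   # internal activity with adaptive beta
        Y = (U > theta).astype(float)              # neurons fire where activity exceeds threshold
        theta = np.exp(-alpha_theta) * theta + v_theta * Y
        fire_map += Y                              # accumulate firing times
    return fire_map

def fuse(img_a, img_b, **pcnn_kwargs):
    """Fuse two registered grayscale images with intensities in [0, 1]."""
    fm_a = pcnn_fire_map(img_a, local_contrast(img_a), **pcnn_kwargs)
    fm_b = pcnn_fire_map(img_b, local_contrast(img_b), **pcnn_kwargs)
    # compare-selection: keep the pixel whose neuron fired more often
    return np.where(fm_a >= fm_b, img_a, img_b)
```

For two registered grayscale inputs normalized to [0, 1], fuse(img_a, img_b) returns the fused image; because the linking strength is derived from the data, only the PCNN constants above remain to be set by hand.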
Source
Chinese Journal of Computers (《计算机学报》)
Indexed in EI, CSCD, and the Peking University Core Journals (北大核心) list
2008, No. 5, pp. 875-880 (6 pages)
Funding
Supported by the National Natural Science Foundation of China (60702063)
and the Youth Science Foundation of the Guangxi Zhuang Autonomous Region (桂科青0640067)
Keywords
image fusion
Pulse-Coupled Neural Network (PCNN)
human vision system
local contrast
linking strength
fire mapping image