Abstract
Image super-resolution based on continuous representations can scale image resolution by arbitrary factors and has become the mainstream direction of research in this field. The implicit neural representation method takes coordinate information and deep feature information as input and outputs the RGB value (red, green, blue) at a given coordinate, providing a basic framework for constructing local continuous representations; it is a typical continuous representation method. However, implicit neural representation methods fail to fully exploit the local structural information of images. To address this, an implicit neural representation method based on weight learning and an attention mechanism is proposed. First, a weight-learning module is introduced, which uses gradient information and a multilayer perceptron to learn the weights of neighboring feature points. Second, a channel attention mechanism is introduced to strengthen the key information in the feature channels and improve the accuracy of the local continuous representation of the image. Numerical experiments show that, through the joint action of these two mechanisms, the proposed algorithm achieves a significant performance improvement over existing algorithms and is highly competitive.
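The abstract outlines an implicit-neural-representation pipeline in which a decoder maps coordinates and deep features to RGB values, with two added components: a weight-learning module for the neighboring feature points and channel attention on the feature maps. The PyTorch sketch below is a minimal illustration of these ideas under stated assumptions; the class names (ChannelAttention, WeightLearner, ImplicitDecoder), the gradient-cue input, and all hyperparameters are illustrative placeholders, not the paper's released code.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """SE-style channel attention: reweights feature channels by global statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global average pooling
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):                                  # x: (B, C, H, W)
        return x * self.fc(x)                              # channel-wise rescaling


class WeightLearner(nn.Module):
    """MLP that predicts blending weights for the 4 nearest feature points
    from per-point cues (e.g. relative coordinates and local gradient values)."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, cues):                               # cues: (B, Q, 4, in_dim)
        w = self.mlp(cues).squeeze(-1)                     # (B, Q, 4)
        return torch.softmax(w, dim=-1)                    # normalized ensemble weights


class ImplicitDecoder(nn.Module):
    """MLP mapping [latent feature, relative coordinate, cell size] to an RGB value."""
    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 4, hidden),               # +2 rel. coord, +2 cell size
            nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 3),                          # RGB output
        )

    def forward(self, feat, rel_coord, cell):
        return self.mlp(torch.cat([feat, rel_coord, cell], dim=-1))
```

At query time, each continuous coordinate would decode the four surrounding latent codes and blend the resulting RGB predictions with the learned weights, rather than with the fixed area-based weights used in standard local implicit image functions.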
Authors
TAN Mingwang; ZHANG Xuande
School of Electronic Information and Artificial Intelligence, Shaanxi University of Science & Technology, Xi'an 710021, China
Source
Software Engineering (《软件工程》)
2024, No. 11, pp. 20-24 (5 pages)
Keywords
super-resolution reconstruction
implicit neural representation
convolutional neural networks
attention mechanism