Abstract
Unmanned aerial vehicle (UAV) remote sensing image stitching combines two or more high-resolution UAV remote sensing images with overlapping scene content into a single wide-view image that carries more information, and it has been widely used in military, geographic mapping and other fields. Traditional algorithms usually rely on hand-crafted features, which cannot effectively extract features from weakly textured images, and stitching fails when the parallax between images is large. To solve these problems, a supervised model for UAV remote sensing image stitching was proposed based on the Visual Geometry Group-16 (VGG-16) network combined with a Siamese network framework. A weight-sharing Siamese feature extraction network based on VGG-16 was designed to address insufficient feature extraction, and a regression network capable of regressing the spatial transformation between images was designed, in which ordinary convolutions were replaced by group convolutions to speed up the network. Meanwhile, to overcome the difficulty of obtaining an image stitching dataset labeled with the true transformation between images, a dataset was constructed based on a bounded degree of affine transformation. Experimental results show that the proposed method performs well on UAV remote sensing image stitching in both subjective visual quality and objective evaluation metrics. Compared with the ORB (oriented FAST and rotated BRIEF) + PROSAC algorithm and the content-aware unsupervised deep homography estimation (CAU-DHE) algorithm, the stitching accuracy is visibly improved, the structural similarity is increased by about 12.4% and 2.3%, respectively, and the root mean square error is reduced by about 15.0% and 4.4%, respectively.
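The abstract only names the building blocks of the model; purely as an illustration, the PyTorch sketch below wires a weight-sharing Siamese branch on top of torchvision's VGG-16 convolutional stack and feeds the concatenated features to a grouped-convolution regression head that outputs six affine parameters. The layer widths, group count, 128x128 input size, six-parameter output and the choice of PyTorch are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' released code) of the architecture described in the
# abstract: a weight-sharing Siamese feature extractor built on VGG-16, followed by a
# regression head that uses grouped convolutions and predicts the spatial transform
# (assumed here to be the 6 parameters of an affine warp). All sizes are illustrative.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class SiameseStitchNet(nn.Module):
    def __init__(self, num_params: int = 6):  # 6 affine parameters (assumption)
        super().__init__()
        # Shared backbone: the same VGG-16 convolutional stack is applied to both
        # input images, so the two branches share every weight ("Siamese" design).
        self.backbone = vgg16(weights=None).features  # convolutional layers only
        # Regression head: grouped convolutions stand in for the paper's
        # "group convolution instead of normal convolution" speed-up.
        self.head = nn.Sequential(
            nn.Conv2d(1024, 512, kernel_size=3, padding=1, groups=8),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 256, kernel_size=3, padding=1, groups=8),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(256, num_params),
        )

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        feat_a = self.backbone(img_a)            # (N, 512, H/32, W/32)
        feat_b = self.backbone(img_b)            # same weights -> same mapping
        fused = torch.cat([feat_a, feat_b], 1)   # (N, 1024, H/32, W/32)
        return self.head(fused)                  # predicted transform parameters


if __name__ == "__main__":
    net = SiameseStitchNet()
    a = torch.randn(2, 3, 128, 128)  # reference patch
    b = torch.randn(2, 3, 128, 128)  # patch warped by an unknown affine transform
    print(net(a, b).shape)           # torch.Size([2, 6])
```

Training pairs of the kind the abstract describes (a patch plus a copy warped by a small, known affine transform, whose parameters serve as the regression label) could be synthesized with a routine such as OpenCV's cv2.warpAffine; the exact perturbation ranges used by the authors are not specified here.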
Authors
LI Jiacheng (李嘉诚), ZHU Fuzhen (朱福珍)
College of Electronic Engineering, Heilongjiang University, Harbin 150080, China
Source
Journal of Natural Science of Heilongjiang University (黑龙江大学自然科学学报), CAS
2023, No. 4, pp. 496-504 (9 pages)
Funding
Fundamental Research Funds for Heilongjiang Provincial Universities (2022-KYYWF-1090)
National Natural Science Foundation of China (61601174)
Heilongjiang Provincial Postdoctoral Research Start-up Foundation (LBH-Q17150)
Open Project of the Key Laboratory of Electronic Engineering of Heilongjiang Provincial Universities (Heilongjiang University) and the Science and Technology Innovation Team Project of Heilongjiang Provincial Universities (2012TD007)
Basic Research Project of the Fundamental Research Funds for Heilongjiang Provincial Universities (KJCXZD201703)
Natural Science Foundation of Heilongjiang Province (F2018026)
Keywords
UAV remote sensing image
image stitching
Siamese network
image registration