Abstract
Existing cross-view matching methods for remote sensing imagery cannot match directly against large satellite images, which makes it difficult to meet the demands of matching over large and complex scenes; they also depend on large-scale training datasets and therefore generalize poorly. To address these problems, this paper builds on the quality-aware template matching method and combines it with a multi-scale feature fusion algorithm to propose a cross-view remote sensing image matching method based on viewpoint transformation. First, multi-view ground images are collected with handheld photographic equipment, whose portability and flexibility make it easier to cover the target area; the images are densely matched to generate a point cloud, and principal component analysis is used to fit the best ground plane and perform a projection transformation, converting the ground side view into an aerial view. Then, a feature fusion module is designed to fuse the low-, medium-, and high-level features extracted from the remote sensing image by a VGG19 network, so that the fused features carry rich spatial and semantic information and can withstand large scale differences. Finally, the quality-aware template matching method matches the features extracted from the viewpoint-transformed ground images against the fused features of the remote sensing image to obtain soft-ranking matching results, from which the non-maximum suppression algorithm selects high-quality matches. Experimental results show that, without requiring large-scale datasets, the proposed method achieves high accuracy and strong generalization ability: the average matching success rate is 64.6% and the average center-point offset is 5.9 pixels. The matching results are accurate and complete, providing a new solution for cross-view image matching tasks in large scenes.
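The viewpoint-transformation step summarized above (fitting the best ground plane to a dense-matching point cloud with principal component analysis, then projecting onto it) can be illustrated with a minimal sketch. This is a hypothetical example rather than the authors' implementation; the function names and the synthetic point cloud are assumptions for illustration only.

```python
# Hypothetical sketch (not the paper's code): PCA-based ground-plane fitting
# and projection of a point cloud into plane coordinates, approximating the
# ground-to-aerial viewpoint conversion described in the abstract.
import numpy as np


def fit_ground_plane(points):
    """Fit a plane to an (N, 3) point cloud via PCA.

    Returns the centroid and an orthonormal basis whose first two rows span
    the plane and whose third row is the plane normal (least-variance axis).
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Right singular vectors = eigenvectors of the covariance matrix,
    # ordered by decreasing variance; the last one is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centroid, vt


def project_to_plane(points, centroid, basis):
    """Project 3D points into 2D plane coordinates (a nadir-like view)."""
    centered = points - centroid
    return centered @ basis[:2].T  # coordinates along the two in-plane axes


if __name__ == "__main__":
    # Toy point cloud: a noisy tilted plane standing in for dense-matching output.
    rng = np.random.default_rng(0)
    xy = rng.uniform(-10, 10, size=(1000, 2))
    z = 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + rng.normal(0, 0.05, 1000)
    cloud = np.column_stack([xy, z])

    centroid, basis = fit_ground_plane(cloud)
    ortho_view = project_to_plane(cloud, centroid, basis)
    print("plane normal:", basis[2], "projected shape:", ortho_view.shape)
```

In the paper's pipeline this projected, aerial-like view of the ground imagery would then be matched against the fused VGG19 features of the satellite image via quality-aware template matching; that stage is not sketched here.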
Authors
饶子昱
卢俊
郭海涛
余东行
侯青峰
RAO Ziyu; LU Jun; GUO Haitao; YU Donghang; HOU Qingfeng (Institute of Geospatial Information, Strategic Support Force Information Engineering University, Zhengzhou 450001, China)
Source
《地球信息科学学报》
CSCD
Peking University Core Journals
2023, No. 2, pp. 368-379 (12 pages)
Journal of Geo-information Science
Funding
National Natural Science Foundation of China (41601507)
Basic Strengthening Program Technical Field Fund (2019-JCJQ-JJ-126).
Keywords
remote sensing image
cross-view matching
viewpoint transformation
multi-scale feature
feature fusion
template matching
generalization