Integrated Registration Method with Enhanced Position Awareness for Optical and SAR Images
Cited by: 1
Abstract: Image registration is the basis of optical and SAR image information fusion. Most existing registration methods rely on feature-point detection and matching; because of their poor applicability to different scene regions, they are prone to excessive mismatched points or an insufficient number of valid corresponding points, causing registration to fail. To address this problem, this paper proposes an integrated registration method with enhanced position awareness for optical and SAR images. The method uses a deep network to directly regress the geometric transformation between images, achieving end-to-end high-precision registration without relying on feature-point detection and matching. Specifically, a feature-extraction module that integrates coordinate attention is first used in the backbone network to capture position-sensitive fine-grained features from the input image pair. Second, the multi-scale features output by the backbone network are fused, taking into account both the localization information of shallow features and the semantic information of high-level features. Finally, a loss function that combines position deviation and image similarity is proposed to optimize the registration result. Experimental results on the publicly available high-resolution optical and SAR registration dataset OS-Dataset show that, compared with four existing typical algorithms (OS-SIFT, RIFT2, DHN, and DLKFM), the proposed method is robust across different scene regions such as urban, farmland, river, repetitive-texture, and weak-texture areas, and outperforms the existing algorithms in both visual registration quality and quantitative accuracy metrics. The percentage of average corner errors below 3 pixels is more than 25% higher than that of DLKFM, the most accurate of the four algorithms, and the registration speed is comparable to that of DHN, the fastest of the four, enabling high-precision and high-efficiency optical and SAR image registration.
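To make the two loss terms and the evaluation statistic mentioned in the abstract more concrete, the sketch below gives one plausible PyTorch formulation: an L1 corner-position deviation term, a normalized-cross-correlation image-similarity term, and the average-corner-error success rate. The function names, the choice of NCC as the similarity measure, and the weights `alpha` and `beta` are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def corner_position_loss(pred_corners, gt_corners):
    # Mean L1 deviation (in pixels) between predicted and ground-truth corner positions.
    # Shapes: (batch, 4, 2) -- the four patch corners, (x, y) each.
    return F.l1_loss(pred_corners, gt_corners)

def image_similarity_loss(warped_sar, optical, eps=1e-6):
    # 1 - zero-mean normalized cross-correlation between the SAR patch warped by the
    # predicted transform and the optical reference; smaller means more similar.
    # Shapes: (batch, 1, H, W). NCC is an assumed choice of similarity measure.
    w = warped_sar - warped_sar.mean(dim=(-2, -1), keepdim=True)
    o = optical - optical.mean(dim=(-2, -1), keepdim=True)
    ncc = (w * o).sum(dim=(-2, -1)) / (
        w.pow(2).sum(dim=(-2, -1)).sqrt() * o.pow(2).sum(dim=(-2, -1)).sqrt() + eps
    )
    return 1.0 - ncc.mean()

def joint_registration_loss(pred_corners, gt_corners, warped_sar, optical,
                            alpha=1.0, beta=0.1):
    # Weighted combination of position deviation and image similarity (weights assumed).
    return (alpha * corner_position_loss(pred_corners, gt_corners)
            + beta * image_similarity_loss(warped_sar, optical))

def ace_success_rate(pred_corners, gt_corners, threshold=3.0):
    # Fraction of image pairs whose average corner error (ACE) is below `threshold` pixels,
    # i.e. the "percentage of average corner errors below 3 pixels" reported above.
    ace = (pred_corners - gt_corners).norm(dim=-1).mean(dim=-1)  # (batch,)
    return (ace < threshold).float().mean()
```

In a full training loop, the SAR patch would be warped with the regressed transformation (for example, via a differentiable spatial transformer) before the similarity term is computed; the corner term supervises position accuracy directly, while the similarity term encourages photometric consistency in regions where corner supervision alone is weak.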
Authors: YANG Yuting; ZHAO Lingjun; ZHAO Lulu; ZHANG Han; XIONG Boli; JI Kefeng (State Key Laboratory of CEMEE, College of Electronic Science and Technology, National University of Defense Technology, Changsha, Hunan 410073, China; Army 95369 of PLA, Foshan, Guangdong 528000, China; Northwest Institute of Nuclear Technology, Xi'an, Shaanxi 710024, China)
Source: Journal of Signal Processing (《信号处理》), CSCD, Peking University Core Journal, 2024, No. 3, pp. 557-568 (12 pages)
Funding: Natural Science Foundation of Hunan Province (2021JJ40684); Independent Research Fund of the Key Laboratory of Satellite Information Intelligent Processing and Application Technology (2022-ZZKY-JJ-10-02)
Keywords: image registration; position awareness; multi-scale feature; feature fusion; image similarity