Abstract
To address the low matching accuracy caused by nonlinear radiometric distortion between optical and SAR images, an optical and SAR image matching method optimized with a linear attention mechanism is proposed. The method uses the description branch of SuperPoint to fuse pixel-level histogram of oriented gradients features and reconstruct the feature map along the channel dimension, strengthening the semantic discrimination of the features at the dimensional level. In the feature matching stage, a linear attention mechanism is used to optimize the SuperGlue algorithm, and a multi-scale loss function combining a single-sample classification accuracy constraint, a full-sample global consistency constraint, and a local structural consistency constraint is constructed for training, improving the discrimination of mismatched features at different scales. Comparative experiments on six pairs of optical and SAR remote sensing images show that, compared with the HOPC, AWOG, CMM-Net, and SuperGlue methods, the proposed method achieves substantially higher matching accuracy and efficiency.
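The abstract does not include implementation details. As an illustration only, the following is a minimal PyTorch sketch of the kind of linear (kernelized) attention that is typically substituted for the quadratic softmax attention in SuperGlue-style graph neural network matchers; the elu(x)+1 feature map, the tensor layout, and the class name LinearAttention are assumptions made for this sketch, not the authors' code.

```python
import torch
import torch.nn as nn


def elu_feature_map(x):
    # Positive feature map phi(x) = elu(x) + 1, a common choice for linear attention.
    return torch.nn.functional.elu(x) + 1.0


class LinearAttention(nn.Module):
    """Linear (kernelized) attention whose cost grows linearly with the number of
    keypoints, instead of the quadratic softmax attention used in the original
    SuperGlue message passing."""

    def __init__(self, eps=1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, q, k, v):
        # q, k, v: (batch, num_points, num_heads, head_dim)
        q = elu_feature_map(q)
        k = elu_feature_map(k)
        # Aggregate keys and values first: (batch, heads, head_dim, head_dim).
        kv = torch.einsum("bnhd,bnhm->bhdm", k, v)
        # Per-query normalization term: (batch, num_points, heads).
        z = 1.0 / (torch.einsum("bnhd,bhd->bnh", q, k.sum(dim=1)) + self.eps)
        # Attended values: (batch, num_points, heads, head_dim).
        return torch.einsum("bnhd,bhdm,bnh->bnhm", q, kv, z)
```

Because the keys and values are aggregated before being combined with the queries, the computation avoids forming the full attention matrix between keypoint sets, which is consistent with the efficiency gain reported in the abstract.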
Author
HE Qiao (何巧), Guangzhou City Polytechnic, Guangzhou 510000, China
Source
Remote Sensing Information (《遥感信息》)
Indexed in CSCD and the Peking University Core Journals list (北大核心)
2024, No. 5, pp. 171-178 (8 pages)
Keywords
image matching
grayscale information
feature fusion
linear attention mechanism
multi-scale loss function