
Optical remote sensing road extraction network based on GCN guided model viewpoint
Abstract: In optical remote sensing images, roads are easily affected by multiple factors such as occlusions, pavement materials, and the surrounding environment, which blur their features. Even when existing road extraction methods enhance their feature perception capabilities, they still produce many misjudgments in feature-blurred regions. To address this problem, this paper proposes a road extraction network based on a GCN-guided model viewpoint (RGGVNet). RGGVNet adopts an encoder-decoder structure and introduces a GCN-based viewpoint guidance module (GVPG) at the encoder-decoder connections to repeatedly guide the model viewpoint, thereby strengthening attention to feature-blurred regions. GVPG exploits the fact that GCN information propagation averages feature weights: the road saliency levels of different regions in the feature map are used as a Laplacian matrix and participate in GCN information propagation, thereby guiding the model viewpoint. In addition, a dense guidance viewpoint strategy (DGVS) is proposed, which densely connects the encoder, GVPG modules, and decoder to one another, ensuring effective guidance of the model viewpoint while alleviating optimization difficulties. In the decoding stage, a multi-resolution feature fusion (MRFF) module is designed to minimize the information offset and loss of road features of different scales during feature fusion and upsampling. On two public remote sensing road datasets, the proposed method achieves IoU of 65.84% and 69.36% and F1-scores of 79.40% and 81.90%, respectively. Both the quantitative and qualitative experimental results show that the proposed method outperforms other mainstream methods.
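To make the GVPG description above more concrete, the following is a minimal, illustrative PyTorch sketch of a single GCN-style propagation step in which per-region road-saliency scores are turned into a normalized, Laplacian-style propagation matrix that weights information exchange between regions. The module name, tensor shapes, and the exact construction of the matrix are assumptions made for illustration only; this is not the authors' GVPG implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyGuidedPropagation(nn.Module):
    """Illustrative GCN-style step weighted by road saliency (sketch, not the authors' code)."""

    def __init__(self, channels):
        super().__init__()
        # Shared linear transform applied to every region's feature vector (the GCN weight W).
        self.weight = nn.Linear(channels, channels, bias=False)

    def forward(self, region_feats, saliency):
        # region_feats: (B, N, C) features of N spatial regions pooled from an encoder feature map
        # saliency:     (B, N)    assumed road-saliency score per region, in [0, 1]
        # Pairwise saliency products form a dense adjacency: propagation between two regions
        # is stronger when both look road-like (an assumption made for this sketch).
        adj = saliency.unsqueeze(2) * saliency.unsqueeze(1)      # (B, N, N)
        adj = adj + torch.eye(adj.size(1), device=adj.device)    # add self-loops
        deg = adj.sum(dim=2, keepdim=True).clamp(min=1e-6)       # node degrees
        norm_adj = adj / deg                                     # row-normalized propagation matrix
        return F.relu(norm_adj @ self.weight(region_feats))      # one averaging propagation step

# Usage sketch: 64 pooled regions with 256-channel descriptors, batch of 2.
module = SaliencyGuidedPropagation(channels=256)
feats = torch.randn(2, 64, 256)
sal = torch.rand(2, 64)
guided = module(feats, sal)   # (2, 64, 256) saliency-guided region features

The row normalization plays the role of the averaging behavior of GCN propagation mentioned in the abstract; how RGGVNet actually builds the Laplacian from road saliency is described in the full paper.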
Authors: LIU Guanghui, SHAN Zhe, YANG Yuanhai, WANG Heng, MENG Yuebo, XU Shengjun (College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China; Xi'an Key Laboratory of Intelligent Technology for Building and Manufacturing, Xi'an 710055, China)
Source: Optics and Precision Engineering (光学精密工程), 2024, Issue 10, pp. 1552-1566 (15 pages); indexed in EI, CAS, CSCD, and the Peking University Core list.
Funding: Shaanxi Province Key Research and Development Program (No. 2021SF-429); Shaanxi Province Natural Science Basic Research Program (No. 2023-JC-YB-532).
Keywords: optical remote sensing images; road extraction; deep neural network; graph convolution network