
Building Extraction from High-Resolution Remote Sensing Images Based on Improved Coordinate Attention and U-Net Network
Abstract: Accurately extracting buildings from remote sensing images is crucial in fields such as urban planning, statistical surveys, and disaster emergency assessment. However, due to the diversity of building forms and the complexity of the ground environment in high-resolution remote sensing images, complete and high-precision building extraction remains a challenge. This paper therefore proposes a new network for extracting buildings from high-resolution remote sensing images. The network retains the encoder-decoder structure of U-Net and integrates a Coordinate Self-Attention Module (CSAM) that adjusts the network's attention to different regions of the input image, enabling it to selectively capture and emphasize important semantic information and strengthening its feature extraction capability. Experiments on the WHU building dataset, with a spatial resolution of 0.3 m, show that the proposed network achieves more accurate building extraction results than U-Net, PSPNet, and DeepLabV3+, reaching a pixel accuracy of 98.21%, a precision of 95.28%, a recall of 94.57%, and an intersection over union (IoU) of 90.34%.
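The abstract describes the CSAM only at a high level, so the sketch below is not the authors' implementation. It illustrates a generic coordinate-attention-style block (in the spirit of Hou et al.'s coordinate attention) that could be inserted between a U-Net encoder stage and its decoder skip connection: spatial context is pooled separately along the height and width axes, encoded jointly, and turned into two direction-aware attention maps that rescale the feature map. The class name, reduction ratio, and PyTorch framing are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate-attention-style block: a hypothetical stand-in for the paper's CSAM."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        mid = max(8, channels // reduction)
        # Pool along one spatial axis at a time so positional information is preserved.
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Direction-aware pooling along height and width.
        x_h = self.pool_h(x)                      # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (B, C, W, 1)
        # Joint encoding of both directions, then split back apart.
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)             # (B, mid, 1, W)
        # Per-direction attention maps, applied multiplicatively to the input.
        a_h = torch.sigmoid(self.conv_h(y_h))     # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w))     # (B, C, 1, W)
        return x * a_h * a_w
```

The reported scores are the standard binary segmentation metrics; a minimal way to compute them from predicted and ground-truth masks (0/1 tensors) is:

```python
def binary_metrics(pred, target, eps=1e-7):
    """Pixel accuracy, precision, recall, and IoU for binary building masks."""
    tp = ((pred == 1) & (target == 1)).sum().item()
    tn = ((pred == 0) & (target == 0)).sum().item()
    fp = ((pred == 1) & (target == 0)).sum().item()
    fn = ((pred == 0) & (target == 1)).sum().item()
    return {
        "pixel_accuracy": (tp + tn) / (tp + tn + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),
        "iou": tp / (tp + fp + fn + eps),
    }
```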
Author: 陈康 (Chen Kang)
Source: Advances in Applied Mathematics (《应用数学进展》), 2024, No. 3, pp. 891-899 (9 pages)