Abstract
To address the difficulty of capturing global context information in remote sensing image change detection, this paper proposes TSU-Net, a model built on a Siamese structure, skip connections, and the Transformer architecture. The model's encoder adopts a hybrid CNN-Transformer design that uses the self-attention mechanism to capture global context in remote sensing imagery, strengthening the model's long-range context modeling for pixel-level change detection. Evaluated on the LEVIR-CD and CDD datasets, the model achieves F1 scores of 0.9073 and 0.9314, respectively, outperforming all comparison models.
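The two mechanisms named in the abstract can be illustrated with a minimal NumPy sketch: single-head self-attention lets every image token attend to all others (global context), and the Siamese structure applies the *same* encoder weights to both image epochs before differencing the features. This is not the authors' implementation; all dimensions and variable names are illustrative.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention over a set of tokens.

    x: (n_tokens, d) token embeddings. Every token attends to all
    others, which is how a Transformer encoder captures global context.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Numerically stable softmax over each row of attention scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
d = 8
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

# Siamese structure: the SAME weights encode both image epochs.
t1 = rng.standard_normal((16, d))   # 16 tokens from the time-1 image
t2 = rng.standard_normal((16, d))   # 16 tokens from the time-2 image
f1 = self_attention(t1, Wq, Wk, Wv)
f2 = self_attention(t2, Wq, Wk, Wv)

# Feature difference, which a decoder would turn into a change map.
change_features = np.abs(f1 - f2)
```

In the full model, the decoder upsamples these difference features back to pixel resolution, with skip connections carrying fine spatial detail from the CNN stages of the encoder.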
Authors
FENG Weiming; ZHANG Xinchang; SUN Ying; JIANG Ming; GAN Qiao; HOU Xingxing
(School of Geography and Remote Sensing, Guangzhou University, Guangzhou 510006, China; School of Geography and Planning, Sun Yat-sen University, Guangzhou 510275, China)
Source
Bulletin of Surveying and Mapping (《测绘通报》)
Indexed in CSCD and the Peking University Core Journals list
2022, No. 8, pp. 36-40, 92 (6 pages)
Funding
National Key Research and Development Program of China (2018YFB2100702)
National Natural Science Foundation of China, General Program (42071441)
Smart Guangzhou Spatio-temporal Information Cloud Platform key technologies: R&D and consulting service project for the spatio-temporal big data integration and adaptive dynamic update module (GZIT2016-A5-147)