Abstract
Road scene semantic segmentation is a crucial task in environment perception for autonomous driving. In recent years, Transformer neural networks have been applied to computer vision and have achieved excellent results. To address the low segmentation accuracy on complex scene images and the weak recognition of small objects, this paper proposes a road scene semantic segmentation algorithm based on the shifted-window (Swin) Transformer with multiscale feature fusion. The network adopts an encoder-decoder structure: the encoder uses an improved Swin Transformer feature extractor to extract features from road scene images, while the decoder consists of an attention fusion module and a feature pyramid network, which together fully fuse multiscale semantic features. Validation on the Cityscapes urban road scene dataset shows that, compared with a range of existing semantic segmentation algorithms, the proposed method achieves a substantial improvement in segmentation accuracy.
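The abstract's decoder combines an attention fusion module with a feature pyramid network to fuse multiscale features. As a rough illustration of the feature-pyramid part only, the following is a minimal pure-Python sketch of top-down pyramid fusion (upsample the coarser map, then add it element-wise into the finer map); the function names and the plain nearest-neighbour upsampling are illustrative assumptions, not the paper's actual implementation, which also involves attention weighting.

```python
# Hypothetical sketch of FPN-style top-down multiscale fusion.
# Function names and nearest-neighbour upsampling are illustrative
# assumptions; the paper's decoder additionally uses attention fusion.

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a 2D feature map (list of lists)."""
    out = []
    for row in feat:
        wide = [v for v in row for _ in range(2)]  # repeat each column
        out.append(wide)
        out.append(list(wide))                     # repeat each row
    return out

def fuse_pyramid(features):
    """Top-down fusion: start from the coarsest map, repeatedly upsample
    and add element-wise into the next finer map.
    `features` is ordered fine-to-coarse; returns the fused finest map."""
    fused = features[-1]
    for finer in reversed(features[:-1]):
        up = upsample2x(fused)
        fused = [[a + b for a, b in zip(fr, ur)]
                 for fr, ur in zip(finer, up)]
    return fused

# Toy pyramid: 4x4 (fine), 2x2, 1x1 (coarse) constant maps.
f1 = [[1] * 4 for _ in range(4)]
f2 = [[2] * 2 for _ in range(2)]
f3 = [[3]]
print(fuse_pyramid([f1, f2, f3])[0][0])  # 1 + 2 + 3 = 6
```

In a real network the element-wise addition would operate on multi-channel tensors after 1x1 convolutions align channel counts, and learned (e.g. bilinear or transposed-convolution) upsampling would replace the nearest-neighbour repeat used here.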
Authors
Hang Hao; Huang Yingping; Zhang Xurui; Luo Xin (School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China)
Source
Opto-Electronic Engineering (《光电工程》), 2024, No. 1, pp. 100-112 (13 pages)
Indexed in: CAS; CSCD; Peking University Core (北大核心)
Funding
Supported by the National Natural Science Foundation of China (Grant No. 62276167).