Abstract
In driverless technology, semantic segmentation of road scenes is an essential environmental perception task. Traditional deep-learning-based methods require large numbers of pixel-level annotated samples, which limits their applicability. This paper proposes a road-scene semantic segmentation method based on a cycle-consistent generative adversarial network, which performs semantic segmentation without paired training data and thus lowers the dataset requirements. An L2 norm and a least-squares loss are used to mitigate the mode collapse that arises during training, improving both the stability of the training process and the quality of the segmented images. To validate the effectiveness of the proposed method, experiments were conducted on commonly used road-scene datasets; the results show a clear improvement in segmentation accuracy.
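The least-squares adversarial loss and L2 cycle-consistency term named in the abstract can be sketched as follows. This is a minimal illustration of the general LSGAN and CycleGAN loss formulations, not the paper's implementation; the function names and the λ weight are hypothetical, and random arrays stand in for network outputs.

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    # Least-squares discriminator loss (LSGAN): push discriminator
    # outputs on real samples toward 1 and on fakes toward 0,
    # replacing the usual log loss to stabilize training.
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    # Least-squares generator loss: push discriminator outputs on
    # generated samples toward 1.
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

def cycle_l2_loss(x, x_reconstructed, lam=10.0):
    # L2-norm cycle-consistency term: an image translated to the
    # other domain and back should reconstruct the original.
    # lam is a hypothetical weighting factor.
    return lam * np.mean((x - x_reconstructed) ** 2)

# Toy smoke test with random arrays standing in for network outputs.
rng = np.random.default_rng(0)
d_real = rng.random((4, 1))          # discriminator scores on real images
d_fake = rng.random((4, 1))          # discriminator scores on fakes
x = rng.random((4, 3, 8, 8))         # toy batch of road-scene images
x_rec = rng.random((4, 3, 8, 8))     # round-trip reconstruction F(G(x))

total_g = lsgan_g_loss(d_fake) + cycle_l2_loss(x, x_rec)
print(total_g >= 0.0)  # both terms are non-negative by construction
```

In a full CycleGAN-style setup these losses would be summed over both translation directions and minimized alternately for the generators and discriminators.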
Authors
LI Zhi, ZHANG Juan, FANG Zhijun, HUANG Bo, JIANG Xiaoyan, HWANG Jenq-Neng
School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA
Source
Journal of Wuhan University (Natural Science Edition)
Indexed in CAS, CSCD, and the Peking University Core Journals list
2019, No. 3, pp. 303-308 (6 pages)
Funding
National Natural Science Foundation of China (61702322, 61772328, 61801288)
Keywords
driverless
semantic segmentation of road scenes
deep learning
cycle-consistent adversarial network