
SEMANTIC SEGMENTATION OF ROAD SCENE BASED ON MULTI-SCALE FEATURE EXTRACTION
Abstract: Semantic segmentation of road scenes is an important component of autonomous driving systems. Road scenes contain complex environments and many object classes with large differences in size, and existing fully convolutional networks (FCNs) have insufficient feature extraction capability in this setting, which leads to low segmentation accuracy. To address this, we propose a multi-scale feature extraction network (MFNet). MFNet uses parallel feature extraction modules to extract invariant features at different scales, enhancing feature diversity, and then restores the features to the original image size through layer-by-layer deconvolution to obtain the segmentation result. We also design a hierarchical training method and optimize the loss function. The proposed method is evaluated on several public datasets and achieves good segmentation results. (A minimal code sketch of this pipeline is given after the keywords below.)
Authors: Shang Jiandong (商建东), Liu Yanqing (刘艳青), Gao Xu (高需) (Supercomputing Center, Zhengzhou University, Zhengzhou 450052, Henan, China; School of Information Engineering, Zhengzhou University, Zhengzhou 450000, Henan, China)
Source: Computer Applications and Software (《计算机应用与软件》, Peking University core journal), 2021, No. 11, pp. 174-178 (5 pages).
Funding: National Key Research and Development Program of China (2018YFB0505004-03); Zhengzhou University 2018 Research Start-up Fund (32210919).
Keywords: Semantic segmentation; Autonomous driving; Neural network; Feature extraction; Deconvolution
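The abstract above only outlines the MFNet pipeline: parallel feature extraction at several scales, fusion of the resulting features, and layer-by-layer deconvolution back to the input resolution. The sketch below is a minimal PyTorch illustration of that general idea, not the authors' implementation: the module names (MultiScaleBlock, ToySegNet), the dilated-convolution branches, dilation rates, channel widths, and the plain cross-entropy loss are all assumptions made for illustration, and the paper's hierarchical training method and optimized loss function are not reproduced here.

```python
# Minimal sketch of multi-scale feature extraction followed by stepwise
# deconvolution upsampling, in the spirit of the abstract above.
# NOT the authors' MFNet: branch layout, dilation rates, and channel
# widths are illustrative assumptions.
import torch
import torch.nn as nn


class MultiScaleBlock(nn.Module):
    """Parallel branches extract features at different receptive-field scales."""

    def __init__(self, in_ch, branch_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x):
        # Concatenate the parallel branch outputs to enrich feature diversity.
        return torch.cat([b(x) for b in self.branches], dim=1)


class ToySegNet(nn.Module):
    """Encoder downsamples 4x; two transposed convolutions then restore the
    original resolution layer by layer before the per-pixel classifier."""

    def __init__(self, num_classes, in_ch=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.multi_scale = MultiScaleBlock(64, 64, dilations=(1, 2, 4))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64 * 3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.decoder(self.multi_scale(self.encoder(x))))


if __name__ == "__main__":
    net = ToySegNet(num_classes=19)               # e.g. 19 road-scene classes
    images = torch.randn(2, 3, 128, 256)          # dummy road-scene batch
    labels = torch.randint(0, 19, (2, 128, 256))  # dummy per-pixel labels
    logits = net(images)                          # (2, 19, 128, 256), input size restored
    loss = nn.CrossEntropyLoss()(logits, labels)  # plain per-pixel loss as a stand-in
    loss.backward()
```

Parallel branches with different dilation rates are one common way to cover several receptive-field scales at the same spatial resolution; concatenating them preserves all scales for the decoder, and each transposed-convolution stage doubles the spatial size so the output logits match the input image.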
