
Monocular Depth Estimation Based on Multi-Scale Feature Fusion (Cited by: 15)

Monocular depth estimation with multi-scale feature fusion
Abstract: To address the difficulty of recovering accurate and effective depth information from monocular images, a monocular depth estimation algorithm based on multi-scale feature fusion is proposed. The algorithm adopts an end-to-end trained convolutional neural network (CNN) and introduces skip connections from the image encoder to the decoder to extract and express features at different scales. A multi-scale loss function is designed to improve the training of the network. Through training, validation, and testing on the NYU Depth V2 indoor depth dataset and the KITTI outdoor depth dataset, experimental results show that the proposed multi-scale feature fusion method produces depth maps with clear, well-defined edges and distinct depth layers, works in both indoor and outdoor scenes, generalizes well, and can meet the demands of a variety of real-world scenarios.
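The record does not give the exact form of the paper's multi-scale loss. As an illustrative sketch only, one common way to build such a loss is to compare predicted and ground-truth depth maps at several resolutions and average the per-scale errors. The function names, the use of average pooling for downsampling, and the choice of L1 error below are assumptions, not the paper's implementation:

```python
import numpy as np

def downsample(depth, factor):
    """Average-pool a 2-D depth map by an integer factor (dims must divide evenly)."""
    h, w = depth.shape
    return depth.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def multi_scale_l1_loss(pred, target, scales=(1, 2, 4)):
    """Average of L1 depth errors computed at several resolutions.

    Coarse scales emphasize global depth layout; the full-resolution
    term preserves sharp edges -- the kind of trade-off a multi-scale
    loss is meant to balance.
    """
    losses = []
    for s in scales:
        p = downsample(pred, s)
        t = downsample(target, s)
        losses.append(np.abs(p - t).mean())
    return float(np.mean(losses))

# Toy example: constant 8x8 predicted vs. ground-truth depth maps
pred = np.ones((8, 8))
target = np.zeros((8, 8))
print(multi_scale_l1_loss(pred, target))  # error is 1.0 at every scale -> 1.0
```

In a real training loop this would be written with the framework's differentiable pooling and loss ops so gradients flow to the network; the numpy version above only shows the structure of the computation.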
Authors: WANG Quande; ZHANG Songtao (School of Electrical Information, Wuhan University, Wuhan 430072, China)
Source: Journal of Huazhong University of Science and Technology (Natural Science Edition), 2020, Issue 5, pp. 7-12 (6 pages); indexed in EI, CAS, CSCD, Peking University Core.
Funding: Young Scientists Fund of the National Natural Science Foundation of China (61701351).
Keywords: computer vision; deep learning; convolutional neural network; monocular depth estimation; multi-scale feature fusion

References: 3

Secondary references: 37

1. Felzenszwalb P F, Huttenlocher D P. Efficient belief propagation for early vision[J]. Int'l J Computer Vision, 2006, 70(1): 41-54.
2. Wang Z F, Zheng Z G. A region based stereo matching algorithm using cooperative optimization[C]// Proc IEEE CS Conf Computer Vision and Pattern Recognition. Anchorage: IEEE, 2008: 1-8.
3. Tappen M F, Freeman W T. Comparison of graph cuts with belief propagation for stereo[C]// Proc IEEE Int'l Conf Computer Vision. Nice: IEEE, 2003: 900-906.
4. Kolmogorov V, Zabih R. Graph cut algorithms for binocular stereo with occlusions[M]// Mathematical Models in Computer Vision: The Handbook. New York: Springer-Verlag, 2005.
5. Birchfield S, Tomasi C. A pixel dissimilarity measure that is insensitive to image sampling[J]. IEEE Trans Pattern Analysis and Machine Intelligence, 1998, 20(4): 401-406.
6. Middlebury stereo vision research community. Latest performance evaluation of all submitted algorithms[EB/OL]. [2009-03-21]. http://vision.middlebury.edu/stereo/eval.
7. Zhu Qingbo, Wang Hongyuan, Tian Wen. A practical new approach to 3D scene recovery[J]. Signal Processing, 2009, 89(11): 2152-2158.
8. Scharstein D, Szeliski R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms[J]. Int'l J Computer Vision, 2002, 47(1-3): 7-42.
9. Yoon K J, Kweon I S. Adaptive support-weight approach for correspondence search[J]. IEEE Trans Pattern Analysis and Machine Intelligence, 2006, 28(4): 650-656.
10. Gerrits M, Bekaert P. Local stereo matching with segmentation-based outlier rejection[C]// Proc IEEE 3rd Canadian Conf Computer and Robot Vision. Quebec: IEEE, 2006: 1-7.

Co-citing literature: 9

Co-cited literature: 78

Citing articles: 15

Secondary citing articles: 24
