Abstract
Depth acquisition is an important step in scene parsing and is mainly achieved in two ways: sensor-based measurement and image processing. Sensor technology places demanding requirements on the environment, so image processing is the more general approach. Traditional methods recover depth through binocular stereo calibration and geometric relations, but they remain constrained by many environmental factors. Monocular depth estimation, as the approach closest to real-world conditions, therefore has great research value. To this end, a DenseNet-based method for monocular image depth estimation is proposed. The method uses a multi-scale convolutional neural network to extract global and local features separately, and incorporates a DenseNet structure whose strong feature propagation and feature reuse optimize the feature extraction process. Experiments on the NYU Depth V2 dataset demonstrate the effectiveness of the model: the mean relative error of the predictions is 0.119, the root mean squared error is 0.547, and the mean log10 error is 0.052.
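For reference, the three reported error measures are written out below in their standard form for depth estimation on NYU Depth V2; it is an assumption that the paper follows this common convention, with d_i the predicted depth, d_i^* the ground-truth depth, and N the number of valid pixels:

\mathrm{rel} = \frac{1}{N}\sum_{i=1}^{N} \frac{\lvert d_i - d_i^{*}\rvert}{d_i^{*}}

\mathrm{rms} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(d_i - d_i^{*}\right)^{2}}

\mathrm{log_{10}} = \frac{1}{N}\sum_{i=1}^{N} \left\lvert \log_{10} d_i - \log_{10} d_i^{*} \right\rvert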
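The feature propagation and feature reuse mentioned in the abstract refer to the dense connectivity of DenseNet, in which each layer receives the concatenation of all preceding feature maps. The PyTorch-style sketch below is only a minimal illustration of that pattern; the class names, growth rate, and layer count are assumptions for illustration, not the authors' actual network configuration.

import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    # One BN-ReLU-Conv unit; its output is concatenated with its input,
    # which is the "feature reuse" the abstract refers to.
    def __init__(self, in_channels, growth_rate=32):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        new_features = self.conv(torch.relu(self.bn(x)))
        return torch.cat([x, new_features], dim=1)

class DenseBlock(nn.Module):
    # Stack of dense layers; the channel count grows by growth_rate per layer,
    # so every later layer sees the features of every earlier layer.
    def __init__(self, in_channels, num_layers=4, growth_rate=32):
        super().__init__()
        layers, channels = [], in_channels
        for _ in range(num_layers):
            layers.append(DenseLayer(channels, growth_rate))
            channels += growth_rate
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

# Example: 64 input channels and 4 dense layers give 64 + 4*32 = 192 output channels.
# features = DenseBlock(64)(torch.randn(1, 64, 60, 80))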
Authors
He Tongneng; You Jiageng; Chen Defu (College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China)
Source
Computer Measurement & Control (《计算机测量与控制》)
2019, No. 2, pp. 233-236 (4 pages)