
Lightweight Self-supervised Monocular Depth Estimation
Abstract  Currently, most augmented reality and autonomous driving applications use not only the depth information estimated by a depth network but also the pose information estimated by a pose network. Deploying both the pose network and the depth network on an embedded device consumes a great deal of memory. To address this problem, a method in which the depth network and the pose network share a feature extractor is proposed, which keeps the model at a lightweight size. In addition, the depth network is made lightweight with depthwise separable convolutions that use a linear structure, so the network has fewer parameters without losing too much detail. Finally, experiments on the KITTI dataset show that, compared with algorithms of the same type, the pose and depth networks together contain only 35.33 MB of parameters, while the mean absolute error of the recovered depth maps is kept at 0.129.
Authors  LIU Jia (刘佳), LIN Xiao (林潇), CHEN Da-Peng (陈大鹏), XU Chuang (徐闯), SHI Hao (石豪) (School of Automation, Nanjing University of Information Science & Technology, Nanjing 210044, China; Jiangsu Province Engineering Research Center of Intelligent Meteorological Exploration Robot, Nanjing 210044, China; Jiangsu Key Laboratory of Big Data Analysis Technology, Nanjing 210044, China; Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology, Nanjing 210044, China)
Source  Computer Systems & Applications (《计算机系统应用》), 2023, No. 8, pp. 116-125 (10 pages)
Funding  National Natural Science Foundation of China (61773219, 62003169); Key Project of Jiangsu Province for Industry Foresight and Key Technologies (BE2020006-2); Youth Fund of the Natural Science Foundation of Jiangsu Province (BK20200823)
Keywords  deep learning; monocular depth estimation; self-supervised learning; lightweight; computer vision
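
The abstract describes two design choices: a single feature extractor shared by the depth and pose branches, and depthwise separable convolutions whose pointwise projection is kept linear (no activation) to avoid discarding detail. The snippet below is a minimal PyTorch sketch of those two ideas only; the class names, channel widths, and layer counts are illustrative assumptions and do not reproduce the authors' architecture.

```python
import torch
import torch.nn as nn


class LinearDepthwiseSeparableBlock(nn.Module):
    """Depthwise separable convolution whose 1x1 pointwise projection stays
    linear (no activation), a sketch of the 'linear structure' in the abstract."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                      groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU6(inplace=True),
        )
        # Pointwise projection kept linear: BatchNorm only, no non-linearity.
        self.pointwise = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


class SharedEncoder(nn.Module):
    """Feature extractor reused by both the depth and pose branches,
    so its weights are stored only once on the device."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(32),
            nn.ReLU6(inplace=True),
        )
        self.stages = nn.Sequential(
            LinearDepthwiseSeparableBlock(32, 64, stride=2),
            LinearDepthwiseSeparableBlock(64, 128, stride=2),
            LinearDepthwiseSeparableBlock(128, 256, stride=2),
        )

    def forward(self, x):
        return self.stages(self.stem(x))


class DepthHead(nn.Module):
    """Predicts a dense inverse-depth (disparity) map from encoder features."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feats):
        return self.decode(feats)


class PoseHead(nn.Module):
    """Regresses a 6-DoF relative pose (axis-angle + translation) from the
    concatenated features of two adjacent frames."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch * 2, 256, 1)
        self.pose = nn.Conv2d(256, 6, 1)

    def forward(self, feat_t, feat_s):
        x = torch.relu(self.squeeze(torch.cat([feat_t, feat_s], dim=1)))
        return self.pose(x).mean(dim=[2, 3])  # [B, 6]


if __name__ == "__main__":
    encoder, depth_head, pose_head = SharedEncoder(), DepthHead(), PoseHead()
    frame_t = torch.randn(1, 3, 192, 640)  # target frame
    frame_s = torch.randn(1, 3, 192, 640)  # source (adjacent) frame
    feat_t, feat_s = encoder(frame_t), encoder(frame_s)  # one encoder, two uses
    disparity = depth_head(feat_t)         # [1, 1, 48, 160] at this resolution
    pose_6dof = pose_head(feat_t, feat_s)  # [1, 6]
    print(disparity.shape, pose_6dof.shape)
```

Because both heads consume the output of the same encoder, the extractor's weights are stored once, which is the memory saving the abstract targets; the paper's actual model would additionally include skip connections, multi-scale outputs, and the self-supervised photometric training loss, none of which are sketched here.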