
Research on 3D reconstruction based on improved binocular vision algorithm
Abstract: To address the poor robustness of existing stereo matching algorithms in weakly textured regions and their large parameter counts, the PSMNet stereo matching method is improved. An atrous spatial pyramid pooling (ASPP) structure extracts spatial feature information from the image at multiple scales, and a channel attention mechanism is then introduced to assign corresponding weights to the features at each scale. The fused features are used to construct a matching cost volume, which is regularized by an hourglass-shaped encoder-decoder network to determine the correspondence between feature points at each candidate disparity; finally, a regression over the candidate disparities yields the disparity map. Compared with PSMNet, the proposed method reduces the error rate on the SceneFlow and KITTI 2015 datasets by 14.6% and 11.1% respectively, and lowers computational complexity by 55%. Compared with traditional algorithms, it improves disparity-map accuracy and enhances the quality of the 3D reconstructed point cloud data.
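The final regression step described in the abstract can be illustrated with a minimal sketch. The snippet below shows the soft-argmin style disparity regression commonly used in PSMNet-family networks: a softmax over negated matching costs turns the cost volume into per-pixel probabilities, and the expected value over candidate disparities gives a sub-pixel disparity map. This is a NumPy illustration under assumed shapes and naming, not the authors' implementation.

```python
import numpy as np

def soft_argmin_disparity(cost_volume):
    """Disparity regression over a matching cost volume.

    cost_volume: array of shape (D, H, W) holding the matching cost
    for each candidate disparity d = 0..D-1 at every pixel.
    Returns an (H, W) sub-pixel disparity map.
    """
    d_max = cost_volume.shape[0]
    # Lower cost should mean higher probability, so negate the costs.
    logits = -np.asarray(cost_volume, dtype=np.float64)
    # Subtract the per-pixel max before exponentiating (numerical stability).
    logits -= logits.max(axis=0, keepdims=True)
    prob = np.exp(logits)
    prob /= prob.sum(axis=0, keepdims=True)
    # Expected disparity: weighted sum of candidate disparities.
    disparities = np.arange(d_max, dtype=np.float64).reshape(-1, 1, 1)
    return (prob * disparities).sum(axis=0)
```

Because the result is an expectation rather than a hard argmin, the output is differentiable and can take sub-pixel values, which is what makes this step trainable end-to-end.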
Authors: 邹家豪 (ZOU Jiahao); 赵燕东 (ZHAO Yandong) — School of Technology, Beijing Forestry University, Beijing 100083, China
Source: Journal of Optoelectronics·Laser (《光电子·激光》), CAS / CSCD / PKU core journal, 2024, No. 7, pp. 699-707 (9 pages)
Funding: China Postdoctoral Science Foundation (2022T150055); Beijing municipal co-construction project.
Keywords: binocular vision; stereo matching; point cloud; 3D reconstruction
