Abstract
To improve the accuracy of deep-learning methods based on the multi-view stereo network (MVSNet) in weakly textured scenes, a 3D reconstruction method based on multi-scale spatial feature fusion is proposed. First, a pyramid feature extraction network (PFNet) is used to learn feature representations of the image sequence, capturing the relationships between feature maps at different scales so that the network can better exploit image context. Then, a UGRU recurrent convolutional network is used to regularize the 3D cost volume; by combining gated recurrent unit (GRU) networks with a U-Net architecture, it effectively aggregates information across scales and improves reconstruction accuracy in weakly textured regions. The method is validated on the DTU dataset, and experimental results show a substantial improvement in accuracy over the state-of-the-art method R-MVSNet, especially in weakly textured regions.
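As a rough illustration of the recurrent cost-volume regularization the abstract describes (a GRU swept sequentially over the depth planes of the cost volume, in the style of R-MVSNet), the following is a minimal NumPy sketch. The function names, the per-pixel (non-convolutional) gates, and the tensor sizes are hypothetical simplifications for clarity, not the paper's actual UGRU/U-Net implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: h is the hidden state, x the input for this depth plane."""
    z = sigmoid(x @ Wz + h @ Uz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)   # candidate state
    return (1 - z) * h + z * h_tilde

def regularize_cost_volume(cost, params):
    """Sweep a GRU along the depth axis of a (D, N, C) cost volume
    (D depth planes, N pixels, C feature channels), emitting a
    regularized volume in which each plane has seen all shallower planes."""
    D, N, C = cost.shape
    H = params["Uz"].shape[0]
    h = np.zeros((N, H))
    out = np.empty((D, N, H))
    for d in range(D):
        h = gru_step(h, cost[d], params["Wz"], params["Uz"],
                     params["Wr"], params["Ur"], params["Wh"], params["Uh"])
        out[d] = h
    return out

rng = np.random.default_rng(0)
C, H = 8, 8  # input and hidden channel counts (hypothetical)
params = {k: rng.standard_normal((C if k.startswith("W") else H, H)) * 0.1
          for k in ["Wz", "Uz", "Wr", "Ur", "Wh", "Uh"]}
cost = rng.standard_normal((4, 16, C))  # D=4 depth planes, N=16 pixels
reg = regularize_cost_volume(cost, params)
print(reg.shape)  # (4, 16, 8)
```

In the full method, the matrix multiplications would be 2D convolutions (a ConvGRU) and the recurrent unit would be embedded in a multi-scale U-Net, which is what lets the network aggregate context over weakly textured regions.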
Source
Industrial Control Computer (《工业控制计算机》), 2021, No. 11, pp. 86-87 and 90 (3 pages)
Funding
Supported by the Shanghai Science and Technology Committee Hong Kong, Macao and Taiwan Science and Technology Cooperation Project (18510760300) and the China Postdoctoral Science Foundation (2020M681264).