Efficient dense matching method for image pairs with repeated texture and non-rigid deformation
Abstract

Objective  Dense matching between image pairs is the basis of 3D reconstruction, SLAM (simultaneous localization and mapping), and other advanced image processing tasks. However, excessively wide baselines, repeated texture, non-rigid deformation, and poor time and space efficiency largely limit the practicability of such methods. To address these problems, this study proposes an efficient dense matching method for image pairs with repeated texture and non-rigid deformation.

Method  First, the source and target images are downsampled by a factor α using bilinear interpolation, a set S of matching points is obtained with DeepMatching (DM), and outliers are removed by random sample consensus (RANSAC). Second, the matching set S is used to estimate the camera pose x and the scaling factor α, which determine the neighborhood of each point during densification. Third, the score matrix Sim is obtained by convolving the HOG (histogram of oriented gradients) descriptors extracted from the corresponding neighborhoods. The score matrix Sim, composed of the similarity scores between all points in the two neighborhoods, is the central concept of the method because it connects two major steps: selecting the appropriate convolution region and determining the new matching points. The size and position of the convolution area are decided by the scaling factor α and the camera pose x, respectively, so the chosen neighborhood remains stable under rotation and scaling. Finally, new matching points are determined from the values of the normalized score matrix Sim and the variance of the index distances to achieve densification; the relative coordinates of the maxima in each group of Sim are then mapped back to the absolute coordinates of the input images.
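The general pattern of this pipeline can be illustrated with a minimal sketch. The code below is not the authors' implementation: ORB seeding stands in for DeepMatching, a fixed square neighborhood stands in for the pose- and scale-guided neighborhood, and a simple acceptance threshold stands in for the index-distance variance test; all parameter values are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): seed matches are grown into
# denser correspondences by correlating HOG descriptors of neighbourhood patches.
# img1, img2 are expected to be grayscale uint8 images; border handling is minimal.
import cv2
import numpy as np

# One HOG descriptor object for 32x32 patches (window, block, stride, cell, bins).
HOG = cv2.HOGDescriptor((32, 32), (16, 16), (8, 8), (8, 8), 9)

def seed_matches(img1, img2):
    """Sparse seed matches with outliers removed by RANSAC (stand-in for DM + RANSAC)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    _, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 3.0, 0.99)
    keep = mask.ravel() == 1
    return p1[keep], p2[keep]

def hog_patch(img, x, y, size=32):
    """HOG descriptor of the square patch centred on (x, y)."""
    h = size // 2
    patch = img[max(int(y) - h, 0):int(y) + h, max(int(x) - h, 0):int(x) + h]
    return HOG.compute(cv2.resize(patch, (size, size))).ravel()

def densify(img1, img2, p, q, radius=8, step=2, accept=0.9):
    """Grow one seed pair (p, q): build the score matrix Sim between HOG
    descriptors sampled around p and around q, keep confident maxima."""
    offs = [(dx, dy) for dy in range(-radius, radius + 1, step)
                     for dx in range(-radius, radius + 1, step)]
    d1 = np.stack([hog_patch(img1, p[0] + dx, p[1] + dy) for dx, dy in offs])
    d2 = np.stack([hog_patch(img2, q[0] + dx, q[1] + dy) for dx, dy in offs])
    sim = d1 @ d2.T                      # score matrix Sim
    sim /= sim.max() + 1e-9              # normalise
    new = []
    for i, row in enumerate(sim):
        j = int(row.argmax())
        if row[j] >= accept:             # keep only unambiguous responses
            new.append(((p[0] + offs[i][0], p[1] + offs[i][1]),
                        (q[0] + offs[j][0], q[1] + offs[j][1])))
    return new
```

Running densify over every inlier seed pair returned by seed_matches and concatenating the results yields the densified match set.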
Result  The method is implemented in VS2013 with Intel MKL 2015 and OpenCV 3, and the experiments are run on a machine with a 3.8 GHz CPU and 8 GB of RAM using image pairs of equal size and a 4:3 aspect ratio from the Mikolajczyk, MPI-Sintel, and KITTI datasets. To evaluate the method comprehensively and objectively, multiple sets of images of different sizes are used to compare the running time, memory usage, and precision of the proposed method with those of DeepMatching. To illustrate the problems the method addresses, it is applied to image pairs with repeated texture and with non-rigid deformation: under repeated texture it handles not only rotation and scaling but also wide-baseline matching, and it also performs well under non-rigid deformation. The experiments show that the proposed algorithm outperforms DM in time and space efficiency, especially on large images. For a fair comparison of processing time, the experiment was performed on the KITTI dataset and the median of the results was used; with α set to 0.5, both the execution time and the memory usage were low while the match density per pixel remained similar to that of the original image (α = 1). For accuracy assessment, a pixel was considered correctly matched if its match in the second image lay within 8 pixels of the ground truth, allowing some tolerance in blurred areas that are difficult to match exactly. Because our method uses the camera pose to eliminate some outliers while determining the neighborhood centers, its accuracy is higher than that of DM for image sizes between 16 and 512; as the image size grows toward 512 to 1 024, the proportion of DM outliers shrinks with the increasing number of DM inliers, and the accuracy of DM and of our method becomes essentially the same. In summary, across the above datasets the precision of the proposed method is better than that of using DeepMatching directly (an average increase of about 10%), and as the image size increases, the savings in memory and running time relative to DM reach nearly 25% and 30%, respectively.
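For reference, the 8-pixel accuracy criterion described above amounts to the following computation; this is a minimal sketch, with the function name and array layout chosen for illustration only.

```python
import numpy as np

def precision_at_8px(pred_xy, gt_xy, thresh=8.0):
    """Fraction of predicted target-image positions lying within `thresh`
    pixels of the ground truth; pred_xy and gt_xy are (N, 2) arrays of
    corresponding point coordinates."""
    err = np.linalg.norm(np.asarray(pred_xy, float) - np.asarray(gt_xy, float), axis=1)
    return float((err < thresh).mean())
```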
Conclusion  To verify the effectiveness of the proposed method, its precision, memory usage, and running time are compared with those of DeepMatching on multiple public datasets: precision, space efficiency, and time efficiency improve by nearly 10%, 25%, and 30%, respectively. The adverse effect of wide baselines, repeated texture, and non-rigid deformation on the robustness and efficiency of the matching results is thus addressed, and rotation and scaling are handled within the method, which makes it broadly applicable. The results can be used in advanced image processing such as 3D reconstruction, super-resolution reconstruction, and SLAM, into which we plan to integrate the method for higher versatility and practicality.

Authors  Jia Di; Zhao Mingyuan; Yang Ninghua; Zhu Ningdan; Meng Lu (School of Electronic and Information Engineering, Liaoning Technical University, Huludao 125105, China; College of Information Science and Engineering, Northeast University, Shenyang 110819, China)
Source  Journal of Image and Graphics (《中国图象图形学报》, CSCD, Peking University Core Journal), 2019, No. 6, pp. 924-933 (10 pages)
Funding  National Natural Science Foundation of China (61601213); China Postdoctoral Science Foundation (2017M611252); Department of Education of Liaoning Province (LJYL017, LR2016045)
Keywords  dense matching; non-rigid; repeated texture; wide baseline; histogram of oriented gradients