Estimating rigid transformation using noisy correspondences is critical to feature-based point cloud registration. Recently, a series of studies have attempted to combine traditional robust model fitting with deep learning. Among them, DHVR proposed a Hough voting-based method, achieving new state-of-the-art performance. However, we find that voting on rotation and translation simultaneously hinders achieving better performance. Therefore, we propose a new Hough voting-based method that decouples the rotation and translation spaces. Specifically, we first utilize Hough voting and a neural network to estimate the rotation. Then, with a good initialization of the rotation, we can easily obtain an accurate rigid transformation. Extensive experiments on the 3DMatch and 3DLoMatch datasets show that our method achieves performance comparable to state-of-the-art methods. We further demonstrate the generalization of our method through experiments on the KITTI dataset.
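The two-stage decomposition described above lends itself to a compact illustration: under the rigid model q = Rp + t, once the rotation is fixed, every correspondence casts a direct "vote" for the translation. The sketch below shows that second stage in NumPy under stated assumptions; the function name, the robust median step, and the inlier threshold are illustrative choices, not the paper's released implementation.

```python
# Minimal sketch of the decoupled second stage: once a rotation estimate
# R_hat is available (e.g., from the Hough voting stage), the translation
# follows in closed form from the correspondences. All names here are
# illustrative assumptions, not the paper's code.
import numpy as np

def estimate_translation(src, dst, R_hat, inlier_thresh=0.1):
    """Recover translation given rotation, with simple outlier rejection.

    src, dst : (N, 3) arrays of corresponding points (src -> dst).
    R_hat    : (3, 3) rotation estimated in the first (voting) stage.
    """
    residuals = dst - src @ R_hat.T       # per-correspondence translation votes
    t0 = np.median(residuals, axis=0)     # robust initial translation estimate
    # Keep correspondences whose translation vote agrees with the median.
    inliers = np.linalg.norm(residuals - t0, axis=1) < inlier_thresh
    t_hat = residuals[inliers].mean(axis=0) if inliers.any() else t0
    return t_hat, inliers
```

With noisy correspondences, the median initialization keeps gross outliers from biasing the averaged translation, which is why a good rotation initialization makes the remaining estimation easy.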
Funding: supported by the National Natural Science Foundation of China (Grant No. 62076070) and the Science and Technology Innovation Action Plan of Shanghai (No. 23S41900400).