Fund: The National Natural Science Foundation of China (No. 51375087, 51405203); the Transformation Program of Science and Technology Achievements of Jiangsu Province (No. BA2016139).
Abstract: Aiming at the problem of the low accuracy of low-dynamic vehicle velocity in environments with unevenly distributed light intensity, an improved adaptive Kalman filter method for velocity error estimation by the fusion of optical flow tracking and the scale-invariant feature transform (SIFT) is proposed. The algorithm introduces a nonlinear fuzzy membership function and the filter residual into the adaptive adjustment of the noise covariance matrix. In calculating the vehicle velocity, the tracking and matching of the inter-frame displacement and the vehicle velocity calculation are carried out using the optical flow tracking and SIFT methods, respectively. Meanwhile, the velocity difference between the outputs of these two methods is used as the observation of the improved adaptive Kalman filter. Finally, the velocity calculated by the optical flow method is corrected using the velocity error estimate output by the modified adaptive Kalman filter. The results of semi-physical experiments show that the maximum velocity error of the fusion algorithm is 29% lower than that of the optical flow method, and the computation time is reduced by 80% compared with the SIFT method.
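The following is a minimal sketch of the fusion idea described above, not the authors' implementation: the difference between the optical-flow velocity and the SIFT velocity drives a one-dimensional Kalman filter whose measurement-noise covariance is inflated by a fuzzy membership function of the normalized innovation, and the resulting error estimate corrects the optical-flow velocity. All tuning constants and the membership shape are illustrative assumptions.

```python
import numpy as np

class AdaptiveVelocityErrorKF:
    """1-D adaptive Kalman filter for the velocity error (illustrative sketch)."""

    def __init__(self, q=1e-4, r0=1e-2):
        self.x = 0.0    # estimated velocity error of the optical-flow channel
        self.p = 1.0    # error covariance
        self.q = q      # process-noise variance (assumed value)
        self.r0 = r0    # nominal measurement-noise variance (assumed value)

    @staticmethod
    def _fuzzy_scale(nu, p, r):
        # Nonlinear fuzzy membership of the normalized innovation: a residual
        # that is large relative to its predicted variance pushes the scale
        # factor above 1 and de-weights the measurement (assumed shape).
        s = abs(nu) / np.sqrt(p + r)
        return 1.0 + 4.0 / (1.0 + np.exp(-(s - 2.0)))

    def update(self, v_flow, v_sift):
        z = v_flow - v_sift        # observation: velocity difference
        self.p += self.q           # predict with a random-walk error model
        nu = z - self.x            # innovation (filter residual)
        r = self.r0 * self._fuzzy_scale(nu, self.p, self.r0)
        k = self.p / (self.p + r)  # Kalman gain with adaptively scaled R
        self.x += k * nu
        self.p *= 1.0 - k
        return v_flow - self.x     # corrected optical-flow velocity
```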
Fund: This work was supported by the National Natural Science Foundation of China under Grant No. 61304205 and No. 61502240; the Natural Science Foundation of Jiangsu Province under Grant No. BK20191401 and No. BK20201136; and the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant No. SJCX21_0364 and No. SJCX21_0363.
Abstract: ORB-SLAM2, which is based on the constant velocity model, has difficulty determining the search window for the reprojection of map points when objects move with variable velocity, which leads to false matching, inaccurate pose estimation, or failed tracking. To address this challenge, a new feature point matching method is proposed in this paper that combines a variable velocity model with the reverse optical flow method. First, the constant velocity model is extended to a new variable velocity model, and the expanded variable velocity model is used to provide the initial pixel shift for the reverse optical flow method. Then the search range of feature points is accurately determined according to the results of the reverse optical flow method, thereby improving the accuracy and reliability of feature matching and strengthening inter-frame tracking. Finally, we tested on the TUM dataset with an RGB-D camera. Experimental results show that this method can reduce the probability of tracking failure and improve localization accuracy in SLAM (Simultaneous Localization and Mapping) systems. Compared with traditional ORB-SLAM2, the test error of this method on each sequence of the TUM dataset is significantly reduced, and the root mean square error is only 63.8% of that of the original system under the optimal condition.
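A minimal sketch of the matching strategy under stated assumptions (it is not the paper's ORB-SLAM2 code): a constant-acceleration extrapolation stands in for the variable velocity model, its predicted pixel shift seeds pyramidal LK optical flow through OpenCV's OPTFLOW_USE_INITIAL_FLOW flag, and each tracked point then defines a tight search window for ORB descriptor matching. The window radius and model constants are assumptions.

```python
import cv2
import numpy as np

def predict_shift(v_prev, v_prev2, dt=1.0):
    # Variable velocity model: extrapolate the pixel shift with a velocity
    # term plus a finite-difference acceleration term (assumed form).
    a = (v_prev - v_prev2) / dt
    return v_prev * dt + 0.5 * a * dt * dt

def seeded_search_windows(img_prev, img_cur, pts_prev, shift, radius=15):
    pts_prev = pts_prev.astype(np.float32).reshape(-1, 1, 2)
    # Seed pyramidal LK with the model prediction via OPTFLOW_USE_INITIAL_FLOW.
    guess = pts_prev + np.float32(shift).reshape(1, 1, 2)
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(
        img_prev, img_cur, pts_prev, guess.copy(),
        winSize=(21, 21), maxLevel=3,
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
    # Each successfully tracked point defines the centre of a small window in
    # which ORB descriptors of the current frame are searched for a match.
    return [(tuple(p.ravel()), radius)
            for p, ok in zip(pts_cur, status.ravel()) if ok]
```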
Fund: Supported by the Key Research Program of the Chinese Academy of Sciences (ZDRE-KT-2021-3).
Abstract: Augmented solar images were used to study the adaptability of four representative image feature extraction and matching algorithms in the space weather domain: the scale-invariant feature transform algorithm, the speeded-up robust features algorithm, the binary robust invariant scalable keypoints algorithm, and the oriented FAST and rotated BRIEF algorithm. The performance of these algorithms was evaluated in terms of matching accuracy, feature point richness, and running time. The experimental results showed that no algorithm achieved high accuracy while keeping a low running time, and that none of them was suitable for feature extraction and matching of augmented solar images. To solve this problem, an improved method was proposed that uses two-frame matching to exploit the accuracy advantage of the scale-invariant feature transform algorithm and the speed advantage of the oriented FAST and rotated BRIEF algorithm. Furthermore, our method and the four representative algorithms were applied to augmented solar images. The application experiments proved that our method achieved a recognition rate similar to that of the scale-invariant feature transform algorithm, which is significantly higher than those of the other algorithms, and a running time similar to that of the oriented FAST and rotated BRIEF algorithm, which is significantly lower than those of the other algorithms.
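One plausible reading of the two-frame matching scheme, sketched below under assumptions rather than taken from the paper: the scale-invariant feature transform anchors accurate correspondences on selected frame pairs, while the oriented FAST and rotated BRIEF detector handles the remaining frames for speed. Thresholds such as the Lowe ratio are assumed values.

```python
import cv2

sift = cv2.SIFT_create()
orb = cv2.ORB_create(nfeatures=2000)
bf_l2 = cv2.BFMatcher(cv2.NORM_L2)
bf_ham = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_accurate(img_a, img_b, ratio=0.7):
    # Accurate anchor step: SIFT keypoints with a Lowe ratio test.
    kpa, da = sift.detectAndCompute(img_a, None)
    kpb, db = sift.detectAndCompute(img_b, None)
    good = [m for m, n in bf_l2.knnMatch(da, db, k=2)
            if m.distance < ratio * n.distance]
    return kpa, kpb, good

def match_fast(img_a, img_b):
    # Fast step for the remaining frames: ORB with Hamming cross-check matching.
    kpa, da = orb.detectAndCompute(img_a, None)
    kpb, db = orb.detectAndCompute(img_b, None)
    return kpa, kpb, bf_ham.match(da, db)
```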
Abstract: The task of indoor visual localization, which uses camera visual information for user pose calculation, is a core component of Augmented Reality (AR) and Simultaneous Localization and Mapping (SLAM). Existing indoor localization technologies generally used scene-specific 3D representations or were trained on specific datasets, making it challenging to balance accuracy and cost when applied to new scenes. Addressing this issue, this paper proposed a universal indoor visual localization method based on efficient image retrieval. Initially, a Multi-Layer Perceptron (MLP) was employed to aggregate features from intermediate layers of a convolutional neural network, obtaining a global representation of the image. This approach ensured accurate and rapid retrieval of reference images. Subsequently, a new mechanism using Random Sample Consensus (RANSAC) was designed to resolve the relative pose ambiguity caused by the essential matrix decomposition based on the five-point method. Finally, the absolute pose of the queried user image was computed, thereby achieving indoor user pose estimation. The proposed indoor localization method was characterized by its simplicity, flexibility, and excellent cross-scene generalization. Experimental results demonstrated a positioning error of 0.09 m and 2.14° on the 7Scenes dataset, and 0.15 m and 6.37° on the 12Scenes dataset. These results convincingly illustrated the outstanding performance of the proposed indoor localization method.
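The relative-pose step can be illustrated with the standard OpenCV five-point plus RANSAC pipeline, sketched below; the paper's own RANSAC-based disambiguation mechanism may differ, and the intrinsic matrix K shown is an assumed placeholder.

```python
import cv2
import numpy as np

def relative_pose(pts_query, pts_ref, K):
    # Five-point essential-matrix estimation inside a RANSAC loop.
    E, inliers = cv2.findEssentialMat(pts_query, pts_ref, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # recoverPose tests the four (R, t) candidates from the decomposition and
    # keeps the one that places the triangulated inliers in front of both
    # cameras, resolving the ambiguity.
    _, R, t, _ = cv2.recoverPose(E, pts_query, pts_ref, K, mask=inliers)
    return R, t  # t is recovered only up to scale

# Assumed pinhole intrinsics: fx = fy = 525, principal point at (320, 240).
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
```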
Fund: This work is supported by the National Natural Science Foundation of China (Grant No. 61672279); the Project of "Six Talents Peak" in Jiangsu (2012-WLW-023); and the Open Foundation of the State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Nanjing Hydraulic Research Institute, China (2016491411).
Abstract: Simultaneous localization and mapping (SLAM) plays a crucial role in VR/AR applications, autonomous robot navigation, UAV remote control, etc. Traditional SLAM does not handle well the data acquired by a camera undergoing fast movement or severe jittering, and its efficiency needs to be improved. This paper proposes an improved SLAM algorithm that mainly improves the real-time performance of the classical SLAM algorithm: it applies a KD-tree to organize feature points efficiently and accelerates the building of feature point correspondences. Moreover, the background map reconstruction thread is optimized, increasing the parallel computation ability of SLAM. Experiments on color images demonstrate that the improved SLAM algorithm achieves better real-time performance than the classical SLAM.
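A minimal sketch of the KD-tree acceleration, assuming SIFT descriptors and OpenCV's FLANN matcher rather than the authors' exact implementation: descriptors of the previous frame are indexed with a KD-tree so that correspondence search for the current frame avoids brute-force comparison. The tree count, check count, and ratio threshold are assumed defaults.

```python
import cv2

FLANN_INDEX_KDTREE = 1  # FLANN's randomized KD-tree index
flann = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=4),
                              dict(checks=50))
sift = cv2.SIFT_create()

def correspondences(img_prev, img_cur, ratio=0.7):
    kp1, d1 = sift.detectAndCompute(img_prev, None)
    kp2, d2 = sift.detectAndCompute(img_cur, None)
    # KD-tree backed nearest-neighbour search instead of brute force.
    matches = flann.knnMatch(d1, d2, k=2)
    # Lowe ratio test keeps only distinctive correspondences.
    return [(kp1[m.queryIdx], kp2[m.trainIdx])
            for m, n in matches if m.distance < ratio * n.distance]
```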
Fund: Supported by the National Natural Science Foundation of China (Grant No. 51775313); the Major Program of Shandong Province Natural Science Foundation (Grant No. ZR2018ZC1760); and the Young Scholars Program of Shandong University (Grant No. 2017WLJH24).
Abstract: Current research on binocular vision systems generally needs to resolve the camera's intrinsic parameters before reconstructing three-dimensional (3D) objects. The classical Zhang's calibration can hardly account for all the errors caused by perspective distortion and lens distortion. In addition, the image-matching algorithm of the binocular vision system still needs to be improved to accelerate the reconstruction of welding pool surfaces. In this paper, a preset coordinate system was utilized for camera calibration instead of Zhang's calibration. The binocular vision system was modified to capture images of welding pool surfaces by suppressing the strong arc interference during gas metal arc welding. By combining and improving the speeded-up robust features, binary robust invariant scalable keypoints, and KAZE algorithms, the feature information of points (i.e., RGB values and pixel coordinates) was extracted as the feature vector of the welding pool surface. Based on the characteristics of the welding images, a mismatch-elimination algorithm was developed to increase the accuracy of image matching. The world coordinates of the matched feature points were then calculated to reconstruct the 3D shape of the welding pool surface. The effectiveness and accuracy of the reconstruction of welding pool surfaces were verified by experimental results. This research develops binocular vision algorithms that can accurately reconstruct the surface of welding pools to enable intelligent welding control systems in the future.
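The final reconstruction step can be sketched as a linear triangulation of matched left/right pixel coordinates, assuming the two projection matrices are available from the preset-coordinate-system calibration (which is not reproduced here); the matrices and points in the example are placeholders, not values from the paper.

```python
import cv2
import numpy as np

def triangulate(P_left, P_right, pts_left, pts_right):
    # pts_left / pts_right are 2xN arrays of matched pixel coordinates.
    X_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
    return (X_h[:3] / X_h[3]).T  # Nx3 world coordinates of the pool surface

# Placeholder 3x4 projection matrices (identity intrinsics, 0.1 m baseline);
# in practice they come from the preset-coordinate-system calibration.
P_left = np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
```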