Although VSLAM/VISLAM has achieved great success, it is still difficult to quantitatively evaluate the localization results of different kinds of SLAM systems from the perspective of augmented reality due to the lack of an appropriate benchmark. In practical AR applications, a variety of challenging situations (e.g., fast motion, strong rotation, serious motion blur, dynamic interference) can easily arise, since a home user may not move the AR device carefully and the real environment may be quite complex. In addition, for a good AR experience, the frequency of camera tracking loss should be minimized and the recovery from failure should be fast and accurate. Existing SLAM datasets/benchmarks generally only evaluate pose accuracy, and their camera motions are relatively simple and do not fit the common cases in mobile AR applications well. With the above motivation, we build a new visual-inertial dataset as well as a series of evaluation criteria for AR. We also review the existing monocular VSLAM/VISLAM approaches with detailed analyses and comparisons. In particular, we select 8 representative monocular VSLAM/VISLAM approaches/systems and quantitatively evaluate them on our benchmark. Our dataset, sample code, and corresponding evaluation tools are available at the benchmark website http://www.zjucvg.net/eval-vislam/.
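As a sketch of the kind of pose-accuracy evaluation such a benchmark performs, the snippet below computes absolute trajectory error (ATE) after a similarity alignment. The use of Umeyama alignment, the toy trajectories, and all parameter values here are illustrative assumptions, not the benchmark's released evaluation tools.

```python
import numpy as np

def umeyama_alignment(est, gt):
    """Least-squares s, R, t minimizing ||gt - (s*R@est + t)|| (Umeyama, 1991)."""
    mu_e, mu_g = est.mean(axis=1, keepdims=True), gt.mean(axis=1, keepdims=True)
    e_c, g_c = est - mu_e, gt - mu_g
    U, D, Vt = np.linalg.svd(g_c @ e_c.T / est.shape[1])
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                       # guard against a reflection
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / e_c.var(axis=1).sum()
    t = mu_g - s * R @ mu_e
    return s, R, t

def ate_rmse(est, gt):
    s, R, t = umeyama_alignment(est, gt)
    err = gt - (s * R @ est + t)             # per-timestamp residuals
    return np.sqrt((err ** 2).sum(axis=0).mean())

rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(0, 0.05, (3, 200)), axis=1)      # toy ground truth
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([[0.1], [-0.2], [0.3]])
est = 0.5 * R_true.T @ (gt - t_true) + rng.normal(0, 0.01, (3, 200))
print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```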
Feature detection and tracking, which heavily rely on the gray-value information of images, are very important procedures for Visual-Inertial Odometry (VIO), and the tracking results significantly affect the accuracy of the estimation and the robustness of VIO. In environments with high-contrast lighting, images captured by an auto-exposure camera change frequently with the exposure time. As a result, the gray value of the same feature varies from frame to frame, which poses a large challenge to the feature detection and tracking procedure. Moreover, this problem is further aggravated by the nonlinear camera response function and lens attenuation. However, very few VIO methods take full advantage of photometric camera calibration or discuss its influence on VIO. In this paper, we propose a robust monocular visual-inertial odometry, PC-VINS-Mono, which can be understood as an extension of the open-source VIO pipeline VINS-Mono with the capability of photometric calibration. We evaluate the proposed algorithm on a public dataset. Experimental results show that, with photometric calibration, our algorithm achieves better performance compared to VINS-Mono.
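To illustrate the photometric correction such a method builds on, here is a minimal sketch that undoes an assumed camera response function and lens vignetting so a feature's gray value stays stable across exposure changes. The gamma-like inverse response and radial falloff model are placeholder assumptions, not PC-VINS-Mono's calibrated curves.

```python
import numpy as np

def photometric_correct(img, exposure_s, inv_response, vignette):
    """Recover scene irradiance B (up to scale) from I = f(e * V(x) * B(x))."""
    energy = inv_response(img.astype(np.float64))   # invert the response f^-1(I)
    return energy / (vignette * exposure_s)         # remove vignetting and exposure

h, w = 480, 640
ys, xs = np.mgrid[0:h, 0:w]
r = np.hypot(xs - w / 2, ys - h / 2) / np.hypot(w / 2, h / 2)
vignette = 1.0 - 0.3 * r ** 2              # assumed radial attenuation model
inv_resp = lambda I: (I / 255.0) ** 2.2    # assumed gamma-like inverse response

img = np.random.randint(0, 256, (h, w))
B = photometric_correct(img, exposure_s=0.01, inv_response=inv_resp, vignette=vignette)
```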
This paper proposes a Visual-Inertial Odometry (VIO) algorithm that relies solely on a monocular camera and an Inertial Measurement Unit (IMU) and is capable of real-time self-position estimation for robots during movement. By integrating the optical flow method, the algorithm tracks both point and line features in images simultaneously, significantly reducing computational complexity and the matching time for line feature descriptors. Additionally, this paper advances the triangulation method for line features, using depth information from line segment endpoints to determine their Plücker coordinates in three-dimensional space. Tests on the EuRoC datasets show that the proposed algorithm outperforms PL-VIO in terms of processing speed per frame, with an approximate 5% to 10% improvement in both relative pose error (RPE) and absolute trajectory error (ATE). These results demonstrate that the proposed VIO algorithm is an efficient solution suitable for low-computing platforms requiring real-time localization and navigation.
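The endpoint-depth triangulation can be sketched as follows: back-project both endpoints of a detected segment with their depths, then form the Plücker coordinates (d, m) of the 3D line. The intrinsics and measurements below are made-up values for illustration.

```python
import numpy as np

K = np.array([[460., 0., 320.],
              [0., 460., 240.],
              [0., 0., 1.]])               # assumed pinhole intrinsics

def backproject(u, v, depth):
    """Pixel plus depth -> 3D point in the camera frame (pinhole model)."""
    return depth * np.linalg.inv(K) @ np.array([u, v, 1.0])

def plucker_from_endpoints(p1, p2):
    d = p2 - p1
    d = d / np.linalg.norm(d)              # unit direction of the line
    m = np.cross(p1, d)                    # moment vector; constant along the line
    return d, m

p1 = backproject(120.0, 200.0, depth=2.1)
p2 = backproject(380.0, 215.0, depth=2.4)
d, m = plucker_from_endpoints(p1, p2)
assert abs(np.dot(d, m)) < 1e-9            # Plücker constraint: d . m = 0
```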
Visual-Inertial Odometry (VIO) has been developed from Simultaneous Localization and Mapping (SLAM) as a low-cost and versatile sensor fusion approach and has attracted increasing attention in ground vehicle positioning. However, VIOs usually suffer degraded performance in challenging environments and degenerated motion scenarios. In this paper, we propose a ground vehicle-based VIO algorithm built on the Multi-State Constraint Kalman Filter (MSCKF) framework. Based on a unified motion manifold assumption, we derive the measurement model of manifold constraints, including velocity, rotation, and translation constraints. Then we present a robust filter-based algorithm dedicated to ground vehicles, whose key is the real-time manifold noise estimation and adaptive measurement update. Besides, GNSS position measurements are loosely coupled into our approach, where the transformation between the GNSS and VIO frames is optimized online. Finally, we theoretically analyze the system observability matrix and observability measures. Our algorithm is tested both in simulation and on public datasets, including the Brno Urban dataset and the KAIST Urban dataset. We compare the performance of our algorithm with classical VIO algorithms (MSCKF, VINS-Mono, R-VIO, ORB_SLAM3) and GVIO algorithms (GNSS-MSCKF, VINS-Fusion). The results demonstrate that our algorithm is more robust than the other compared algorithms, showing competitive position accuracy and computational efficiency.
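One of the manifold constraints can be illustrated as a pseudo-measurement: for a ground vehicle, the body-frame lateral and vertical velocities are near zero. The tiny Kalman update below is a hedged sketch under an assumed state layout and noise level, not the paper's MSCKF implementation.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    Kg = P @ H.T @ np.linalg.inv(S)
    x = x + Kg @ (z - H @ x)
    P = (np.eye(len(x)) - Kg @ H) @ P
    return x, P

# state: world-frame velocity [vx, vy, vz]; R_wb rotates body -> world
x = np.array([3.0, 0.4, -0.1])
P = np.eye(3) * 0.5
R_wb = np.eye(3)                    # assume body aligned with world here

H = R_wb.T[1:3, :]                  # selects body-frame lateral/vertical velocity
z = np.zeros(2)                     # pseudo-measurement: both are ~0 on the manifold
R_noise = np.eye(2) * 1e-3          # stand-in for the online-estimated manifold noise
x, P = kalman_update(x, P, z, H, R_noise)
```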
Visual-Inertial Odometry (VIO) fuses measurements from a camera and an Inertial Measurement Unit (IMU) to achieve performance better than using either sensor individually. Hybrid VIO is an extended Kalman filter-based solution that augments features with long tracking length into the state vector of the Multi-State Constraint Kalman Filter (MSCKF). In this paper, a novel hybrid VIO is proposed, which focuses on utilizing low-cost sensors while considering both computational efficiency and positioning precision. The proposed algorithm introduces several novel contributions. Firstly, by deducing an analytical error transition equation, one-dimensional inverse depth parametrization is utilized to parametrize the augmented feature state. This modification is shown to significantly improve the computational efficiency and numerical robustness, as a result achieving higher precision. Secondly, for better handling of static scenes, a novel closed-form Zero velocity UPdaTe (ZUPT) method is proposed. ZUPT is modeled as a measurement update for the filter rather than simply halting propagation, which has the advantage of correcting the overall state through correlations in the filter covariance matrix. Furthermore, online spatial and temporal calibration is also incorporated. Experiments are conducted on both a public dataset and real data. The results demonstrate the effectiveness of the proposed solution by showing that its performance is better than the baseline and state-of-the-art algorithms in terms of both efficiency and precision. Related software is open-sourced to benefit the community.
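A minimal sketch of one-dimensional inverse-depth parametrization: the bearing is fixed by the first observation of the feature, so only the scalar inverse depth rho enters the state, and the feature can be re-predicted in any other camera. The poses, bearing, and intrinsics below are illustrative assumptions.

```python
import numpy as np

def predict_pixel(rho, bearing_c1, R_21, t_21, fx, fy, cx, cy):
    """Project an inverse-depth feature from anchor frame c1 into camera c2."""
    p_c2 = R_21 @ (bearing_c1 / rho) + t_21     # 3D point expressed in c2
    u = fx * p_c2[0] / p_c2[2] + cx
    v = fy * p_c2[1] / p_c2[2] + cy
    return np.array([u, v])

bearing = np.array([0.1, -0.05, 1.0])
bearing /= np.linalg.norm(bearing)              # unit bearing from the first view
R_21, t_21 = np.eye(3), np.array([0.2, 0.0, 0.0])   # small assumed baseline
z_pred = predict_pixel(rho=0.5, bearing_c1=bearing, R_21=R_21,
                       t_21=t_21, fx=460., fy=460., cx=320., cy=240.)
```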
The concrete aging problem has gained more attention in recent years as more bridges and tunnels in the United States lack proper maintenance. Though the Federal Highway Administration requires these public concrete structures to be inspected regularly, on-site manual inspection by human operators is time-consuming and labor-intensive. Conventional approaches for concrete inspection, using RGB image-based thresholding methods, are not able to determine metric information or accurate location information for the assessed defects. To address this challenge, we propose a deep neural network (DNN) based concrete inspection system using a quadrotor flying robot (referred to as City Flyer) mounted with an RGB-D camera. The inspection system introduces several novel modules. Firstly, a visual-inertial fusion approach is introduced to perform camera and robot positioning and 3D metric structure reconstruction. The reconstructed map is used to retrieve the location and metric information of the defects. Secondly, we introduce a DNN model, namely AdaNet, to detect concrete spalling and cracking, with the capability of maintaining robustness under various distances between the camera and the concrete surface. In order to train the model, we craft a new dataset, i.e., the concrete structure spalling and cracking (CSSC) dataset, which is released publicly to the research community. Finally, we introduce a 3D semantic mapping method using the annotated framework to reconstruct the concrete structure for visualization. We performed comparative studies and demonstrated that our AdaNet can achieve 8.41% higher detection accuracy than ResNets and VGGs. Moreover, we conducted five field tests, of which three are manual hand-held tests and two are drone-based field tests. These results indicate that our system is capable of performing metric field inspection and can serve as an effective tool for civil engineers.
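The metric information recovered from RGB-D data follows from the pinhole model: a defect spanning n pixels at depth Z is roughly nZ/f meters across, which plain RGB thresholding cannot provide. The intrinsics and detection box below are made-up values, not the City Flyer's calibration.

```python
fx = 615.0                       # focal length in pixels (assumed)
depth_m = 1.8                    # median depth over the detected region (assumed)
box_w_px, box_h_px = 94, 40      # defect bounding box from the detector (assumed)

width_m = box_w_px * depth_m / fx
height_m = box_h_px * depth_m / fx
print(f"spalling region ~{width_m:.2f} m x {height_m:.2f} m")
```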
There are about 253 million people with visual impairment worldwide. Many of them use a white cane and/or a guide dog as the mobility tool for daily travel. Despite decades of effort, an electronic navigation aid that can replace the white cane is still a work in progress. In this paper, we propose an RGB-D camera-based visual positioning system (VPS) for real-time localization of a robotic navigation aid (RNA) in an architectural floor plan for assistive navigation. The core of the system is the combination of a new 6-DOF depth-enhanced visual-inertial odometry (DVIO) method and a particle filter localization (PFL) method. DVIO estimates the RNA's pose by using the data from an RGB-D camera and an inertial measurement unit (IMU). It extracts the floor plane from the camera's depth data and tightly couples the floor plane, the visual features (with and without depth data), and the IMU's inertial data in a graph optimization framework to estimate the device's 6-DOF pose. Due to the use of the floor plane and depth data from the RGB-D camera, DVIO has better pose estimation accuracy than the conventional VIO method. To reduce the accumulated pose error of DVIO for navigation in a large indoor space, we developed the PFL method to locate the RNA in the floor plan. PFL leverages geometric information of the architectural CAD drawing of an indoor space to further reduce the error of the DVIO-estimated pose. Based on the VPS, an assistive navigation system is developed for the RNA prototype to assist a visually impaired person in navigating a large indoor space. Experimental results demonstrate that: 1) the DVIO method achieves better pose estimation accuracy than the state-of-the-art VIO method and performs real-time pose estimation (18 Hz pose update rate) on a UP Board computer; 2) PFL reduces the DVIO-accrued pose error by 82.5% on average and allows for accurate wayfinding (endpoint position error ≤ 45 cm) in large indoor spaces.
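A hedged sketch of the PFL idea: propagate particles with the DVIO pose increment plus noise, weight them by consistency with the floor plan, and resample when the weights collapse. The toy occupancy grid and likelihood model stand in for the paper's CAD-based formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform([0, 0, -np.pi], [10, 10, np.pi], size=(N, 3))  # x, y, yaw
weights = np.ones(N) / N
occupancy = np.zeros((100, 100))          # 10 cm grid; 1 = wall (toy floor plan)
occupancy[:, 0] = occupancy[:, -1] = occupancy[0, :] = occupancy[-1, :] = 1

def step(particles, weights, d_pose, free_lik=1.0, wall_lik=0.05):
    # predict: apply the DVIO increment (dx, dy, dyaw) in each particle's frame
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * d_pose[0] - s * d_pose[1] + rng.normal(0, 0.02, N)
    particles[:, 1] += s * d_pose[0] + c * d_pose[1] + rng.normal(0, 0.02, N)
    particles[:, 2] += d_pose[2] + rng.normal(0, 0.01, N)
    # weight: particles standing inside a wall cell are very unlikely
    i = np.clip((particles[:, 1] / 0.1).astype(int), 0, 99)
    j = np.clip((particles[:, 0] / 0.1).astype(int), 0, 99)
    weights *= np.where(occupancy[i, j] > 0, wall_lik, free_lik)
    weights /= weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / (weights ** 2).sum() < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
    return particles, weights

particles, weights = step(particles, weights, d_pose=(0.1, 0.0, 0.01))
```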
An efficient observability analysis method is proposed to enable online detection of performance degradation of an optimization-based sliding window visual-inertial state estimation framework. The proposed methodology leverages numerical techniques in nonlinear observability analysis to enable online evaluation of the system observability and indication of the state estimation performance. Specifically, an empirical observability Gramian based approach is introduced to efficiently measure the observability condition of the windowed nonlinear system, and a scalar index is proposed to quantify the average system observability. The proposed approach is specialized to a challenging optimization-based sliding window monocular visual-inertial state estimation formulation and evaluated through simulation and experiments to assess the efficacy of the methodology. The analysis results show that the proposed approach can correctly indicate degradation of the state estimation accuracy with real-time performance.
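The empirical observability Gramian can be sketched as follows: perturb each initial state by ±ε, simulate the window's outputs, and accumulate the finite-difference sensitivities. The toy constant-velocity system and the smallest-eigenvalue index are illustrative assumptions; the paper's system and scalar index may differ.

```python
import numpy as np

def empirical_gramian(sim_outputs, x0, eps=1e-4):
    """sim_outputs(x0) -> stacked output vector over the window."""
    n = len(x0)
    cols = []
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        cols.append((sim_outputs(x0 + d) - sim_outputs(x0 - d)) / (2 * eps))
    Y = np.column_stack(cols)          # column i: d(outputs) / d(x0_i)
    return Y.T @ Y                     # empirical Gramian W_o = Y^T Y

# toy window: constant-velocity state [p, v]; only position is measured
def sim_outputs(x0, steps=10, dt=0.1):
    p, v = x0
    return np.array([p + v * dt * k for k in range(steps)])

W = empirical_gramian(sim_outputs, np.array([0.0, 1.0]))
index = np.linalg.eigvalsh(W).min()    # small value -> weakly observable direction
print(f"observability index: {index:.4f}")
```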
Unmanned weapons have great potential to be widely used in future wars. Gaze-based aiming technology can be applied to control pan-tilt weapon systems remotely with high precision and efficiency. Gaze direction is related to head motion and results from a combination of head and eye movements. In this paper, a head motion detection method is proposed, which is based on the fusion of inertial and vision information. Inertial sensors can measure rotation at high frequency with good performance, while vision sensors are able to eliminate drift. By combining the characteristics of both sensors, the proposed approach achieves high-frequency, real-time, and drift-free head motion detection. The experiments show that our method can smooth the outputs, constrain the drift of the inertial measurements, and achieve high detection accuracy.
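A complementary filter is one simple way to realize the high-frequency, drift-free fusion described above: integrate the gyro for smooth yaw and let the slower, drift-free vision estimate pull the result back. The rates, gain, and gyro bias below are illustrative assumptions, not the paper's fusion method.

```python
def fuse_yaw(prev_yaw, gyro_rate, dt, vision_yaw=None, alpha=0.98):
    yaw = prev_yaw + gyro_rate * dt                      # high-rate inertial prediction
    if vision_yaw is not None:
        yaw = alpha * yaw + (1.0 - alpha) * vision_yaw   # pull toward drift-free vision
    return yaw

yaw, dt, last_vision = 0.0, 0.005, 0.0       # 200 Hz gyro loop (assumed)
for k in range(200):
    if k % 20 == 0:
        last_vision = 0.0                    # 10 Hz drift-free vision fix (assumed)
    # the head is still, but the gyro reports a 0.2 rad/s bias
    yaw = fuse_yaw(yaw, gyro_rate=0.2, dt=dt, vision_yaw=last_vision)
print(f"fused yaw after 1 s: {yaw:.3f} rad (pure gyro integration would give 0.200)")
```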
An active perception methodology is proposed to locally predict the observability condition over a reasonable horizon and suggest an observability-constrained motion direction for the next step, ensuring accurate and consistent state estimation performance of vision-based navigation systems. The methodology leverages an efficient EOG-based observability analysis and a motion primitive-based path sampling technique to realize local observability prediction with real-time performance. The observability conditions of potential motion trajectories are evaluated, and an informed motion direction is selected to ensure the observability efficiency of the state estimation system. The proposed approach is specialized to a representative optimization-based monocular vision-based state estimation formulation and demonstrated through simulation and experiments to evaluate its ability to predict estimation degradation and the efficacy of its motion direction suggestions.
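The selection step can be sketched as scoring sampled motion primitives by their predicted observability and commanding the best direction. The scoring function here is a stand-in assumption for the EOG-based index described in the paper.

```python
import math

def observability_score(heading_rad):
    # stand-in score: pretend sideways motion (more parallax) is more
    # informative than motion along the current optical axis
    return abs(math.sin(heading_rad))

candidate_headings = [k * math.pi / 8 for k in range(-4, 5)]   # motion primitives
best = max(candidate_headings, key=observability_score)
print(f"suggested motion direction: {math.degrees(best):.0f} deg")
```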
Because of its high precision, low cost, and easy operation, Precise Point Positioning (PPP) has become a potential and attractive positioning technique that can be applied to self-driving cars and drones. However, the reliability and availability of PPP are significantly degraded in extremely difficult conditions where Global Navigation Satellite System (GNSS) signals are blocked frequently. The Inertial Navigation System (INS) has been integrated with GNSS to ameliorate such situations in recent decades. Recently, Visual-Inertial Navigation Systems (VINS), with favorable complementary characteristics, have been demonstrated to realize more stable and accurate local position estimation than the INS alone. Nevertheless, such a system still must rely on global positions to eliminate the accumulated errors. In this contribution, we present a semi-tight coupling framework of multi-GNSS PPP and Stereo VINS (S-VINS), which achieves bidirectional location transfer and sharing between the two separate navigation systems. In our approach, the local positions produced by S-VINS are integrated with multi-GNSS PPP through a graph-optimization based method. Furthermore, the accurate forecast positions from S-VINS are fed back to assist PPP in GNSS-challenged environments. The statistical analysis of a GNSS outage simulation test shows that the S-VINS mode can effectively suppress the degradation of positioning accuracy compared with the INS-only mode. We also carried out a vehicle-borne experiment collecting multi-sensor data in a GNSS-challenged environment. For the complex driving environment, the PPP positioning capability is significantly improved with the aiding of S-VINS. The 3D positioning accuracy is improved by 49.0% for the Global Positioning System (GPS), 40.3% for GPS+GLONASS (Global Navigation Satellite System), 45.6% for GPS+BDS (BeiDou Navigation Satellite System), and 51.2% for GPS+GLONASS+BDS. On this basis, the solution with the semi-tight coupling scheme of multi-GNSS PPP/S-VINS achieves improvements of 41.8-60.6% in 3D positioning accuracy compared with the multi-GNSS PPP/INS solutions.
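The frame transfer at the heart of the semi-tight coupling can be sketched as fitting a transform between the local S-VINS frame and the global PPP frame from epochs where both are available, then carrying S-VINS positions into the global frame during a GNSS outage. A 2D yaw-plus-translation least-squares model over toy data is an illustrative simplification of the paper's graph-optimization method.

```python
import numpy as np

def fit_yaw_translation(local_xy, global_xy):
    """Least-squares 2D rotation + translation mapping local -> global."""
    mu_l, mu_g = local_xy.mean(axis=0), global_xy.mean(axis=0)
    lc, gc = local_xy - mu_l, global_xy - mu_g
    s = np.sum(lc[:, 0] * gc[:, 1] - lc[:, 1] * gc[:, 0])
    c = np.sum(lc[:, 0] * gc[:, 0] + lc[:, 1] * gc[:, 1])
    yaw = np.arctan2(s, c)
    R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
    t = mu_g - R @ mu_l
    return R, t

# overlap epochs where both PPP and S-VINS produced positions (toy data)
local_xy = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1], [3.0, 0.1]])
true_R = np.array([[0.0, -1.0], [1.0, 0.0]])
global_xy = local_xy @ true_R.T + np.array([100.0, 200.0])

R, t = fit_yaw_translation(local_xy, global_xy)
outage_local = np.array([4.0, 0.2])          # S-VINS keeps running in the outage
predicted_global = R @ outage_local + t      # forecast position fed back to PPP
```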
Visual simultaneous localization and mapping (VSLAM) is an essential technology for realizing the autonomous movement of vehicles. Visual-inertial odometry (VIO) is often used as the front-end of VSLAM because of its rich information, light weight, and robustness. This article proposes FPL-VIO, an optimization-based fast visual-inertial odometer with points and lines. Traditional VIO mostly uses points as landmarks, while most of the geometric structure information is ignored. Therefore, accuracy is jeopardized under motion blur and in texture-less areas. Some researchers improve accuracy by adding lines as landmarks in the system. However, almost all of them use the line segment detector (LSD) and line band descriptor (LBD) in line processing, which is very time-consuming. This article first proposes a fast line feature description and matching method based on the midpoint and compares the three line detection algorithms of LSD, the fast line detector (FLD), and edge drawing lines (EDLines). Then, the measurement model of the line is introduced in detail. Finally, FPL-VIO is proposed by adding the above method to the monocular visual-inertial state estimator (VINS-Mono), yielding an optimization-based fast visual-inertial odometer with points and lines described by their midpoints. Compared with VIO using points and lines (PL-VIO), the line processing efficiency of FPL-VIO is increased by 3-4 times while maintaining the same accuracy.
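A minimal sketch of midpoint-based line description and matching: describe each segment by its midpoint, angle, and length, then match across frames by nearest midpoint under angle and length gates. The gating thresholds are illustrative assumptions, not FPL-VIO's tuned values.

```python
import math

def descriptor(seg):
    (x1, y1), (x2, y2) = seg
    mid = ((x1 + x2) / 2, (y1 + y2) / 2)
    angle = math.atan2(y2 - y1, x2 - x1)
    length = math.hypot(x2 - x1, y2 - y1)
    return mid, angle, length

def match(segs_a, segs_b, max_mid_px=20.0, max_angle=0.2, max_len_ratio=0.3):
    matches = []
    for i, sa in enumerate(segs_a):
        ma, aa, la = descriptor(sa)
        best, best_d = None, max_mid_px
        for j, sb in enumerate(segs_b):
            mb, ab, lb = descriptor(sb)
            d = math.hypot(ma[0] - mb[0], ma[1] - mb[1])
            # angular difference modulo pi (line direction is ambiguous)
            ang = abs((aa - ab + math.pi / 2) % math.pi - math.pi / 2)
            if d < best_d and ang < max_angle and abs(la - lb) / max(la, lb) < max_len_ratio:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
    return matches

prev = [((10, 10), (60, 12)), ((80, 40), (80, 90))]
curr = [((13, 11), (63, 13)), ((82, 42), (82, 92))]
print(match(prev, curr))    # expect [(0, 0), (1, 1)]
```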