Journal Articles
12 articles found
Survey and evaluation of monocular visual-inertial SLAM algorithms for augmented reality (Cited by: 4)
1
Authors: Jinyu LI, Bangbang YANG, Danpeng CHEN, Nan WANG, Guofeng ZHANG, Hujun BAO. Virtual Reality & Intelligent Hardware, 2019, No. 4, pp. 386-410.
Although VSLAM/VISLAM has achieved great success, it is still difficult to quantitatively evaluate the localization results of different kinds of SLAM systems from the perspective of augmented reality due to the lack of an appropriate benchmark. In practical AR applications, a variety of challenging situations (e.g., fast motion, strong rotation, serious motion blur, dynamic interference) may easily be encountered, since a home user may not move the AR device carefully and the real environment may be quite complex. In addition, for a good AR experience, the frequency of tracking loss should be minimized, and recovery from failure should be fast and accurate. Existing SLAM datasets/benchmarks generally only evaluate pose accuracy, and their camera motions are relatively simple and do not fit the common cases in mobile AR applications well. With the above motivation, we build a new visual-inertial dataset as well as a series of evaluation criteria for AR. We also review the existing monocular VSLAM/VISLAM approaches with detailed analyses and comparisons. In particular, we select 8 representative monocular VSLAM/VISLAM approaches/systems and quantitatively evaluate them on our benchmark. Our dataset, sample code, and corresponding evaluation tools are available at the benchmark website http://www.zjucvg.net/eval-vislam/.
Keywords: visual-inertial SLAM, odometry, tracking, localization, mapping, augmented reality
PC-VINS-Mono: A Robust Mono Visual-Inertial Odometry with Photometric Calibration
2
Authors: Yao Xiao, Xiaogang Ruan, Xiaoqing Zhu. Journal of Autonomous Intelligence, 2018, No. 2, pp. 29-35.
Feature detection and tracking, which rely heavily on the gray-value information of images, are essential procedures in Visual-Inertial Odometry (VIO); the tracking results significantly affect the accuracy of the estimates and the robustness of the system. In high-contrast lighting conditions, the exposure time of an auto-exposure camera changes frequently, so the gray value of the same feature varies from frame to frame, which poses a large challenge to feature detection and tracking. This problem is further aggravated by the nonlinear camera response function and lens attenuation. However, very few VIO methods take full advantage of photometric camera calibration or discuss its influence on VIO. In this paper, we propose a robust monocular visual-inertial odometry, PC-VINS-Mono, which can be understood as an extension of the open-source VIO pipeline VINS-Mono with the capability of photometric calibration. We evaluate the proposed algorithm on a public dataset. Experimental results show that, with photometric calibration, our algorithm achieves better performance than VINS-Mono.
Keywords: photometric calibration, visual-inertial odometry, simultaneous localization and mapping, robot navigation
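The core idea of photometric calibration described in this abstract, correcting intensities for exposure time, camera response, and vignetting so the same scene point keeps a stable value across frames, can be sketched as follows. This is a minimal illustrative model (a simple gamma response and optional vignette map), not the paper's exact formulation; all function names and parameters here are assumptions.

```python
import numpy as np

def photometric_correction(image, exposure_t, gamma=2.2, vignette=None):
    """Map raw intensities to (scaled) scene irradiance so that the same
    scene point keeps a stable value across frames with different exposure.

    Illustrative model (not PC-VINS-Mono's exact formulation):
        I = G(t * V(x) * B),  with G(u) = u ** (1/gamma)
    so  B = G^{-1}(I) / (t * V(x)).
    """
    img = image.astype(np.float64) / 255.0
    inv_response = img ** gamma                # invert the response function G
    if vignette is None:
        vignette = np.ones_like(inv_response)  # assume no lens attenuation
    return inv_response / (exposure_t * vignette)

# Two frames of the same scene point, captured with different exposure
# times, should map to (nearly) the same corrected value.
b = 0.4                                   # true irradiance (arbitrary units)
i1 = ((0.01 * b) ** (1 / 2.2)) * 255      # frame 1, exposure t = 0.01 s
i2 = ((0.02 * b) ** (1 / 2.2)) * 255      # frame 2, exposure t = 0.02 s
c1 = photometric_correction(np.array([i1]), 0.01)
c2 = photometric_correction(np.array([i2]), 0.02)
assert np.allclose(c1, c2)
```

After such a correction, the feature tracker operates on values that no longer jump when the auto-exposure changes, which is the failure mode the abstract identifies.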
KLT-VIO:Real-time Monocular Visual-Inertial Odometry
3
Authors: Yuhao Jin, Hang Li, Shoulin Yin. IJLAI Transactions on Science and Engineering, 2024, No. 1, pp. 8-16.
This paper proposes a Visual-Inertial Odometry (VIO) algorithm that relies solely on a monocular camera and an Inertial Measurement Unit (IMU), capable of real-time self-position estimation for robots during movement. By integrating the optical flow method, the algorithm tracks both point and line features in images simultaneously, significantly reducing computational complexity and the matching time for line feature descriptors. Additionally, this paper advances the triangulation method for line features, using depth information from line segment endpoints to determine their Plücker coordinates in three-dimensional space. Tests on the EuRoC datasets show that the proposed algorithm outperforms PL-VIO in terms of processing speed per frame, with an approximate 5% to 10% improvement in both relative pose error (RPE) and absolute trajectory error (ATE). These results demonstrate that the proposed VIO algorithm is an efficient solution suitable for low-computing platforms requiring real-time localization and navigation.
Keywords: visual-inertial odometry, optical flow, point features, line features, bundle adjustment
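The Plücker representation mentioned in the abstract can be obtained directly once the segment endpoints have known depths. The sketch below back-projects two endpoints through a pinhole model and forms the line's Plücker coordinates; the intrinsics and pixel values are illustrative assumptions, and the endpoint depths would come from the paper's triangulation step, not from this code.

```python
import numpy as np

def plucker_from_endpoints(p1, p2):
    """Plücker coordinates (n, d) of the 3D line through points p1, p2:
    d is the line direction, n = p1 x p2 is the moment about the origin."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    n = np.cross(p1, p2)
    return n, d

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with known depth
    (illustrative intrinsics, not from the paper)."""
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

# Endpoints of a detected line segment, back-projected with their depths.
e1 = backproject(320, 240, 2.0, fx=500, fy=500, cx=320, cy=240)
e2 = backproject(400, 240, 2.5, fx=500, fy=500, cx=320, cy=240)
n, d = plucker_from_endpoints(e1, e2)
# A defining property of Plücker coordinates: the moment is orthogonal
# to the direction, so n . d == 0.
assert abs(np.dot(n, d)) < 1e-9
```

The orthogonality constraint n · d = 0 is what makes the 6-vector (n, d) a valid (4-DOF) line parameterization inside the optimization.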
M2C-GVIO:motion manifold constraint aided GNSS-visual-inertial odometry for ground vehicles
4
Authors: Tong Hua, Ling Pei, Tao Li, Jie Yin, Guoqing Liu, Wenxian Yu. Satellite Navigation, 2023, No. 1, pp. 77-91.
Visual-Inertial Odometry (VIO) has been developed from Simultaneous Localization and Mapping (SLAM) as a low-cost and versatile sensor fusion approach and has attracted increasing attention in ground vehicle positioning. However, VIOs usually exhibit degraded performance in challenging environments and degenerate motion scenarios. In this paper, we propose a ground vehicle-based VIO algorithm built on the Multi-State Constraint Kalman Filter (MSCKF) framework. Based on a unified motion manifold assumption, we derive the measurement model of the manifold constraints, including velocity, rotation, and translation constraints. We then present a robust filter-based algorithm dedicated to ground vehicles, whose key is real-time manifold noise estimation and adaptive measurement update. Besides, GNSS position measurements are loosely coupled into our approach, where the transformation between the GNSS and VIO frames is optimized online. Finally, we theoretically analyze the system observability matrix and observability measures. Our algorithm is tested in both simulation and on public datasets, including the Brno Urban dataset and the KAIST Urban dataset. We compare the performance of our algorithm with classical VIO algorithms (MSCKF, VINS-Mono, R-VIO, ORB_SLAM3) and GVIO algorithms (GNSS-MSCKF, VINS-Fusion). The results demonstrate that our algorithm is more robust than the other compared algorithms, showing competitive position accuracy and computational efficiency.
Keywords: sensor fusion, visual-inertial odometry, motion manifold constraint
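The velocity constraint from the abstract can be illustrated as a Kalman pseudo-measurement: on the motion manifold of a ground vehicle, the lateral and vertical body-frame velocities are approximately zero. The sketch below applies this as an EKF update on a toy 6-state [position, velocity] vector; the state layout, noise value, and function names are assumptions, not the paper's full MSCKF state or its online noise estimation.

```python
import numpy as np

def nhc_update(x, P, R_wb, sigma=0.05):
    """EKF pseudo-measurement update for a ground-vehicle motion constraint:
    in the body frame the lateral (y) and vertical (z) velocities are ~0.
    State x = [p(3), v(3)] in the world frame; R_wb rotates body -> world.
    Illustrative state layout, not M2C-GVIO's full filter state."""
    H_v = R_wb.T[1:3, :]             # rows selecting body-frame v_y, v_z
    H = np.hstack([np.zeros((2, 3)), H_v])
    z = np.zeros(2)                  # pseudo-measurement: zero lateral/vertical v
    R = (sigma ** 2) * np.eye(2)     # constraint noise (estimated online in the paper)
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(6) - K @ H) @ P

# Vehicle aligned with the world axes, moving forward, but the estimate
# carries spurious lateral and vertical velocity.
x = np.array([0.0, 0.0, 0.0, 5.0, 0.8, 0.3])
P = np.eye(6)
x_new, P_new = nhc_update(x, P, np.eye(3))
# The constraint pulls the lateral/vertical velocities toward zero
# while leaving the forward velocity untouched.
assert abs(x_new[4]) < abs(x[4]) and abs(x_new[5]) < abs(x[5])
```

Treating the constraint as a noisy measurement (rather than a hard equality) is what lets the filter relax it when the manifold assumption is temporarily violated, e.g., on bumps.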
Lightweight hybrid visual-inertial odometry with closed-form zero velocity update (Cited by: 4)
5
Authors: QIU Xiaochen, ZHANG Hai, FU Wenxing. Chinese Journal of Aeronautics, 2020, No. 12, pp. 3344-3359.
Visual-Inertial Odometry (VIO) fuses measurements from a camera and an Inertial Measurement Unit (IMU) to achieve performance better than using either sensor individually. Hybrid VIO is an extended Kalman filter-based solution which augments features with long tracking length into the state vector of the Multi-State Constraint Kalman Filter (MSCKF). In this paper, a novel hybrid VIO is proposed, which focuses on utilizing low-cost sensors while considering both computational efficiency and positioning precision. The proposed algorithm introduces several novel contributions. Firstly, by deducing an analytical error transition equation, one-dimensional inverse depth parametrization is utilized to parametrize the augmented feature state. This modification is shown to significantly improve computational efficiency and numerical robustness, as a result achieving higher precision. Secondly, for better handling of static scenes, a novel closed-form Zero velocity UPdaTe (ZUPT) method is proposed. ZUPT is modeled as a measurement update for the filter rather than crudely forbidding propagation, which has the advantage of correcting the overall state through correlations in the filter covariance matrix. Furthermore, online spatial and temporal calibration is also incorporated. Experiments are conducted on both a public dataset and real data. The results demonstrate the effectiveness of the proposed solution by showing that its performance is better than the baseline and state-of-the-art algorithms in terms of both efficiency and precision. The related software is open-sourced to benefit the community.
Keywords: inverse depth parametrization, Kalman filter, online calibration, visual-inertial odometry, zero velocity update
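Before a ZUPT can be applied as a measurement update, the filter needs to decide that the platform is actually stationary. A common detector (an illustrative companion to the abstract, not the paper's own criterion; thresholds are assumptions) checks that the accelerometer norm variance and the gyro magnitude are both small over a short window:

```python
import numpy as np

def is_static(accel_window, gyro_window, acc_var_thresh=0.02, gyro_thresh=0.05):
    """Simple zero-velocity detector (illustrative thresholds): declare the
    platform static when the accelerometer-norm variance and mean gyro
    magnitude are both small. The paper then applies ZUPT as a filter
    measurement update rather than halting propagation, so covariance
    correlations correct the full state."""
    acc_var = np.var(np.linalg.norm(accel_window, axis=1))
    gyro_mag = np.mean(np.linalg.norm(gyro_window, axis=1))
    return acc_var < acc_var_thresh and gyro_mag < gyro_thresh

rng = np.random.default_rng(0)
g = np.array([0.0, 0.0, 9.81])
# Stationary IMU: gravity plus small sensor noise.
static_acc = g + 0.01 * rng.standard_normal((50, 3))
static_gyr = 0.005 * rng.standard_normal((50, 3))
# Moving IMU: oscillating specific force and clearly nonzero rotation rate.
moving_acc = g + np.array([1.5, 0, 0]) * np.sin(np.linspace(0, 6, 50))[:, None]
moving_gyr = 0.3 * rng.standard_normal((50, 3))
assert is_static(static_acc, static_gyr)
assert not is_static(moving_acc, moving_gyr)
```

Once the detector fires, the zero-velocity pseudo-measurement z = 0 on the velocity states plays the same filter-update role as any other measurement, which is the design point the abstract emphasizes.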
An RGB-D Camera Based Visual Positioning System for Assistive Navigation by a Robotic Navigation Aid (Cited by: 6)
6
Authors: He Zhang, Lingqiu Jin, Cang Ye. IEEE/CAA Journal of Automatica Sinica, 2021, No. 8, pp. 1389-1400.
There are about 253 million people with visual impairment worldwide. Many of them use a white cane and/or a guide dog as the mobility tool for daily travel. Despite decades of effort, an electronic navigation aid that can replace the white cane is still a work in progress. In this paper, we propose an RGB-D camera based visual positioning system (VPS) for real-time localization of a robotic navigation aid (RNA) in an architectural floor plan for assistive navigation. The core of the system is the combination of a new 6-DOF depth-enhanced visual-inertial odometry (DVIO) method and a particle filter localization (PFL) method. DVIO estimates the RNA's pose by using data from an RGB-D camera and an inertial measurement unit (IMU). It extracts the floor plane from the camera's depth data and tightly couples the floor plane, the visual features (with and without depth data), and the IMU's inertial data in a graph optimization framework to estimate the device's 6-DOF pose. Due to the use of the floor plane and depth data from the RGB-D camera, DVIO has better pose estimation accuracy than the conventional VIO method. To reduce the accumulated pose error of DVIO for navigation in a large indoor space, we developed the PFL method to locate the RNA in the floor plan. PFL leverages the geometric information of the architectural CAD drawing of an indoor space to further reduce the error of the DVIO-estimated pose. Based on the VPS, an assistive navigation system is developed for the RNA prototype to assist a visually impaired person in navigating a large indoor space. Experimental results demonstrate that: 1) the DVIO method achieves better pose estimation accuracy than the state-of-the-art VIO method and performs real-time pose estimation (18 Hz pose update rate) on an UP Board computer; 2) PFL reduces the DVIO-accrued pose error by 82.5% on average and allows for accurate wayfinding (endpoint position error ≤ 45 cm) in large indoor spaces.
Keywords: assistive navigation, pose estimation, robotic navigation aid (RNA), simultaneous localization and mapping, visual-inertial odometry, visual positioning system (VPS)
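The division of labor the abstract describes, odometry drifts while the floor plan anchors the pose, is the standard particle filter localization pattern. The toy sketch below shows one step in 1D: particles are propagated with the odometry increment and reweighted against a range measurement to a wall at a known floor-plan position. Everything here (1D state, Gaussian likelihood, thresholded resampling, all noise values) is an illustrative assumption, not the paper's CAD-drawing likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, weights, odom_delta, z_wall, wall_x,
            sigma_odom=0.05, sigma_z=0.1):
    """One particle-filter localization step (generic sketch):
    propagate 1D particle positions with the odometry increment, then
    weight by a measured distance to a known wall at wall_x."""
    particles = particles + odom_delta + sigma_odom * rng.standard_normal(len(particles))
    pred = wall_x - particles                       # predicted range to the wall
    w = np.exp(-0.5 * ((z_wall - pred) / sigma_z) ** 2)
    weights = weights * w
    weights /= weights.sum()
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Robot truly at x = 2.0; wall at x = 10.0, so the measured range is 8.0 m.
particles = rng.uniform(0.0, 5.0, 500)
weights = np.full(500, 1.0 / 500)
for _ in range(5):
    particles, weights = pf_step(particles, weights, 0.0, 8.0, 10.0)
estimate = np.sum(particles * weights)
assert abs(estimate - 2.0) < 0.3
```

The same loop generalizes to the 2D/3D case by swapping in the floor-plan geometry as the measurement model, which is where the CAD drawing enters in the paper.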
Concrete Defects Inspection and 3D Mapping Using City Flyer Quadrotor Robot (Cited by: 4)
7
Authors: Liang Yang, Bing Li, Wei Li, Howard Brand, Biao Jiang, Jizhong Xiao. IEEE/CAA Journal of Automatica Sinica, 2020, No. 4, pp. 991-1002.
The concrete aging problem has gained more attention in recent years as more bridges and tunnels in the United States lack proper maintenance. Though the Federal Highway Administration requires these public concrete structures to be inspected regularly, on-site manual inspection by human operators is time-consuming and labor-intensive. Conventional approaches for concrete inspection, using RGB image-based thresholding methods, are not able to determine metric information as well as accurate location information for the assessed defects. To address this challenge, we propose a deep neural network (DNN) based concrete inspection system using a quadrotor flying robot (referred to as City Flyer) mounted with an RGB-D camera. The inspection system introduces several novel modules. Firstly, a visual-inertial fusion approach is introduced to perform camera and robot positioning and structural 3D metric reconstruction. The reconstructed map is used to retrieve the location and metric information of the defects. Secondly, we introduce a DNN model, namely AdaNet, to detect concrete spalling and cracking, with the capability of maintaining robustness under various distances between the camera and the concrete surface. In order to train the model, we craft a new dataset, i.e., the concrete structure spalling and cracking (CSSC) dataset, which is released publicly to the research community. Finally, we introduce a 3D semantic mapping method using the annotated framework to reconstruct the concrete structure for visualization. We performed comparative studies and demonstrated that our AdaNet can achieve 8.41% higher detection accuracy than ResNets and VGGs. Moreover, we conducted five field tests, of which three are manual hand-held tests and two are drone-based field tests. These results indicate that our system is capable of performing metric field inspection and can serve as an effective tool for civil engineers.
Keywords: 3D reconstruction, concrete inspection, deep neural network, quadrotor flying robot, visual-inertial fusion
Online Detection of State Estimator Performance Degradation via Efficient Numerical Observability Analysis (Cited by: 1)
8
Authors: Zheng Rong, Shun'an Zhong, Nathan Michael. Journal of Beijing Institute of Technology, 2017, No. 2, pp. 259-266.
An efficient observability analysis method is proposed to enable online detection of performance degradation of an optimization-based sliding-window visual-inertial state estimation framework. The proposed methodology leverages numerical techniques in nonlinear observability analysis to enable online evaluation of the system observability and indication of the state estimation performance. Specifically, an empirical observability Gramian based approach is introduced to efficiently measure the observability condition of the windowed nonlinear system, and a scalar index is proposed to quantify the average system observability. The proposed approach is specialized to a challenging optimization-based sliding-window monocular visual-inertial state estimation formulation and evaluated through simulation and experiments to assess the efficacy of the methodology. The analysis results show that the proposed approach can correctly indicate degradation of the state estimation accuracy with real-time performance.
Keywords: observability analysis, monocular visual-inertial state estimation, sliding window, non-linear optimization
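The empirical observability Gramian mentioned in the abstract is built by perturbing each state component by ±ε, simulating the output trajectories, and accumulating outer products of the output differences. The sketch below uses the standard construction on a toy constant-velocity system where only position is measured; it is not the paper's specialization to the sliding-window VIO problem, and the scalar index shown is just one common choice.

```python
import numpy as np

def empirical_gramian(f, h, x0, steps=20, dt=0.05, eps=1e-3):
    """Empirical observability Gramian (standard construction):
    W_o = (1 / (4 eps^2)) * sum_k dY_k dY_k^T, where dY_k stacks the
    output difference between +eps and -eps perturbations of x0."""
    n = len(x0)
    m = len(h(x0))
    W = np.zeros((n, n))
    for k in range(steps):
        dY = np.zeros((m, n))
        for i in range(n):
            xp, xm = x0.copy(), x0.copy()
            xp[i] += eps
            xm[i] -= eps
            for _ in range(k):                 # simulate both perturbed trajectories
                xp = xp + dt * f(xp)
                xm = xm + dt * f(xm)
            dY[:, i] = h(xp) - h(xm)
        W += dY.T @ dY
    return W / (4 * eps ** 2)

# Toy system: constant-velocity motion, but only the position is measured.
f = lambda x: np.array([x[1], 0.0])            # state [position, velocity]
h = lambda x: np.array([x[0]])                 # output: position only
W = empirical_gramian(f, h, np.array([0.0, 1.0]))
evals = np.linalg.eigvalsh(W)
index = evals.min() / evals.max()              # one possible scalar observability index
assert evals.min() > 0          # velocity becomes observable over the window
assert 0 < index < 1
```

A near-zero smallest eigenvalue (or index) flags a weakly observable direction, which is exactly the degradation signal the paper monitors online.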
Head Motion Detection in Gaze Based Aiming
9
Authors: Minghe Cao, Jianzhong Wang. Journal of Beijing Institute of Technology, 2020, No. 1, pp. 9-15.
Unmanned weapons have great potential to be widely used in future wars. Gaze-based aiming technology can be applied to control pan-tilt weapon systems remotely with high precision and efficiency. Gaze direction is related to head motion, which is a combination of head and eye movements. In this paper, a head motion detection method is proposed, based on the fusion of inertial and vision information. Inertial sensors can measure rotation at high frequency with good performance, while vision sensors are able to eliminate drift. By combining the characteristics of both sensors, the proposed approach achieves high-frequency, real-time, and drift-free head motion detection. The experiments show that our method can smooth the outputs, constrain the drift of inertial measurements, and achieve high detection accuracy.
Keywords: gaze aiming, head motion detection, visual-inertial information fusion
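The complementary roles the abstract describes (high-frequency but drifting inertial data, low-frequency but drift-free vision) are often combined with a complementary filter. The one-axis sketch below is a common fusion scheme in that spirit, not necessarily the paper's exact method; the gain alpha and the bias value are illustrative assumptions.

```python
import numpy as np

def complementary_filter(theta, gyro_rate, vision_theta, dt, alpha=0.98):
    """One step of a complementary filter: the gyro integral supplies
    high-frequency motion, the drift-free vision angle corrects the
    low-frequency component."""
    return alpha * (theta + gyro_rate * dt) + (1 - alpha) * vision_theta

# A gyro with constant bias would drift without bound if integrated
# alone; the vision measurement bounds the fused error.
dt, bias = 0.01, 0.5                # 0.5 rad/s gyro bias, true rate = 0
theta_fused, theta_gyro_only = 0.0, 0.0
for _ in range(2000):               # 20 s of data
    theta_fused = complementary_filter(theta_fused, bias, 0.0, dt)
    theta_gyro_only += bias * dt
assert theta_gyro_only > 9.9        # pure integration drifts ~10 rad
assert abs(theta_fused) < 0.3       # fused estimate stays bounded
```

The fused estimate settles at alpha * bias * dt / (1 - alpha) ≈ 0.245 rad here, i.e., the bias-induced error is bounded instead of growing linearly with time.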
Online Observability-Constrained Motion Suggestion via Efficient Motion Primitive-Based Observability Analysis
10
Authors: Zheng Rong, Shun'an Zhong, Nathan Michael. Journal of Beijing Institute of Technology, 2018, No. 1, pp. 92-102.
An active perception methodology is proposed to locally predict the observability condition over a reasonable horizon and suggest an observability-constrained motion direction for the next step, to ensure accurate and consistent state estimation performance of vision-based navigation systems. The methodology leverages an efficient empirical observability Gramian (EOG) based analysis and a motion primitive-based path sampling technique to realize local observability prediction with real-time performance. The observability conditions of potential motion trajectories are evaluated, and an informed motion direction is selected to ensure observability efficiency for the state estimation system. The proposed approach is specialized to a representative optimization-based monocular vision-based state estimation formulation and demonstrated through simulation and experiments to evaluate its ability to predict estimation degradation and the efficacy of its motion direction suggestions.
Keywords: observability analysis, observability prediction, motion primitive, motion suggestion, monocular visual-inertial state estimation, active perception
Semi-tightly coupled integration of multi-GNSS PPP and S-VINS for precise positioning in GNSS-challenged environments (Cited by: 10)
11
Authors: Xingxing Li, Xuanbin Wang, Jianchi Liao, Xin Li, Shengyu Li, Hongbo Lyu. Satellite Navigation, 2021, No. 1, pp. 1-14.
Because of its high precision, low cost, and easy operation, Precise Point Positioning (PPP) has become a potential and attractive positioning technique that can be applied to self-driving cars and drones. However, the reliability and availability of PPP are significantly degraded in extremely difficult conditions where Global Navigation Satellite System (GNSS) signals are blocked frequently. The Inertial Navigation System (INS) has been integrated with GNSS to ameliorate such situations over the last decades. Recently, Visual-Inertial Navigation Systems (VINS), with favorable complementary characteristics, have been demonstrated to realize more stable and accurate local position estimation than the INS alone. Nevertheless, the system still must rely on global positions to eliminate the accumulated errors. In this contribution, we present a semi-tight coupling framework of multi-GNSS PPP and Stereo VINS (S-VINS), which achieves bidirectional location transfer and sharing between the two separate navigation systems. In our approach, the local positions produced by S-VINS are integrated with multi-GNSS PPP through a graph-optimization based method. Furthermore, the accurate forecast positions from S-VINS are fed back to assist PPP in GNSS-challenged environments. The statistical analysis of a GNSS outage simulation test shows that the S-VINS mode can effectively suppress the degradation of positioning accuracy compared with the INS-only mode. We also carried out a vehicle-borne experiment collecting multi-sensor data in a GNSS-challenged environment. For the complex driving environment, the PPP positioning capability is significantly improved with the aiding of S-VINS. The 3D positioning accuracy is improved by 49.0% for the Global Positioning System (GPS), 40.3% for GPS+GLONASS (GLObal NAvigation Satellite System), 45.6% for GPS+BDS (BeiDou Navigation Satellite System), and 51.2% for GPS+GLONASS+BDS. On this basis, the solution with the semi-tight coupling scheme of multi-GNSS PPP/S-VINS achieves improvements of 41.8-60.6% in 3D positioning accuracy compared with the multi-GNSS PPP/INS solutions.
Keywords: multi-GNSS PPP, visual-inertial odometry, multi-sensor fusion, GNSS-challenged environment, autonomous driving
A Fast Vision-inertial Odometer Based on Line Midpoint Descriptor
12
Authors: Wen-Kuan Li, Hao-Yuan Cai, Sheng-Lin Zhao, Ya-Qian Liu, Chun-Xiu Liu. International Journal of Automation and Computing, 2021, No. 4, pp. 667-679.
Visual simultaneous localization and mapping (VSLAM) is an essential technology for realizing the autonomous movement of vehicles. Visual-inertial odometry (VIO) is often used as the front-end of VSLAM because of its rich information, light weight, and robustness. This article proposes FPL-VIO, an optimization-based fast visual-inertial odometer with points and lines. Traditional VIO mostly uses points as landmarks; meanwhile, most of the geometric structure information is ignored. Therefore, accuracy is jeopardized under motion blur and in texture-less areas. Some researchers improve accuracy by adding lines as landmarks in the system. However, almost all of them use the line segment detector (LSD) and line band descriptor (LBD) in line processing, which is very time-consuming. This article first proposes a fast line feature description and matching method based on the midpoint and compares three line detection algorithms: LSD, the fast line detector (FLD), and edge drawing lines (EDLines). Then, the measurement model of the line is introduced in detail. Finally, FPL-VIO is proposed by adding the above methods to the monocular visual-inertial state estimator (VINS-Mono), yielding an optimization-based fast visual-inertial odometer with points and with lines described by their midpoints. Compared with VIO using points and lines (PL-VIO), the line processing efficiency of FPL-VIO is increased by 3-4 times while ensuring the same accuracy.
Keywords: high efficiency, visual-inertial odometry (VIO), non-linear optimization, points and lines, sliding window
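The appeal of a midpoint-based descriptor is that one small patch per segment replaces the band of patches that LBD samples along the whole line. The sketch below is a deliberate simplification in that spirit (a normalized intensity patch at the segment midpoint, matched by nearest-neighbour distance); it is not FPL-VIO's actual descriptor, and the images, segments, and parameters are all fabricated for illustration.

```python
import numpy as np

def midpoint_descriptor(image, p1, p2, half=4):
    """Illustrative midpoint-based line descriptor: sample a small
    normalized intensity patch around the segment midpoint, avoiding the
    cost of a band-based descriptor such as LBD. p1, p2 are (x, y)."""
    mx, my = (p1[0] + p2[0]) // 2, (p1[1] + p2[1]) // 2
    patch = image[my - half:my + half + 1, mx - half:mx + half + 1].astype(np.float64)
    patch = patch - patch.mean()
    norm = np.linalg.norm(patch)
    return (patch / norm).ravel() if norm > 0 else patch.ravel()

def match(desc_a, descs_b):
    """Nearest-neighbour matching by Euclidean descriptor distance."""
    dists = [np.linalg.norm(desc_a - d) for d in descs_b]
    return int(np.argmin(dists))

rng = np.random.default_rng(2)
img = rng.integers(0, 255, (60, 60)).astype(np.uint8)
# "Next frame": the same image with mild intensity noise.
img2 = np.clip(img.astype(int) + rng.integers(-5, 6, (60, 60)), 0, 255).astype(np.uint8)

d = midpoint_descriptor(img, (10, 20), (30, 20))
candidates = [midpoint_descriptor(img2, a, b)
              for a, b in [((40, 40), (50, 50)), ((10, 20), (30, 20)), ((5, 50), (25, 50))]]
assert match(d, candidates) == 1   # the same segment in the next frame wins
```

A real system would also gate candidates geometrically (predicted reprojection from the VIO pose) before comparing descriptors, so the nearest-neighbour search stays cheap.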