Journal articles — 6 results found
1. An RGB-D Camera Based Visual Positioning System for Assistive Navigation by a Robotic Navigation Aid (Cited by: 3)
Authors: He Zhang, Lingqiu Jin, Cang Ye. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, Issue 8, pp. 1389-1400 (12 pages).
There are about 253 million people with visual impairment worldwide. Many of them use a white cane and/or a guide dog as the mobility tool for daily travel. Despite decades of effort, an electronic navigation aid that can replace the white cane is still a work in progress. In this paper, we propose an RGB-D camera based visual positioning system (VPS) for real-time localization of a robotic navigation aid (RNA) in an architectural floor plan for assistive navigation. The core of the system is the combination of a new 6-DOF depth-enhanced visual-inertial odometry (DVIO) method and a particle filter localization (PFL) method. DVIO estimates the RNA's pose using data from an RGB-D camera and an inertial measurement unit (IMU). It extracts the floor plane from the camera's depth data and tightly couples the floor plane, the visual features (with and without depth data), and the IMU's inertial data in a graph optimization framework to estimate the device's 6-DOF pose. Thanks to the use of the floor plane and the depth data from the RGB-D camera, DVIO achieves better pose estimation accuracy than the conventional VIO method. To reduce the accumulated pose error of DVIO for navigation in a large indoor space, we developed the PFL method to locate the RNA in the floor plan. PFL leverages geometric information from the architectural CAD drawing of an indoor space to further reduce the error of the DVIO-estimated pose. Based on the VPS, an assistive navigation system is developed for the RNA prototype to assist a visually impaired person in navigating a large indoor space. Experimental results demonstrate that: 1) the DVIO method achieves better pose estimation accuracy than the state-of-the-art VIO method and performs real-time pose estimation (18 Hz pose update rate) on a UP Board computer; 2) PFL reduces the DVIO-accrued pose error by 82.5% on average and allows for accurate wayfinding (endpoint position error ≤ 45 cm) in large indoor spaces.
Keywords: assistive navigation; pose estimation; robotic navigation aid (RNA); simultaneous localization and mapping; visual-inertial odometry; visual positioning system (VPS)
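The particle filter localization (PFL) step summarized in this abstract follows the generic predict-weight-resample cycle. The sketch below is not the authors' implementation; it assumes a hypothetical distance_to_nearest_wall() helper derived from the floor-plan geometry and a single wall-range measurement from the RGB-D camera, purely to illustrate how floor-plan constraints can correct accumulated odometry drift.

import numpy as np

# Minimal particle filter localization sketch (illustrative, not the paper's PFL).
# particles: (N, 3) array of [x, y, heading]; weights: (N,) array.
# distance_to_nearest_wall(x, y, heading) is an assumed helper that queries
# the floor-plan geometry; range_meas is a measured range to the nearest wall.
def pfl_step(particles, weights, odom_delta, range_meas,
             distance_to_nearest_wall, motion_noise=0.05, meas_sigma=0.3):
    n = len(particles)
    dx, dy, dtheta = odom_delta

    # 1) Predict: propagate each particle with the odometry increment plus noise.
    particles[:, 0] += dx + np.random.randn(n) * motion_noise
    particles[:, 1] += dy + np.random.randn(n) * motion_noise
    particles[:, 2] += dtheta + np.random.randn(n) * 0.01

    # 2) Weight: compare the expected wall range (from the floor plan)
    #    against the measured range with a Gaussian likelihood.
    expected = np.array([distance_to_nearest_wall(x, y, th)
                         for x, y, th in particles])
    weights *= np.exp(-0.5 * ((expected - range_meas) / meas_sigma) ** 2)
    weights += 1e-300                      # avoid an all-zero weight vector
    weights /= weights.sum()

    # 3) Resample when the effective sample size drops below half the particles.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)

    # Pose estimate: weighted mean of the particle set.
    return particles, weights, np.average(particles, axis=0, weights=weights)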
2. A Visual Indoor Localization Method Based on Efficient Image Retrieval
Authors: Mengyan Lyu, Xinxin Guo, Kunpeng Zhang, Liye Zhang. Journal of Computer and Communications, 2024, Issue 2, pp. 47-66 (20 pages).
Indoor visual localization, which uses camera imagery to compute the user's pose, is a core component of Augmented Reality (AR) and Simultaneous Localization and Mapping (SLAM). Existing indoor localization technologies generally rely on scene-specific 3D representations or are trained on specific datasets, making it difficult to balance accuracy and cost when they are applied to new scenes. To address this issue, this paper proposes a universal indoor visual localization method based on efficient image retrieval. First, a Multi-Layer Perceptron (MLP) is employed to aggregate features from the intermediate layers of a convolutional neural network into a global representation of the image, which enables accurate and rapid retrieval of reference images. Then, a new mechanism based on Random Sample Consensus (RANSAC) is designed to resolve the relative-pose ambiguity that arises when decomposing the essential matrix estimated with the five-point method. Finally, the absolute pose of the query image is computed, yielding the indoor user's pose estimate. The proposed method is simple, flexible, and generalizes well across scenes. Experimental results show a positioning error of 0.09 m and 2.14° on the 7Scenes dataset, and 0.15 m and 6.37° on the 12Scenes dataset, demonstrating the strong performance of the proposed indoor localization method.
Keywords: visual indoor positioning; feature point matching; image retrieval; position calculation; five-point method
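The five-point essential-matrix step mentioned above has a standard OpenCV counterpart. The sketch below is not the paper's specific RANSAC mechanism; it only shows how the essential matrix is estimated with RANSAC and how the four-fold decomposition ambiguity is resolved by a cheirality check, assuming matched keypoints pts_query/pts_ref and a known intrinsic matrix K.

import cv2
import numpy as np

# Sketch: relative pose between a query image and a retrieved reference image.
# pts_query, pts_ref: Nx2 float32 arrays of matched keypoint coordinates.
# K: 3x3 camera intrinsic matrix (assumed known).
def relative_pose(pts_query, pts_ref, K):
    # Estimate the essential matrix with the five-point method inside RANSAC.
    E, inlier_mask = cv2.findEssentialMat(pts_query, pts_ref, K,
                                          method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
    # recoverPose resolves the 4-way (R, t) ambiguity of the decomposition by
    # keeping the solution that places triangulated points in front of both
    # cameras (cheirality check); t is recovered only up to scale.
    _, R, t, _ = cv2.recoverPose(E, pts_query, pts_ref, K, mask=inlier_mask)
    return R, t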
3. Research on DSO vision positioning technology based on binocular stereo panoramic vision system (Cited by: 1)
Authors: Xiao-dong Guo, Zhou-bo Wang, Wei Zhu, Guang He, Hong-bin Deng, Cai-xia Lv, Zhen-hai Zhang. Defence Technology (防务技术) (SCIE, EI, CAS, CSCD), 2022, Issue 4, pp. 593-603 (11 pages).
In the visual positioning of an Unmanned Ground Vehicle (UGV), visual odometry based on the direct sparse method (DSO) has the advantages of a small amount of computation, high real-time performance, and high robustness, so it is more widely used than visual odometry based on the feature point method. Ordinary vision sensors have a narrower viewing angle than panoramic vision sensors, so fewer landmarks appear in a single image frame; this results in poor landmark tracking and positioning capability and severely restricts the development of visual odometry. Based on these considerations, this paper proposes a binocular stereo panoramic vision positioning algorithm based on an extended DSO, which addresses these problems well. The experimental results show that the proposed algorithm can directly obtain the panoramic depth image around the UGV, greatly improving the accuracy and robustness of visual positioning compared with ordinary visual odometry. It has wide application prospects in the UGV field.
Keywords: panoramic vision; DSO; visual positioning
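The depth image referred to above comes from binocular stereo matching. The following sketch shows only the ordinary rectified-stereo case with assumed calibration values (focal length in pixels, baseline in metres), not the paper's extended panoramic pipeline.

import cv2
import numpy as np

# Sketch: depth from a rectified binocular stereo pair (not the paper's
# panoramic pipeline). focal_px and baseline_m are assumed calibration values.
def stereo_depth(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,   # must be a multiple of 16
                                    blockSize=5)
    # StereoSGBM returns a fixed-point disparity map scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan          # mask invalid matches
    # Triangulation: depth = focal_length * baseline / disparity.
    return focal_px * baseline_m / disparity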
4. Uncalibrated Workpiece Positioning Method for Peg-in-hole Assembly Using an Industrial Robot (Cited by: 1)
Authors: Ming CONG, Fukang ZHU, Dong LIU, Yu DU. Instrumentation, 2019, Issue 4, pp. 26-36 (11 pages).
This paper proposes an uncalibrated workpiece positioning method for peg-in-hole assembly using an industrial robot. Depth images are used to identify and locate the workpieces when a peg-in-hole assembly task is carried out by an industrial robot in a flexible production system. First, the depth image is thresholded according to the depth data of the workpiece surface to filter out background interference. Second, a series of image processing and feature recognition algorithms is executed to extract the outer contour features and locate the center point; this image information, fed by the vision system, drives the robot to an approximate position. Finally, the Hough circle detection algorithm is applied to the color image to extract the features and parameters of the circular hole where the assembly takes place, enabling accurate positioning. Experimental results show that the positioning accuracy of the method is between 0.6 and 1.2 mm in the experimental system used. The entire positioning process does not require complicated calibration, and the method is highly flexible. It is suitable for automatic assembly tasks with multiple specifications or small batches in a flexible production system.
Keywords: uncalibrated workpiece positioning; industrial robot; visual positioning; peg-in-hole assembly
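The coarse-to-fine pipeline in this abstract maps onto standard OpenCV primitives. The sketch below is illustrative only; the depth thresholds and Hough parameters are assumptions, not the authors' values.

import cv2
import numpy as np

# Sketch of the coarse-to-fine pipeline: depth thresholding -> contour
# centroid (coarse position) -> Hough circle on the color image (fine
# position). All threshold/parameter values below are assumptions.
def locate_hole(depth_mm, color_bgr, workpiece_min=400, workpiece_max=600):
    # 1) Keep only pixels whose depth lies on the workpiece surface.
    mask = np.uint8((depth_mm > workpiece_min) & (depth_mm < workpiece_max)) * 255

    # 2) Outer contour of the workpiece and its centroid (coarse position).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # 3) Hough circle detection on the color image for the assembly hole.
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(cv2.medianBlur(gray, 5), cv2.HOUGH_GRADIENT,
                               dp=1, minDist=50, param1=100, param2=30,
                               minRadius=10, maxRadius=80)
    hole = circles[0, 0] if circles is not None else None   # (x, y, radius)
    return (cx, cy), hole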
5. Automatic UAV Positioning with Encoded Sign as Cooperative Target
Authors: Xu Zhongxiong, Shao Guiwei, Wu Liang, Xie Yuxing, Ji Zheng. Transactions of Nanjing University of Aeronautics and Astronautics (EI, CSCD), 2017, Issue 6, pp. 669-679 (11 pages).
To enable an unmanned aerial vehicle (UAV) to position itself automatically during power-line inspection, a visual positioning method that uses an encoded sign as a cooperative target is proposed. First, we discuss how to design the encoded sign and propose a robust contour-based decoding algorithm. Second, the AdaBoost algorithm is used to train a classifier that detects the encoded sign in an image. Finally, the position of the UAV is calculated from the projective relation between the object points and their corresponding image points. The experiments have two parts. First, simulated video data are used to verify the feasibility of the proposed method; the results show that the average absolute error in each direction is below 0.02 m. Second, a video acquired from an actual UAV flight is used to compute the UAV's position; the calculated trajectory is consistent with the actual flight path. The method runs at 0.153 s per frame.
Keywords: unmanned aerial vehicle (UAV); cooperative target; encoded sign; visual positioning
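Computing the UAV position from the projective relation between known sign points and their image projections is the classical Perspective-n-Point problem. The sketch below uses OpenCV's generic solvePnP rather than the paper's specific formulation; the 3-D sign-corner coordinates, detected image points, and camera intrinsics are assumed inputs.

import cv2
import numpy as np

# Sketch: camera (UAV) position from known 3-D points on the encoded sign and
# their detected 2-D image projections (generic PnP, not the paper's method).
def uav_position(sign_points_3d, image_points_2d, K, dist_coeffs=None):
    dist_coeffs = np.zeros(5) if dist_coeffs is None else dist_coeffs
    ok, rvec, tvec = cv2.solvePnP(sign_points_3d.astype(np.float32),
                                  image_points_2d.astype(np.float32),
                                  K, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)           # rotation vector -> rotation matrix
    # Camera centre expressed in the sign's (world) frame: C = -R^T * t.
    return (-R.T @ tvec).ravel()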
6. Multi-Modal Scene Matching Location Algorithm Based on M2Det
Authors: Jiwei Fan, Xiaogang Yang, Ruitao Lu, Qingge Li, Siyu Wang. Computers, Materials & Continua (SCIE, EI), 2023, Issue 10, pp. 1031-1052 (22 pages).
In recent years, many visual positioning algorithms based on computer vision have been proposed and have achieved good results. However, these algorithms typically serve a single function, cannot perceive the environment, generalize poorly, and suffer from mismatches that degrade positioning accuracy. This paper therefore proposes a localization algorithm that combines a target recognition algorithm with a deep feature matching algorithm to address unmanned aerial vehicle (UAV) environment perception and multi-modal image-matching fusion localization. The algorithm is based on the single-shot object detector with a multi-level feature pyramid network (M2Det) and replaces the original visual geometry group (VGG) feature extraction network with ResNet-101 to improve the feature extraction capability of the network model. By introducing a deep feature matching algorithm that shares the neural network weights, the algorithm realizes a combined design for UAV target recognition and multi-modal image-matching fusion positioning. When the reference image and the real-time image are mismatched, a dynamic adaptive proportional constraint with the random sample consensus algorithm (DAPC-RANSAC) is used to optimize the matching results and improve the rate of correct matches. The proposed algorithm was compared and analyzed on a multi-modal registration dataset to verify its superiority and feasibility. The results show that it can effectively handle matching between multi-modal images (visible-infrared, infrared-satellite, and visible-satellite) and remains stable and robust under changes in contrast, scale, brightness, blur, and deformation. Finally, the effectiveness and practicality of the proposed algorithm were verified in an aerial test with an S1000 six-rotor UAV.
Keywords: visual positioning; multi-modal scene matching; unmanned aerial vehicle
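The mismatch-removal stage described above builds on RANSAC-based geometric verification. The sketch below shows plain ratio-test filtering plus a RANSAC homography, not the paper's DAPC-RANSAC variant; kp_ref/kp_live and desc_ref/desc_live are assumed to be keypoints and descriptors already extracted from the reference and real-time images.

import cv2
import numpy as np

# Sketch: generic outlier rejection for cross-modal feature matches.
# This is plain ratio-test + RANSAC homography, not DAPC-RANSAC.
# kp_ref/kp_live: lists of cv2.KeyPoint; desc_ref/desc_live: float32 descriptors.
def verified_matches(kp_ref, desc_ref, kp_live, desc_live, ratio=0.8):
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(desc_ref, desc_live, k=2)
    # Lowe's ratio test removes ambiguous nearest-neighbour matches.
    good = [m for m, n in candidates if m.distance < ratio * n.distance]
    if len(good) < 4:
        return [], None
    src = np.float32([kp_ref[m.queryIdx].pt for m in good])
    dst = np.float32([kp_live[m.trainIdx].pt for m in good])
    # RANSAC homography keeps only geometrically consistent matches.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return [], None
    inliers = [m for m, keep in zip(good, inlier_mask.ravel()) if keep]
    return inliers, H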