Journal Articles
31 articles found (results 1-20 shown)
1. Novel camera calibration method based on invariance of collinear points and pole-polar constraint
Authors: WEI Liang, ZHANG Guiyang, HUO Ju, XUE Muyao. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2023, No. 3, pp. 744-753 (10 pages).
To address the eccentric error of circular marks in camera calibration, a circle location method based on the invariance of collinear points and the pole-polar constraint is proposed in this paper. First, the centers of the ellipses are extracted, and the projection equation of the real concentric circle center is established by exploiting the cross-ratio invariance of collinear points. Subsequently, since the infinite lines passing through the centers of the marks are parallel, the other center projection coordinates are expressed as the solution of a system of linear equations. The projection deviation caused by using the center of the ellipse as the projection of the real circle center is thereby removed, and the corrected coordinates are used as the true image points to achieve high-precision camera calibration. As demonstrated by simulations and practical experiments, the proposed method achieves better location and calibration performance by recovering the actual center projection of the circular marks. The results confirm the precision and robustness of the proposed approach.
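The cross-ratio invariance of collinear points that this abstract relies on can be made concrete with a small numerical check. The following Python sketch is illustrative only, not the authors' code; the homography and points are made up. It shows that the cross ratio of four collinear points is unchanged when the points are mapped through an arbitrary homography, which is the property used to pin down the true projection of the circle center.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Projective cross ratio (a, b; c, d) of four collinear 2D points,
    computed from signed coordinates along the line through a and b."""
    a, b, c, d = map(np.asarray, (a, b, c, d))
    direction = b - a
    t = lambda p: float(np.dot(p - a, direction))   # signed position on the line
    ta, tb, tc, td = t(a), t(b), t(c), t(d)
    return ((tc - ta) * (td - tb)) / ((tc - tb) * (td - ta))

# Four collinear points and their images under an arbitrary homography.
H = np.array([[1.1, 0.2, 3.0],
              [0.0, 0.9, -1.0],
              [1e-3, 2e-3, 1.0]])
points = [np.array([x, 2.0 * x + 1.0]) for x in (0.0, 1.0, 2.0, 5.0)]
images = []
for p in points:
    q = H @ np.array([p[0], p[1], 1.0])
    images.append(q[:2] / q[2])

print(cross_ratio(*points), cross_ratio(*images))   # the two values agree
```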
Keywords: camera calibration; cross-ratio invariance; infinite lines; eccentricity error compensation
2. New Versatile Camera Calibration Technique Based on Linear Rectification (Cited: 2)
Authors: PAN Feng, WANG Xuanyin. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2004, No. 4, pp. 507-510 (4 pages).
A new versatile camera calibration technique for machine vision using off-the-shelf cameras is described. To handle the large distortion of off-the-shelf cameras, a new camera distortion rectification technique based on line rectification is proposed. A full camera distortion model is introduced and a linear algorithm is provided to obtain the solution. After camera rectification, the intrinsic and extrinsic parameters are obtained based on the relationship between the homography and the absolute conic. The technique requires neither a high-accuracy three-dimensional calibration block nor a complicated translation or rotation platform. Both simulations and experiments show that the method is effective and robust.
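The "relationship between the homography and the absolute conic" mentioned above is the standard Zhang-style constraint. As a hedged reference (my own numpy sketch, not the paper's algorithm, and the distortion-rectification step is omitted), each plane homography H = [h1 h2 h3] gives two linear equations on the image of the absolute conic omega = K^-T K^-1; stacking three or more views determines omega, from which the intrinsic matrix K follows by a Cholesky-type factorisation.

```python
import numpy as np

def iac_constraints(H):
    """Two linear constraints on omega = K^-T K^-1 from one homography:
    h1^T omega h2 = 0 and h1^T omega h1 - h2^T omega h2 = 0."""
    def v(i, j):
        hi, hj = H[:, i], H[:, j]
        return np.array([hi[0] * hj[0],
                         hi[0] * hj[1] + hi[1] * hj[0],
                         hi[1] * hj[1],
                         hi[2] * hj[0] + hi[0] * hj[2],
                         hi[2] * hj[1] + hi[1] * hj[2],
                         hi[2] * hj[2]])
    return np.vstack([v(0, 1), v(0, 0) - v(1, 1)])

def iac_from_homographies(homographies):
    """Stack the constraints from several views and solve for omega up to scale."""
    A = np.vstack([iac_constraints(H) for H in homographies])
    _, _, vt = np.linalg.svd(A)
    b11, b12, b22, b13, b23, b33 = vt[-1]
    return np.array([[b11, b12, b13],
                     [b12, b22, b23],
                     [b13, b23, b33]])
```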
Keywords: camera calibration; camera rectification; linear rectification
3. Camera Calibration for 3D Color Scanning System Using Virtual 3D Model (Cited: 1)
Authors: SUN Xian-bin, LI De-hua, YIN Jie, YAO Xun. Computer Aided Drafting, Design and Manufacturing, 2007, No. 2, pp. 77-81 (5 pages).
Keywords: computer vision; 3D scan; camera calibration; calibration plate
4. A Review of RGB-D Camera Calibration Methods (Cited: 1)
Authors: Chenyang ZHANG, Teng HUANG, Yueqian SHEN. Journal of Geodesy and Geoinformation Science, 2021, No. 4, pp. 11-33 (23 pages).
The RGB-D camera is a new type of sensor that can simultaneously obtain depth and texture information of an unknown 3D scene, and it has been widely applied in various fields. In practice, such applications require the RGB-D camera to be calibrated first. To the best of our knowledge, there is currently no systematic summary of RGB-D camera calibration methods, so a systematic review is presented here. First, the measurement mechanism and the principles underlying RGB-D camera calibration methods are presented. Subsequently, since some applications need to fuse depth and color information, calibration methods for the relative pose between the depth camera and the RGB camera are introduced in Section 2. Depth correction models for RGB-D cameras are then summarized and compared in Section 3. Thirdly, considering that the field of view of an RGB-D camera is small and limits some applications, calibration models for the relative pose among multiple RGB-D cameras are discussed in Section 4. Finally, directions and trends in RGB-D camera calibration are discussed.
Keywords: RGB-D camera calibration; relative pose; depth correction; multiple RGB-D cameras
5. Flexible Planar-Scene Camera Calibration Technique
Authors: ZHANG Yong-jun, ZHANG Zu-xun, ZHANG Jian-qing. Wuhan University Journal of Natural Sciences (CAS), 2003, No. 04A, pp. 1090-1096 (7 pages).
A flexible camera calibration technique using 2D-DLT and bundle adjustment with planar scenes is proposed. The equation of the principal line in the image coordinate system, represented with 2D-DLT parameters, is derived using the correspondence between the collinearity equations and 2D-DLT. A novel algorithm to obtain an initial value of the principal point is put forward. A proof of the critical motion sequences for calibration is given in detail. A practical algorithm for decomposing the exterior parameters from initial values of the principal point, the focal length and the 2D-DLT parameters is discussed at length. A planar-scene camera calibration algorithm with bundle adjustment is then addressed. Very good results have been obtained with both computer simulations and real-data calibration. The calibration results can be used in high-precision applications such as reverse engineering and industrial inspection.
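For reference, the 2D-DLT step that the abstract builds on amounts to estimating a plane-to-image homography linearly. Below is a minimal numpy sketch under that interpretation; it is not the authors' code, and their principal-line derivation and bundle adjustment are not reproduced.

```python
import numpy as np

def dlt_homography(plane_pts, image_pts):
    """2D direct linear transform: homography H with image ~ H * plane,
    estimated from four or more point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(plane_pts, image_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalise so the bottom-right entry is 1
```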
Keywords: camera calibration; 2D-DLT; bundle adjustment; planar grid; critical motion sequences; lens distortion
6. Camera calibration method for an infrared horizon sensor with a large field of view
Authors: Huajian DENG, Hao WANG, Xiaoya HAN, Yang LIU, Zhonghe JIN. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2023, No. 1, pp. 141-153 (13 pages).
Inadequate geometric accuracy of cameras is the main constraint on improving the precision of infrared horizon sensors with a large field of view (FOV). An enormous FOV with a blind area in the center greatly limits the accuracy and feasibility of traditional geometric calibration methods. A novel camera calibration method for infrared horizon sensors is presented and validated in this paper. Three infrared targets are used as control points, and the camera is mounted on a rotary table; as the table rotates, the control points become evenly distributed over the entire FOV. Compared with traditional methods that combine a collimator and a rotary table, which cannot effectively cover a large FOV and require demanding experimental equipment, this method is easier to implement and low-cost. A corresponding three-step parameter estimation algorithm is proposed to avoid precisely measuring the positions of the camera and the control points. Experiments with 10 infrared horizon sensors verify the effectiveness of the calibration method. The results show that the proposed method is highly stable and that the calibration accuracy is at least 30% higher than those of existing methods.
Keywords: infrared horizon sensor; ultra-field infrared camera; camera calibration
7. A Linear Approach for Depth and Colour Camera Calibration Using Hybrid Parameters (Cited: 4)
Authors: Ke-Li Cheng, Xuan Ju, Ruo-Feng Tong, Min Tang, Jian Chang, Jian-Jun Zhang. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2016, No. 3, pp. 479-488 (10 pages).
Many recent applications in computer graphics and human-computer interaction have adopted both colour cameras and depth cameras as input devices, so an effective calibration of both types of hardware, taking different colour and depth inputs, is required. Our approach removes the numerical difficulties of the non-linear optimization used in previous methods, which explicitly resolve the camera intrinsics as well as the transformation between the depth and colour cameras. A matrix of hybrid parameters is introduced to linearize the optimization. The hybrid parameters describe a transformation from a depth parametric space (depth camera image) to a colour parametric space (colour camera image) by combining the intrinsic parameters of the depth camera and a rotation transformation from the depth camera to the colour camera. Both the rotation transformation and the intrinsic parameters can be explicitly calculated from the hybrid parameters with the help of a standard QR factorisation. We test our algorithm with both synthesized data and real-world data, where ground-truth depth information is captured by a Microsoft Kinect. The experiments show that our approach provides calibration accuracy comparable to state-of-the-art algorithms while taking much less computation time (1/50 of Herrera's method and 1/10 of Raposo's method), thanks to the use of hybrid parameters.
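The decomposition of the hybrid parameter matrix alluded to above can be illustrated with an RQ factorisation built on numpy's QR. This is a minimal sketch, assuming the hybrid matrix has the form M = K * R with K upper triangular (intrinsics) and R a rotation; the exact parameterisation used in the paper is not reproduced, and the numbers below are placeholders.

```python
import numpy as np

def rq(M):
    """RQ factorisation M = R_upper @ Q (Q orthogonal), via QR applied to a
    row/column-reversed copy of M, with the diagonal of R_upper made positive."""
    P = np.flipud(np.eye(M.shape[0]))
    q, r = np.linalg.qr((P @ M).T)
    R_upper = P @ r.T @ P
    Q = P @ q.T
    D = np.diag(np.sign(np.diag(R_upper)))     # fix signs so diag(R_upper) > 0
    return R_upper @ D, D @ Q

# Placeholder intrinsics and a small rotation; recover both from their product.
K = np.array([[580.0, 0.0, 320.0],
              [0.0, 580.0, 240.0],
              [0.0, 0.0, 1.0]])
a = np.deg2rad(5.0)
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
K_est, R_est = rq(K @ R)
print(np.allclose(K_est, K), np.allclose(R_est, R))   # True True
```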
Keywords: camera calibration; depth camera; linear optimization; camera pair; Kinect
8. Velocity Calculation by Automatic Camera Calibration Based on Homogenous Fog Weather Condition (Cited: 4)
Authors: Hong-Jun Song, Yang-Zhou Chen, Yuan-Yuan Gao. International Journal of Automation and Computing (EI, CSCD), 2013, No. 2, pp. 143-156 (14 pages).
A novel algorithm for detecting the average velocity of vehicles through automatic and dynamic camera calibration, based on the dark channel prior under homogenous fog weather conditions, is presented in this paper. A camera fixed above the middle of the road is calibrated under homogenous fog conditions and can then be used in any weather condition. Unlike other research on velocity calculation, our traffic model includes only the road plane and the vehicles in motion. Painted lines in the scene image are neglected because traffic lanes are sometimes absent, especially in unstructured traffic scenes. Once the camera is calibrated, scene distances are obtained and used to calculate the average velocity of each vehicle. The algorithm has three major steps. First, the current video frame is classified to determine the current weather condition using an area search method (ASM); under homogenous fog, the average pixel value from top to bottom in the selected area varies in the form of an edge spread function (ESF). Second, the road surface plane is found from an activity map created by computing the expected absolute intensity difference between two adjacent frames. Finally, the scene transmission image is obtained using the dark channel prior, and the camera's intrinsic and extrinsic parameters are calculated from calibration formulas deduced from the monocular model and the transmission image; several key points on the road surface with particular transmission values are selected to generate the equations needed to calibrate the camera. Vehicle pixel coordinates are then transformed to camera coordinates, the distance between each vehicle and the camera is calculated, and the average velocity of each vehicle is obtained. Calibration results and velocity data for nine vehicles under different weather conditions are given, and comparison with other algorithms verifies the effectiveness of the proposed algorithm.
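The dark channel prior step can be stated compactly. The sketch below is a generic implementation of the prior (following He et al.'s formulation, with the usual patch size and omega defaults rather than values from this paper); the calibration equations that the paper derives from the transmission map are not reproduced.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    """Dark channel of an RGB image with values in [0, 1]: per-pixel minimum
    over the colour channels followed by a local minimum filter."""
    return minimum_filter(image.min(axis=2), size=patch)

def transmission(image, atmospheric_light, omega=0.95, patch=15):
    """Scene transmission estimate t = 1 - omega * dark_channel(I / A),
    the quantity used to relate image positions to scene distance in fog."""
    normalised = image / atmospheric_light.reshape(1, 1, 3)
    return 1.0 - omega * dark_channel(normalised, patch)
```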
Keywords: vehicle velocity calculation; homogenous fog weather condition; dark channel prior; monocular; camera calibration
9. Segment Based Camera Calibration (Cited: 2)
Authors: MA Songde, WEI Guoqing, HUANG Jinfeng. Journal of Computer Science & Technology (SCIE, EI, CSCD), 1993, No. 1, pp. 11-16 (6 pages).
The basic idea of calibrating a camera system in previous approaches is to determine the camera parameters using a set of known 3D points as the calibration reference. In this paper, we present a camera calibration method in which the camera parameters are determined by a set of 3D lines. A set of constraints on the camera parameters is derived in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras for camera location determination, in which at least 8 line correspondences are required for linear computation of the camera location. Since line segments in an image can be located more easily and more accurately than points, using lines as the calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on calibration along with stereo reconstruction are reported.
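The linear computation from line correspondences can be sketched as follows (my own illustration of the constraint, not the authors' implementation). If an image line l corresponds to a 3D line through points X1 and X2, then l^T P X = 0 for every point X on that line, so each correspondence contributes two linear equations in the entries of the 3x4 perspective transformation matrix P, and 6 lines determine P up to scale.

```python
import numpy as np

def projection_from_lines(image_lines, space_lines):
    """Linear estimate of the 3x4 projection matrix from line correspondences.
    image_lines: list of homogeneous image lines l = (a, b, c).
    space_lines: list of pairs (X1, X2) of homogeneous 4-vectors on each 3D line."""
    rows = []
    for l, (X1, X2) in zip(image_lines, space_lines):
        for X in (X1, X2):
            rows.append(np.kron(l, X))   # l^T P X written against the row-major vec of P
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)          # null-space solution, defined up to scale
```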
Keywords: camera calibration; line correspondences; perspective transformation matrix; 3D reconstruction
10. Effective Self-calibration for Camera Parameters and Hand-eye Geometry Based on Two Feature Points Motions (Cited: 2)
Authors: Jia Sun, Peng Wang, Zhengke Qin, Hong Qiao. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2017, No. 2, pp. 370-380 (11 pages).
A novel and effective self-calibration approach for robot vision is presented, which can estimate both the camera intrinsic parameters and the hand-eye transformation at the same time. The proposed calibration procedure is based on two arbitrary feature points in the environment and requires three pure translational motions and two rotational motions of the robot end-effector. New linear solution equations are deduced, and the calibration parameters are solved accurately and effectively. The proposed algorithm has been verified on simulated data with different levels of noise and disturbance. Because fewer feature points and robot motions are needed, the proposed method greatly improves the efficiency and practicality of the calibration procedure.
Keywords: camera calibration; hand-eye calibration; robot vision; two feature points
11. Optical focal plane based on MEMS light lead-in for geometric camera calibration (Cited: 2)
Authors: Jin Li, Zilong Liu. Microsystems & Nanoengineering (EI, CSCD), 2017, No. 1, pp. 52-58 (7 pages).
The focal plane of a collimator used for the geometric calibration of an optical camera is a key element in the calibration process. The traditional focal plane of a collimator has only a single-aperture light lead-in, resulting in relatively unreliable calibration accuracy. Here we demonstrate a multi-aperture micro-electro-mechanical system (MEMS) light lead-in device located at the optical focal plane of the collimator used to calibrate geometric distortion in cameras. Without additional volume or power consumption, the random errors of this calibration system are decreased by the multi-image matrix. With this new construction and a method for implementing the system, reliable high-accuracy calibration of optical cameras is guaranteed.
Keywords: camera calibration; MEMS light lead-in; robustness
12. A universal method for camera calibration in UITS scenes
Authors: CHEN Zhao-xue, SHI Peng-fei. Chinese Optics Letters (SCIE, EI, CAS, CSCD), 2005, No. 2, pp. 69-72 (4 pages).
A universal approach to camera calibration based on the features of some representative lines on the traffic ground plane is presented. It uses only a set of three parallel edges with known intervals and one of their intersecting lines with known slope to obtain the focal length and orientation parameters of a camera. A set of equations that computes the related camera parameters is derived from the geometric properties of the calibration pattern. With an exact analytical implementation, the precision of the approach is determined only by the accuracy of the calibration target selection. Experimental results on a snapshot from a real automatic visual traffic surveillance (AVTS) scene demonstrate its validity.
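The paper's closed-form equations are not reproduced here, but the first ingredient, the common vanishing point of the images of the three parallel edges, can be estimated as in the sketch below (a generic least-squares construction, assuming each image line is given as a homogeneous 3-vector).

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares common point of several image lines (homogeneous 3-vectors l,
    each satisfying l . x = 0): the right null vector of the stacked lines."""
    _, _, vt = np.linalg.svd(np.asarray(lines, dtype=float))
    v = vt[-1]
    return v / v[2]   # assumes the vanishing point is finite in the image plane
```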
Keywords: line; camera calibration; UITS scenes
13. Camera Calibration Method Based on Self-made 3D Target
Authors: Yanyu Liu, Zhibo Chen. 国际计算机前沿大会会议论文集 (International Computer Frontiers Conference Proceedings), 2021, No. 1, pp. 406-416 (11 pages).
A camera calibration algorithm based on a self-made target is proposed in this paper, which avoids the difficulty of manufacturing a high-precision 3D target. The self-made target consists of two intersecting chessboards. Using the classical scaling method, the 3D coordinates of selected points on the target are derived from the distance matrix, whose elements are the distances between every two points, obtained by measurement. The spatial location precision of the points on the target is thus ensured by measurement instead of manufacturing, which greatly reduces the production cost and the required production accuracy. Camera calibration is then completed using a 3D-target-based method. The approach can be further extended to applications where a precise target cannot be produced. The experimental results show the validity of this method.
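The "classical scaling" step, which recovers 3D coordinates (up to a rigid motion and reflection) from the measured distance matrix, is classical multidimensional scaling. A minimal numpy sketch of that standard procedure, not the authors' implementation:

```python
import numpy as np

def coords_from_distances(D, dim=3):
    """Classical multidimensional scaling: point coordinates from a symmetric
    matrix D of pairwise Euclidean distances, recovered up to a rigid transform."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:dim]              # the dim largest eigenvalues
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))
```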
Keywords: computer vision; image processing; camera calibration; 3D target
14. Constructing a Virtual Large Reference Plate with High-precision for Calibrating Cameras with Large FOV
Authors: LIU Dong, ZHANG Rui, ZHANG Jin, LI Weishi. Instrumentation, 2023, No. 2, pp. 1-8 (8 pages).
It is well known that the accuracy of camera calibration is constrained by the size of the reference plate, and it is difficult to fabricate large reference plates with high precision, so it is non-trivial to calibrate a camera with a large field of view (FOV). In this paper, a method is proposed to construct a high-precision virtual large reference plate. First, a high-precision datum plane is constructed with a laser interferometer and a one-dimensional air guideway, and the reference plate is then positioned at different locations and orientations in the FOV of the camera. The feature points of the reference plate are projected onto the datum plane to obtain a high-precision virtual large reference plate. The camera is moved to several positions to obtain different virtual reference plates, and the camera is calibrated with these virtual reference plates. The experimental results show that the mean re-projection error of the camera calibrated with the proposed method is 0.062 pixels. The length of a scale bar with a standard length of 959.778 mm was measured with a vision system composed of two calibrated cameras, and the length measurement error is 0.389 mm.
Keywords: camera calibration; large field of view; laser interferometer; virtual reference plate
15. Development and calibration of the Moon-based EUV camera for Chang'e-3 (Cited: 4)
Authors: Bo Chen, Ke-Fei Song, Zhao-Hui Li, Qing-Wen Wu, Qi-Liang Ni, Xiao-Dong Wang, Jin-Jiang Xie, Shi-Jie Liu, Ling-Ping He, Fei He, Xiao-Guang Wang, Bin Chen, Hong-Ji Zhang, Xiao-Dong Wang, Hai-Feng Wang, Xin Zheng, Shu-Lin E, Yong-Cheng Wang, Tao Yu, Liang Sun, Jin-Ling Wang, Zhi Wang, Liang Yang, Qing-Long Hu, Ke Qiao, Zhong-Su Wang, Xian-Wei Yang, Hai-Ming Bao, Wen-Guang Liu, Zhe Li, Ya Chen, Yang Gao, Hui Sun, Wen-Chang Chen. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2014, No. 12, pp. 1654-1663 (10 pages).
The process of development and calibration of the first Moon-based extreme ultraviolet (EUV) camera, built to observe Earth's plasmasphere, is introduced, and the design, test and calibration results are presented. The EUV camera is composed of a multilayer film mirror, a thin film filter, a photon-counting imaging detector, a mechanism that can adjust the pointing direction in two dimensions, a protective cover, an electronic unit and a thermal control unit. The center wavelength of the EUV camera is 30.2 nm with a bandwidth of 4.6 nm. The field of view is 14.7° with an angular resolution of 0.08°, and the sensitivity of the camera is 0.11 count s^-1 Rayleigh^-1. Geometric calibration, absolute photometric calibration and relative photometric calibration are carried out at different temperatures before launch to obtain a matrix that corrects geometric distortion and a matrix for relative photometric correction, which are used for in-orbit correction of the images to ensure their accuracy.
Keywords: Chang'e-3; EUV camera; development: calibration; Earth's plasmasphere; lunar exploration
16. Robot stereo vision calibration method with genetic algorithm and particle swarm optimization (Cited: 1)
Authors: WANG Shoukun, LI Delong, GUO Junjie, WANG Junzheng. Journal of Beijing Institute of Technology (EI, CAS), 2013, No. 2, pp. 213-221 (9 pages).
Accurate stereo vision calibration is a preliminary step towards high-precision visual positioning of a robot. Combining the characteristics of the genetic algorithm (GA) and particle swarm optimization (PSO), a three-stage calibration method based on hybrid intelligent optimization is proposed for nonlinear camera models in this paper. The motivation is to improve the accuracy of the calibration process. In this approach, stereo vision calibration is considered as an optimization problem that can be solved by the GA and PSO. Initial linear values are obtained in the first stage; in the second stage, the parameters of the two cameras are optimized separately; finally, the integrated optimized calibration of the two models is obtained in the third stage. Direct linear transformation (DLT), GA and PSO are used in the three stages respectively. It is shown that each stage finds a near-optimal solution that can be used to initialize the next stage. Simulation analysis and actual experimental results indicate that this calibration method is more accurate and robust in noisy environments than traditional calibration methods. The proposed method can fulfill the requirements of sophisticated robot visual operations.
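For reference, a minimal particle swarm optimizer of the kind used in the later stages is sketched below; in the calibration setting the objective would be the reprojection error of the stereo camera parameters. Swarm size, inertia and acceleration coefficients are common textbook defaults, not values from the paper.

```python
import numpy as np

def pso(objective, bounds, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over the box `bounds` with a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy usage: a quadratic standing in for the reprojection error.
best, err = pso(lambda p: float(np.sum((p - 1.0) ** 2)), bounds=[(-5, 5)] * 4)
print(best, err)
```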
Keywords: robot stereo vision; camera calibration; genetic algorithm (GA); particle swarm optimization (PSO); hybrid intelligent optimization
17. Panoramic camera on the Yutu lunar rover of the Chang'e-3 mission
Authors: Jian-Feng Yang, Chun-Lai Li, Bin Xue, Ping Ruan, Wei Gao, Wei-Dong Qiao, Di Lu, Xiao-Long Ma, Fu Li, Ying-Hong He, Ting Li, Xin Ren, Xing-Tao Yan. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2015, No. 11, pp. 1867-1880 (14 pages).
The Chang'e-3 panoramic camera, which is composed of two cameras with identical functions, performance and interfaces, is installed on the lunar rover mast. It can acquire 3D images of the lunar surface based on the principle of binocular stereo vision. By rotating and pitching the mast, it can take several photographs of the patrol area; after stitching these images, panoramic images of the scene are obtained. The topography and geomorphology of the patrol area and the impact crater, as well as the geological structure of the lunar surface, can thus be analyzed and studied. In addition, it can take color photographs of the lander using the Bayer color coding principle, and it can observe the working status of the lander by switching between a static image mode and a dynamic video mode with automatic exposure time. The focal length of the lens on the panoramic camera is 50 mm and the field of view is 19.7°×14.5°. Under the best illumination and viewing conditions, the largest signal-to-noise ratio of the panoramic camera is 44 dB, and its static modulation transfer function is 0.33. A large number of ground testing experiments and on-orbit imaging results show that the functional interfaces of the panoramic camera work normally, the image quality is satisfactory, and all performance parameters satisfy the design requirements.
Keywords: camera; rover; lunar; viewing; satisfactory; calibration; Bayer; correction; rotating; satisfy
18. Rigorous and integrated self-calibration model for a large-field-of-view camera using a star image
Authors: Yinhu ZHAN, Shaojie CHEN, Chao ZHANG, Ruopu WANG. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2023, No. 12, pp. 375-389 (15 pages).
This paper proposes a novel self-calibration method for a large-FoV (field-of-view) camera using real star images. First, based on the classic equisolid-angle projection model and a polynomial distortion model, the inclination of the optical axis with respect to the image plane is thoroughly considered, and a rigorous imaging model including 8 unknown intrinsic parameters is built. Second, the basic calibration equation based on star vector observations is presented. Third, the partial derivative expressions of all 11 camera parameters for linearizing the calibration equation are deduced in detail, and an iterative least-squares solution is given. Furthermore, a simulation experiment is designed, and its results show that the new model performs better than the old one. Finally, three experiments were conducted at night in central China and 671 valid star images were collected. The results indicate that the new method achieves a mean reprojection error of 0.251 pixels at a 120° FoV, improving the calibration accuracy by 38.6% compared with the old calibration model, which does not consider the inclination of the optical axis. When the FoV drops below 20°, the mean reprojection error decreases to 0.15 pixels for both the new and the old model. Since stars are used instead of manual control points, the new method realizes self-calibration, which may be significant for the long-duration navigation of vehicles in unfamiliar or extreme environments, such as Mars or Earth's Moon.
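The classic equisolid-angle projection model that the paper starts from maps an incidence angle theta to a radial image distance r = 2 f sin(theta / 2). The sketch below adds an illustrative two-term polynomial radial distortion; it is a simplification for reference only and does not include the paper's optical-axis inclination or its full 8/11-parameter model.

```python
import numpy as np

def equisolid_project(theta, phi, f, cx, cy, k=(0.0, 0.0)):
    """Map a ray with incidence angle theta and azimuth phi (radians) to pixel
    coordinates under the equisolid-angle model with polynomial radial distortion.
    f is the focal length in pixels and (cx, cy) the principal point."""
    r = 2.0 * f * np.sin(theta / 2.0)                   # ideal equisolid-angle radius
    r = r * (1.0 + k[0] * theta**2 + k[1] * theta**4)   # illustrative distortion terms
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

# A star 60 degrees off-axis imaged by a hypothetical 120-degree-FoV camera.
print(equisolid_project(np.deg2rad(60.0), np.deg2rad(30.0), f=400.0, cx=512.0, cy=512.0))
```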
Keywords: camera calibration; calibration model; imaging models; lens distortion; star image
19. AstroPose: Astronaut pose estimation using a monocular camera during extravehicular activities
Authors: LIU ZiBin, LI You, WANG ChunHui, LIU Liang, GUAN BangLei, SHANG Yang, YU QiFeng. Science China Technological Sciences (SCIE, EI, CAS, CSCD), 2024, No. 6, pp. 1933-1945 (13 pages).
With the completion of the Chinese space station, an increasing number of extravehicular activities will be executed by astronauts, which is regarded as one of the most dangerous activities in human space exploration. To guarantee the safety of astronauts and the successful accomplishment of missions, it is vital to determine the pose of astronauts during extravehicular activities. This article presents a monocular vision-based pose estimation method for astronauts during extravehicular activities, making full use of the available observation resources. First, the camera is calibrated using objects of known structure, such as the spacesuit backpack or the circular handrail outside the space station. Subsequently, pose estimation is performed using the feature points on the spacesuit. The proposed methods are validated in both synthetic and semi-physical simulation experiments, demonstrating the high precision of the camera calibration and pose estimation. To further evaluate the performance of the methods in real-world scenarios, we use image sequences of Shenzhou-13 astronauts during extravehicular activities. The experiments validate that camera calibration and pose estimation can be accomplished solely with the existing observation resources, without requiring additional complicated equipment. The motion parameters of astronauts lay the technological foundation for subsequent applications such as mechanical analysis, task planning, and ground training of astronauts.
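Pose estimation from known feature points on the spacesuit is an instance of the Perspective-n-Point problem. The sketch below is not the authors' code: it uses OpenCV's solvePnP on simulated correspondences, with placeholder 3D points, intrinsics and ground-truth pose, just to show the shape of the computation.

```python
import numpy as np
import cv2

# Hypothetical 3D feature points on the spacesuit, in a body-fixed frame (metres).
object_points = np.array([[0.00, 0.00, 0.00], [0.30, 0.00, 0.00], [0.30, 0.40, 0.00],
                          [0.00, 0.40, 0.10], [0.15, 0.20, 0.25], [0.05, 0.35, 0.20]])

K = np.array([[800.0, 0.0, 320.0],   # intrinsics assumed known from the calibration step
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                   # distortion assumed already corrected

# Simulate detections from a known pose, then recover that pose with PnP.
rvec_true = np.array([0.10, -0.20, 0.05])
tvec_true = np.array([0.10, -0.05, 2.00])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
print(ok, rvec.ravel(), tvec.ravel())   # matches rvec_true and tvec_true
```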
Keywords: monocular camera; astronaut pose estimation; camera calibration
20. High speed robust image registration and localization using optimized algorithm and its performances evaluation (Cited: 13)
Authors: Meng An, Zhiguo Jiang, Danpei Zhao. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2010, No. 3, pp. 520-526 (7 pages).
Local invariant algorithms applied to downward-looking image registration usually compute the camera's pose relative to visual landmarks. Generally, there are three requirements for image registration with these approaches: first, the algorithm should be robust to illumination changes; second, it should have low computational complexity; third, the depth information of the images needs to be estimated without other sensors. This paper investigates a well-known local invariant feature, the speeded-up robust feature (SURF), and proposes a high-speed and robust image registration and localization algorithm based on it. With support from feature tracking and pose estimation methods, the proposed algorithm can compute camera poses under different conditions of scale, viewpoint and rotation so as to precisely localize an object's position. Registration experiments with the scale-invariant feature transform (SIFT), SURF and the proposed algorithm are carried out, and a method is designed to evaluate their performance. Furthermore, an object retrieval test is performed on remote sensing video. Because there is large deformation between remote sensing frames, the registration algorithm incorporates the Kanade-Lucas-Tomasi (KLT) 3D coplanar calibration feature tracker, which can localize interesting targets precisely and efficiently. The experimental results prove that the proposed method has a higher localization speed and a lower localization error rate than traditional visual simultaneous localization and mapping (vSLAM) over a period of time.
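A generic feature-based registration pipeline of the kind the abstract describes is sketched below. ORB is used as a freely available stand-in for SURF (SURF lives in OpenCV's non-free contrib module), and the paper's KLT tracking and vSLAM comparison are not reproduced; inputs are assumed to be 8-bit grayscale frames.

```python
import numpy as np
import cv2

def register(frame_ref, frame_new):
    """Estimate the homography between two grayscale frames from matched
    local features, using RANSAC to reject outlier matches."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(frame_ref, None)
    kp2, des2 = orb.detectAndCompute(frame_new, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, int(mask.sum())            # homography and number of inliers
```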
Keywords: local invariant features; speeded-up robust feature (SURF); Harris corner; Kanade-Lucas-Tomasi (KLT) transform; coplanar camera calibration algorithm; landmarks