To address the eccentric error of circular marks in camera calibration, a circle location method based on the invariance of collinear points and the pole–polar constraint is proposed in this paper. First, the centers of the ellipses are extracted, and the projection equation of the true concentric circle center is established by exploiting the cross-ratio invariance of collinear points. Subsequently, since the infinite lines passing through the centers of the marks are parallel, the remaining center projection coordinates are obtained by solving systems of linear equations. This addresses the projection deviation caused by using the ellipse center as the projection of the true circle center, and the results are used as the true image points to achieve high-precision camera calibration. Simulations and practical experiments demonstrate that the proposed method delivers better location and calibration performance by recovering the actual center projection of the circular marks, confirming its precision and robustness.
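The cross-ratio invariance this abstract relies on is easy to demonstrate numerically. The sketch below is illustrative only; the specific points and the 1-D homography are arbitrary choices, not taken from the paper. It checks that the cross ratio of four collinear points survives a projective map:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (AC/BC)/(AD/BD) of four collinear points given by scalar positions."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

# Four collinear points written as homogeneous coordinates (x, w) on a line.
pts = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0], [4.0, 1.0]])

# An arbitrary non-degenerate 1-D projective map (2x2 homography of the line).
H = np.array([[2.0, 1.0],
              [0.5, 3.0]])
mapped = (H @ pts.T).T
xs = mapped[:, 0] / mapped[:, 1]          # dehomogenise

cr_before = cross_ratio(0.0, 1.0, 2.0, 4.0)
cr_after = cross_ratio(*xs)
assert abs(cr_before - cr_after) < 1e-9   # the cross ratio survives the projective map
```

Because the cross ratio is preserved, four collinear control points in the scene pin down the true center projection even though ellipse centers themselves shift under perspective.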
A new versatile camera calibration technique for machine vision using off-the-shelf cameras is described. To handle the large distortion of off-the-shelf cameras, a new distortion rectification technique based on line rectification is proposed. A full camera distortion model is introduced, and a linear algorithm is provided to obtain its solution. After rectification, the intrinsic and extrinsic parameters are obtained from the relationship between the homography and the absolute conic. This technique requires neither a high-accuracy three-dimensional calibration block nor a complicated translation or rotation platform. Both simulations and experiments show that the method is effective and robust.
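Recovering intrinsics from the homography–absolute-conic relationship follows a standard linear construction (as in Zhang's plane-based method). The sketch below is a generic illustration of that construction, not this paper's exact algorithm; the synthetic intrinsics and poses are invented for the demo.

```python
import numpy as np

def v_ij(H, i, j):
    """Row of the linear system: coefficient vector of h_i^T * omega * h_j,
    where omega = K^-T K^-1 is the image of the absolute conic."""
    a, b = H[:, i], H[:, j]
    return np.array([a[0]*b[0],
                     a[0]*b[1] + a[1]*b[0],
                     a[1]*b[1],
                     a[2]*b[0] + a[0]*b[2],
                     a[2]*b[1] + a[1]*b[2],
                     a[2]*b[2]])

def intrinsics_from_homographies(Hs):
    """Linear intrinsic calibration from >= 3 plane-to-image homographies."""
    rows = []
    for H in Hs:
        rows.append(v_ij(H, 0, 1))                  # h1^T w h2 = 0
        rows.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))  # h1^T w h1 = h2^T w h2
    _, _, Vt = np.linalg.svd(np.array(rows))
    b = Vt[-1]
    B = np.array([[b[0], b[1], b[3]],
                  [b[1], b[2], b[4]],
                  [b[3], b[4], b[5]]])
    if B[0, 0] < 0:                                 # fix the arbitrary sign from the SVD
        B = -B
    L = np.linalg.cholesky(B)                       # B ~ K^-T K^-1
    K = np.linalg.inv(L.T)
    return K / K[2, 2]

def rodrigues(axis, theta):
    """Rotation matrix from axis-angle (Rodrigues' formula)."""
    a = np.asarray(axis, float); a = a / np.linalg.norm(a)
    S = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * S + (1 - np.cos(theta)) * (S @ S)

# Synthetic check: build exact homographies H = K [r1 r2 t] and recover K.
K_true = np.array([[800.0, 0.0, 320.0], [0.0, 780.0, 240.0], [0.0, 0.0, 1.0]])
views = [([1, 0, 0], 0.4, [0.1, 0.2, 3.0]),
         ([0, 1, 0], 0.5, [-0.2, 0.1, 4.0]),
         ([1, 1, 0], 0.3, [0.3, -0.1, 5.0])]
Hs = []
for axis, theta, t in views:
    R = rodrigues(axis, theta)
    Hs.append(K_true @ np.column_stack([R[:, 0], R[:, 1], t]))
K_est = intrinsics_from_homographies(Hs)
```

Each view contributes two linear constraints on the six entries of the conic image, so three generically oriented views suffice for the full intrinsic matrix.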
The RGB-D camera is a new type of sensor that can simultaneously obtain depth and texture information in an unknown 3D scene, and such cameras have been widely applied in various fields. In practice, implementing these applications requires the RGB-D camera to be calibrated first. To the best of our knowledge, no systematic summary of RGB-D camera calibration methods exists at present, so a systematic review of RGB-D camera calibration is presented as follows. First, the measurement mechanism and the principles underlying RGB-D camera calibration methods are presented. Subsequently, since some applications need to fuse depth and color information, methods for calibrating the relative pose between the depth camera and the RGB camera are introduced in Section 2. The depth-correction models within RGB-D cameras are then summarized and compared in Section 3. Third, considering that the field of view of an RGB-D camera is small and limits some applications, calibration models for the relative pose among multiple RGB-D cameras are discussed in Section 4. Finally, the directions and trends of RGB-D camera calibration are discussed and conclusions are drawn.
A flexible camera calibration technique using 2D-DLT and bundle adjustment with planar scenes is proposed. The equation of the principal line in the image coordinate system, expressed with 2D-DLT parameters, is derived using the correspondence between the collinearity equations and 2D-DLT. A novel algorithm to obtain an initial value for the principal point is put forward, and a proof of the critical motion sequences for calibration is given in detail. A practical algorithm for decomposing the exterior parameters from initial values of the principal point, the focal length, and the 2D-DLT parameters is discussed at length, and a planar-scene camera calibration algorithm with bundle adjustment is presented. Very good results have been obtained with both computer simulations and real calibration data. The calibration results can be used in high-precision applications such as reverse engineering and industrial inspection.
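The 2D-DLT the abstract builds on maps planar scene points to image points through a 3x3 homography estimated linearly. Below is a minimal sketch of that estimation step; the point values are invented for the demo, and the paper's full pipeline additionally initializes the principal point and runs bundle adjustment.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (dst ~ H @ src in homogeneous coordinates) from >= 4
    point correspondences on a plane, via the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, float))
    H = Vt[-1].reshape(3, 3)        # null vector of the stacked constraints
    return H / H[2, 2]

# Synthetic check: recover a known homography from exact correspondences.
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, -3.0], [0.001, 0.002, 1.0]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 3.0]])
proj = (H_true @ np.hstack([src, np.ones((len(src), 1))]).T).T
dst = proj[:, :2] / proj[:, 2:3]
H_est = homography_dlt(src, dst)
```

Each correspondence yields two homogeneous linear equations, so four non-degenerate points already determine the eight degrees of freedom of H.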
Inadequate geometric accuracy of cameras is the main constraint on improving the precision of infrared horizon sensors with a large field of view (FOV). An enormous FOV with a blind area in the center greatly limits the accuracy and feasibility of traditional geometric calibration methods. A novel camera calibration method for infrared horizon sensors is presented and validated in this paper. Three infrared targets are used as control points, and the camera is mounted on a rotary table; as the table rotates, the control points become evenly distributed across the entire FOV. Compared with traditional methods that combine a collimator and a rotary table, which cannot effectively cover a large FOV and require demanding experimental equipment, this method is easier to implement and has a low cost. A corresponding three-step parameter estimation algorithm is proposed that avoids precisely measuring the positions of the camera and the control points. Experiments on 10 infrared horizon sensors verify the effectiveness of the calibration method. The results show that the proposed method is highly stable and that its calibration accuracy is at least 30% higher than those of existing methods.
Many recent applications in computer graphics and human-computer interaction adopt both colour cameras and depth cameras as input devices, so an effective joint calibration of the two kinds of hardware is required. Our approach removes the numerical difficulties of the non-linear optimization used in previous methods, which explicitly solve for the camera intrinsics as well as the transformation between the depth and colour cameras. A matrix of hybrid parameters is introduced to linearize the optimization. The hybrid parameters encode a transformation from the depth parametric space (the depth camera image) to the colour parametric space (the colour camera image) by combining the intrinsic parameters of the depth camera with the rotation from the depth camera to the colour camera. Both the rotation and the intrinsic parameters can be recovered explicitly from the hybrid parameters with a standard QR factorisation. We test our algorithm on both synthesized data and real-world data in which ground-truth depth is captured by a Microsoft Kinect. The experiments show that our approach matches the calibration accuracy of state-of-the-art algorithms while taking much less computation time (1/50 of Herrera's method and 1/10 of Raposo's method), owing to the hybrid parameters.
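The claim that rotation and intrinsics can be read off a hybrid matrix via QR factorisation uses the standard trick of turning QR into an RQ decomposition: an upper-triangular intrinsic factor times an orthogonal rotation. A generic sketch follows (the numerical K and R below are invented, not the paper's data):

```python
import numpy as np

def rq(M):
    """RQ decomposition M = U @ Q with U upper-triangular (positive diagonal)
    and Q orthogonal, built from NumPy's QR via row/column reversal."""
    P = np.fliplr(np.eye(3))          # permutation reversing row/column order
    Q0, R0 = np.linalg.qr((P @ M).T)  # (P M)^T = Q0 R0
    U = P @ R0.T @ P                  # upper-triangular factor
    Q = P @ Q0.T
    S = np.diag(np.sign(np.diag(U)))  # absorb signs so diag(U) > 0
    return U @ S, S @ Q

# Synthetic check: split M = K @ R back into intrinsics and rotation.
K = np.array([[600.0, 2.0, 320.0], [0.0, 590.0, 240.0], [0.0, 0.0, 1.0]])
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(0.2), -np.sin(0.2)],
               [0.0, np.sin(0.2), np.cos(0.2)]])
R = Rz @ Rx
K_est, R_est = rq(K @ R)
```

Because the upper-triangular/orthogonal split with a positive diagonal is unique, the product K @ R decomposes back into exactly its two factors.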
A novel algorithm is presented for detecting the average velocity of vehicles through automatic, dynamic camera calibration based on the dark channel prior under homogeneous fog. A camera fixed in the middle of the road is calibrated once under homogeneous fog and can then be used in any weather condition. Unlike other work on velocity estimation, our traffic model includes only the road plane and the vehicles in motion; painted lane markings are ignored, because traffic lanes are sometimes absent, especially in unstructured traffic scenes. Once the camera is calibrated, scene distances are recovered and used to compute the average velocity of each vehicle. The algorithm has three major steps. First, the current video frame is classified to identify the weather condition using an area search method (ASM): under homogeneous fog, the average pixel value from top to bottom of the selected area varies as an edge spread function (ESF). Second, the road surface plane is found from an activity map, computed as the expected absolute intensity difference between two adjacent frames. Finally, the scene transmission image is obtained with the dark channel prior, and the camera's intrinsic and extrinsic parameters are computed from a calibration formula derived from the monocular model and the transmission image; in this step, several key points on the road surface with particular transmission values are selected to generate the necessary calibration equations. Vehicle pixel coordinates are then transformed into camera coordinates, the distances between the vehicles and the camera are computed, and the average velocity of each vehicle is obtained. Finally, calibration results and velocity data for nine vehicles in different weather conditions are reported. Comparison with other algorithms verifies the effectiveness of the proposed approach.
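The dark channel prior used above is simple to state: in haze-free regions, at least one colour channel is near zero somewhere in every local patch, so the filtered minimum over channels and neighbourhoods estimates haze. A minimal sketch (naive min-filter loop for clarity; `omega = 0.95` is the conventional haze-retention factor, an assumption here rather than the paper's value):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over colour channels followed by a local min filter
    (naive O(h*w*patch^2) loop, patch assumed odd)."""
    mins = img.min(axis=2)
    r = patch // 2
    padded = np.pad(mins, r, mode='edge')
    out = np.empty_like(mins)
    h, w = mins.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, airlight, omega=0.95, patch=15):
    """Coarse transmission map t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / airlight, patch)

# A flat scene whose darkest channel is 0.7, plus a bright 'white' patch.
img = np.empty((20, 20, 3))
img[..., 0], img[..., 1], img[..., 2] = 0.8, 0.9, 0.7
img[8:13, 8:13, :] = 1.0
dc = dark_channel(img, patch=5)
```

Pixels whose dark channel stays high (like the white patch) are exactly where the prior misfires, which is why key points for calibration must be chosen on the road surface.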
The basic idea of previous approaches to calibrating a camera system is to determine the camera parameters from a set of known 3D points used as a calibration reference. In this paper, we present a camera calibration method in which the camera parameters are determined from a set of 3D lines. A set of constraints on the camera parameters is derived in terms of perspective line mapping, and from these constraints the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras for camera location determination, in which at least 8 line correspondences are required for a linear computation of the camera location. Since line segments in an image can be located more easily and accurately than points, using lines as the calibration reference eases the image preprocessing and improves the calibration accuracy. Experimental results on calibration and stereo reconstruction are reported.
A novel and effective self-calibration approach for robot vision is presented that estimates both the camera intrinsic parameters and the hand-eye transformation at the same time. The proposed calibration procedure is based on two arbitrary feature points in the environment and requires three pure translational motions and two rotational motions of the robot end-effector. New linear solution equations are derived, and the calibration parameters are solved accurately and efficiently. The proposed algorithm has been verified on simulated data with different levels of noise and disturbance. Because it needs fewer feature points and robot motions, the proposed method greatly improves the efficiency and practicality of the calibration procedure.
The focal plane of the collimator used for the geometric calibration of an optical camera is a key element of the calibration process. The traditional collimator focal plane has only a single-aperture light lead-in, resulting in relatively unreliable calibration accuracy. Here we demonstrate a multi-aperture micro-electro-mechanical-system (MEMS) light lead-in device located at the optical focal plane of the collimator used to calibrate geometric distortion in cameras. Without additional volume or power consumption, the random errors of this calibration system are reduced by the multi-image matrix. With this new construction and a method for implementing the system, reliable high-accuracy calibration of optical cameras is guaranteed.
A universal approach to camera calibration based on the features of representative lines on the traffic ground is presented. It uses only a set of three parallel edges with known intervals, together with one line intersecting them with known slope, to obtain the focal length and orientation parameters of a camera. A set of equations computing the related camera parameters is derived from the geometric properties of the calibration pattern. With an exact analytical implementation, the precision of the approach is determined only by the accuracy with which the calibration target is selected. Experimental results on a snapshot from real automatic visual traffic surveillance (AVTS) scenes demonstrate its validity.
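The geometry behind this kind of roadside calibration is that the images of the three parallel edges meet in a vanishing point, from which orientation follows, and (given a second vanishing point of an orthogonal direction and a known principal point) so does the focal length. A hedged sketch of those two primitives — the numbers are invented and this is not the paper's exact equation set:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(lines):
    """Least-squares common intersection of homogeneous lines
    (null vector of the stacked line matrix)."""
    _, _, Vt = np.linalg.svd(np.array(lines, float))
    v = Vt[-1]
    return v[:2] / v[2]

def focal_from_orthogonal_vps(v1, v2, pp):
    """f^2 = -(v1 - pp).(v2 - pp) for vanishing points of two orthogonal
    directions, assuming square pixels and principal point pp."""
    return np.sqrt(-np.dot(np.subtract(v1, pp), np.subtract(v2, pp)))

# Three image lines all concurrent at (8, 4): their vanishing point.
lines = [line_through((0, 0), (2, 1)),
         line_through((0, 2), (4, 3)),
         line_through((0, 4), (4, 4))]
vp = vanishing_point(lines)

# Focal length from two orthogonal vanishing points (f = 500, pp = (320, 240)).
f = focal_from_orthogonal_vps((820.0, 240.0), (-180.0, 240.0), (320.0, 240.0))
```

Representing lines and points homogeneously makes "intersection" and "join" both plain cross products, which is what lets the paper's equation set stay analytical.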
A camera calibration algorithm based on a self-made target is proposed in this paper, which avoids the difficulty of manufacturing a high-precision 3D target. The self-made target consists of two intersecting chessboards. Using the classic scaling method, the 3D coordinates of selected points on the target are derived from the distance matrix, whose elements are the distances between every pair of points, obtained by measurement. The spatial location precision of the target points is thus ensured by measurement instead of manufacturing, which greatly reduces the production cost and the required production accuracy. Camera calibration is then completed with a 3D-target-based method. The approach can be further extended to applications where a target cannot be manufactured. Experimental results show the validity of the method.
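Recovering 3D point coordinates from a matrix of pairwise distances, as described above, is the classical multidimensional scaling problem. Whether the paper's "classic scale method" is exactly this procedure is our reading of the abstract, so treat the sketch as illustrative:

```python
import numpy as np

def points_from_distances(D, dim=3):
    """Classical MDS: coordinates (up to a rigid motion) whose pairwise
    distances reproduce D, via double-centering and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # Gram matrix of centred points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]           # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def pairwise(X):
    """Euclidean distance matrix of a point set."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

# Check: distances of the recovered points reproduce the measured matrix.
X = np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.], [0., 0., 3.], [1., 1., 1.]])
D = pairwise(X)
Y = points_from_distances(D)
```

The recovered coordinates differ from the originals only by a rotation, reflection and translation, which is irrelevant for calibration since the target defines its own coordinate frame.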
It is well known that the accuracy of camera calibration is constrained by the size of the reference plate, and it is difficult to fabricate large reference plates with high precision; it is therefore non-trivial to calibrate a camera with a large field of view (FOV). In this paper, a method is proposed to construct a virtual large reference plate with high precision. First, a high-precision datum plane is constructed with a laser interferometer and a one-dimensional air guideway, and the reference plate is positioned at different locations and orientations in the FOV of the camera. The feature points of the reference plate are projected onto the datum plane to obtain a high-precision virtual large reference plate. The camera is moved to several positions to obtain different virtual reference plates, and the camera is calibrated with them. The experimental results show that the mean re-projection error of a camera calibrated with the proposed method is 0.062 pixels. The length of a scale bar with a standard length of 959.778 mm was measured with a vision system composed of two calibrated cameras, and the length measurement error is 0.389 mm.
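The 0.062-pixel figure quoted above is a mean re-projection error. As a reference for how that metric is computed (the projection matrix and points below are invented):

```python
import numpy as np

def mean_reprojection_error(P, X, x_obs):
    """Mean pixel distance between projected 3D points and observed image points."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]         # dehomogenise to pixels
    return np.linalg.norm(proj - x_obs, axis=1).mean()

K = np.array([[1000.0, 0.0, 500.0], [0.0, 1000.0, 400.0], [0.0, 0.0, 1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera at the origin
X = np.array([[0.1, 0.2, 2.0], [-0.3, 0.1, 3.0], [0.2, -0.2, 4.0]])
x_exact = (P @ np.hstack([X, np.ones((3, 1))]).T).T
x_exact = x_exact[:, :2] / x_exact[:, 2:3]
err_clean = mean_reprojection_error(P, X, x_exact)
err_shifted = mean_reprojection_error(P, X, x_exact + [0.5, 0.0])
```

Shifting every observation by half a pixel raises the metric to exactly 0.5, which gives a feel for the sub-pixel scale of the reported result.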
The development and calibration of the first Moon-based extreme ultraviolet (EUV) camera for observing Earth's plasmasphere are introduced, and the design, test and calibration results are presented. The EUV camera is composed of a multilayer-film mirror, a thin-film filter, a photon-counting imaging detector, a mechanism that can adjust the pointing direction in two dimensions, a protective cover, an electronic unit and a thermal-control unit. The center wavelength of the EUV camera is 30.2 nm with a bandwidth of 4.6 nm. The field of view is 14.7° with an angular resolution of 0.08°, and the sensitivity of the camera is 0.11 count s^-1 Rayleigh^-1. Geometric calibration, absolute photometric calibration and relative photometric calibration are carried out at different temperatures before launch to obtain a matrix that corrects geometric distortion and a matrix for relative photometric correction, which are used for the in-orbit correction of images to ensure their accuracy.
Accurate stereo vision calibration is a preliminary step towards high-precision visual positioning of a robot. Combining the characteristics of the genetic algorithm (GA) and particle swarm optimization (PSO), a three-stage calibration method based on hybrid intelligent optimization is proposed for nonlinear camera models in this paper, with the aim of improving the accuracy of the calibration process. In this approach, stereo vision calibration is treated as an optimization problem that can be solved by the GA and PSO. Initial linear values are obtained in the first stage; in the second stage, the parameters of the two cameras are optimized separately; finally, an integrated optimized calibration of the two models is obtained in the third stage. Direct linear transformation (DLT), the GA and PSO are used in the three stages respectively. It is shown that each stage finds a near-optimal solution that can be used to initialize the next stage. Simulation analysis and actual experimental results indicate that this calibration method is more accurate and robust in noisy environments than traditional calibration methods, and that it can fulfill the requirements of sophisticated robot visual operations.
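Of the three stages, the PSO refinement is the easiest to sketch in isolation. Below is a minimal, generic particle swarm optimizer (the hyperparameters w, c1, c2 are conventional defaults, not the paper's settings), exercised on a toy quadratic instead of a real reprojection objective:

```python
import numpy as np

def pso(f, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for f: R^d -> R over a box [lo, hi]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = lo.size
    x = rng.uniform(lo, hi, (n, d))
    v = np.zeros((n, d))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, d))
        # inertia + pull toward each particle's best + pull toward the swarm's best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, best_val = pso(lambda p: ((p - [1.0, 2.0, 3.0]) ** 2).sum(),
                     lo=[-5.0] * 3, hi=[5.0] * 3)
```

In the paper's setting f would be a reprojection-error objective over the nonlinear camera parameters, seeded by the DLT and GA stages.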
The Chang'e-3 panoramic camera, which is composed of two cameras with identical functions, performance and interfaces, is installed on the lunar rover mast. It can acquire 3D images of the lunar surface based on the principle of binocular stereo vision. By rotating and pitching the mast, it can take several photographs of the patrol area; after stitching these images, panoramic images of the scene are obtained. The topography and geomorphology of the patrol area and its impact craters, as well as the geological structure of the lunar surface, can thus be analyzed and studied. In addition, the camera can take color photographs of the lander using the Bayer color-coding principle, and it can observe the working status of the lander by switching between a static image mode and a dynamic video mode with automatic exposure time. The focal length of the lens is 50 mm and the field of view is 19.7° × 14.5°. Under the best illumination and viewing conditions, the largest signal-to-noise ratio of the panoramic camera is 44 dB, and its static modulation transfer function is 0.33. A large number of ground testing experiments and on-orbit imaging results show that the functional interfaces of the panoramic camera work normally, the image quality is satisfactory, and all performance parameters satisfy the design requirements.
This paper proposes a novel self-calibration method for a large-FoV (field-of-view) camera using real star images. First, based on the classic equisolid-angle projection model and a polynomial distortion model, the inclination of the optical axis with respect to the image plane is thoroughly considered, and a rigorous imaging model including 8 unknown intrinsic parameters is built. Second, the basic calibration equation based on star vector observations is presented. Third, the partial derivatives of all 11 camera parameters needed to linearize the calibration equation are deduced in detail, and an iterative least-squares solution is given. Furthermore, a simulation experiment is designed, the results of which show that the new model outperforms the old one. Finally, three experiments were conducted at night in central China and 671 valid star images were collected. The results indicate that the new method obtains a mean reprojection error of 0.251 pixels at a 120° FoV, improving the calibration accuracy by 38.6% compared with the old calibration model, which does not consider the inclination of the optical axis. When the FoV drops below 20°, the mean reprojection error decreases to 0.15 pixels for both the new model and the old model. Since stars are used instead of manual control points, the new method achieves self-calibration, which may be significant for the long-duration navigation of vehicles in unfamiliar or extreme environments, such as Mars or the Moon.
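The iterative least-squares solution mentioned above is, in spirit, a Gauss-Newton iteration on the linearized calibration equations. Below is a generic sketch with a forward-difference Jacobian, fitted to a toy exponential model rather than the paper's star-observation residuals (model and numbers invented):

```python
import numpy as np

def gauss_newton(residuals, p0, iters=20, eps=1e-7):
    """Nonlinear least squares via Gauss-Newton: the Jacobian is taken by
    forward differences and each iteration solves the linearized system."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = residuals(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residuals(p + dp) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)   # J step = -r
        p = p + step
    return p

# Toy problem: recover (a, b) of y = a * exp(b * t) from exact samples.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.3 * t)
p_hat = gauss_newton(lambda p: p[0] * np.exp(p[1] * t) - y, [1.0, 0.0])
```

In the paper's setting the residual vector would stack the star reprojection errors, and the analytic partial derivatives it derives would replace the numerical Jacobian for speed and stability.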
With the completion of the Chinese space station, an increasing number of extravehicular activities will be executed by astronauts, and these are regarded as among the most dangerous activities in human space exploration. To guarantee the safety of astronauts and the successful accomplishment of missions, it is vital to determine the pose of astronauts during extravehicular activities. This article presents a monocular vision-based pose estimation method for astronauts during extravehicular activities that makes full use of the available observation resources. First, the camera is calibrated using objects of known structure, such as the spacesuit backpack or the circular handrail outside the space station. Subsequently, pose estimation is performed using the feature points on the spacesuit. The proposed methods are validated in both synthetic and semi-physical simulation experiments, demonstrating the high precision of the camera calibration and pose estimation. To further evaluate the performance of the methods in real-world scenarios, we use image sequences of the Shenzhou-13 astronauts during extravehicular activities. The experiments validate that camera calibration and pose estimation can be accomplished solely with the existing observation resources, without additional complicated equipment. The recovered motion parameters of astronauts lay the technological foundation for subsequent applications such as mechanical analysis, task planning, and the ground training of astronauts.
Local invariant algorithms applied to downward-looking image registration usually compute the camera's pose relative to visual landmarks. In general, such approaches face three requirements during image registration: first, the algorithm should be insensitive to illumination; second, it should have low computational complexity; third, the depth information of the images needs to be estimated without other sensors. This paper investigates the well-known local invariant feature SURF (speeded-up robust features) and proposes a high-speed, robust image registration and localization algorithm based on it. Supported by feature tracking and pose estimation methods, the proposed algorithm can compute camera poses under different conditions of scale, viewpoint and rotation, and thus precisely localize an object's position. Registration experiments are performed with SIFT (scale-invariant feature transform), SURF and the proposed algorithm, together with a method for evaluating their performance, and an object retrieval test is carried out on remote sensing video. Because remote sensing frames undergo large deformations, the registration algorithm incorporates the Kanade-Lucas-Tomasi (KLT) 3-D coplanar calibration feature tracker, which localizes targets of interest precisely and efficiently. The experimental results show that the proposed method has a higher localization speed and a lower localization error rate than traditional visual simultaneous localization and mapping (vSLAM) over a period of time.
Funding: supported by the Aerospace Science and Technology Joint Fund (6141B061505) and the National Natural Science Foundation of China (61473100).
Funding: National Natural Science Foundation of China (41801379).
Funding: supported by the National High Technology Research and Development Program of China (863 Program) (No. 2011AA110301), the National Natural Science Foundation of China (No. 61079001), and the Ph.D. Programs Foundation of the Ministry of Education of China (No. 20111103110017).
Abstract: A novel algorithm for vehicle average velocity detection through automatic and dynamic camera calibration based on the dark channel in homogeneous fog weather conditions is presented in this paper. A camera fixed in the middle of the road is calibrated in a homogeneous fog weather condition and can then be used in any weather condition. Unlike other research on velocity calculation, our traffic model includes only the road plane and vehicles in motion. Painted lines in the scene image are neglected because sometimes there are no traffic lanes, especially in unstructured traffic scenes. Once the camera is calibrated, the scene distance is obtained and can be used to calculate the average velocity of vehicles. Three major steps are included in our algorithm. Firstly, the current video frame is analyzed to discriminate the current weather condition based on an area search method (ASM): if the weather is homogeneous fog, the average pixel value from top to bottom in the selected area changes in the form of an edge spread function (ESF). Secondly, the traffic road surface plane is found by generating an activity map, created by calculating the expected value of the absolute intensity difference between two adjacent frames. Finally, the scene transmission image is obtained by the dark channel prior, and the camera's intrinsic and extrinsic parameters are calculated from the calibration formula deduced from the monocular model and the scene transmission image. In this step, several key points on the road surface with particular transmission values are selected to generate the necessary calibration equations. Vehicle pixel coordinates are transformed to camera coordinates, the distance between each vehicle and the camera is calculated, and the average velocity of each vehicle is then obtained. At the end of this paper, calibration results and velocity data for nine vehicles in different weather conditions are given. Comparison with other algorithms verifies the effectiveness of our algorithm.
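The transmission-image step can be sketched compactly. The generic dark channel prior takes, at each pixel, the minimum over the colour channels and then over a local patch; the transmission map follows as 1 − ω·dark(I/A). This is only the standard dark-channel computation with illustrative parameter values, not the paper's full calibration pipeline:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, then a spatial minimum over a local patch."""
    m = img.min(axis=2)
    h, w = m.shape
    r = patch // 2
    out = np.empty_like(m)
    for i in range(h):
        for j in range(w):
            out[i, j] = m[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].min()
    return out

def transmission(img, airlight, omega=0.95, patch=15):
    """Estimated transmission map from the dark channel prior."""
    return 1.0 - omega * dark_channel(img / airlight, patch)

# A haze-free image has a near-zero channel in every patch,
# so its estimated transmission is close to 1 everywhere.
img = np.zeros((32, 32, 3))
img[..., 0] = 0.8          # red channel bright
img[..., 1] = 0.5          # green channel moderate; blue stays 0
A = np.array([1.0, 1.0, 1.0])
t = transmission(img, A)
```

In the paper's setting the interesting case is the opposite one: fog raises the dark channel, and the resulting transmission values at selected road-surface points feed the calibration equations.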
Abstract: The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as the calibration reference. In this paper, we present a method of camera calibration in which camera parameters are determined by a set of 3D lines. A set of constraints on the camera parameters is derived in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras for camera location determination, in which at least 8 line correspondences are required for linear computation of the camera location. Since line segments in an image can be located more easily and more accurately than points, the use of lines as the calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration, along with stereo reconstruction, are reported.
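The linear computation from line correspondences can be illustrated as follows. For a 3D line through points X1, X2 projecting to image line l, incidence gives l·(P Xi) = 0, i.e. two linear equations in the entries of the 3×4 projection matrix P per line; six generic lines give twelve equations, and P is recovered up to scale from the null space via SVD. A sketch on synthetic data (all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# Ground-truth 3x4 projection matrix (intrinsics times [R | t]).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.1], [0.2], [5.0]])])
P_true = K @ Rt

rows = []
for _ in range(6):                               # six generic 3D calibration lines
    X1 = np.append(rng.uniform(-1, 1, 3), 1.0)   # homogeneous endpoints
    X2 = np.append(rng.uniform(-1, 1, 3), 1.0)
    l = np.cross(P_true @ X1, P_true @ X2)       # projected image line
    rows.append(np.kron(l, X1))                  # encodes l . (P X1) = 0
    rows.append(np.kron(l, X2))                  # encodes l . (P X2) = 0
A = np.array(rows)

# The null vector of A is vec(P) up to scale.
_, _, Vt = np.linalg.svd(A)
P_est = Vt[-1].reshape(3, 4)
P_est *= np.linalg.norm(P_true) / np.linalg.norm(P_est)
if P_est[2, 3] * P_true[2, 3] < 0:               # resolve the overall sign
    P_est = -P_est
```

Note the row construction: l·(P X) = Σᵢⱼ lᵢ Pᵢⱼ Xⱼ, so `np.kron(l, X)` matches the row-major flattening of P.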
Funding: supported by the National Natural Science Foundation of China (61379097, 61401463, 61100098) and the Youth Innovation Promotion Association, CAS.
Abstract: A novel and effective self-calibration approach for robot vision is presented, which can effectively estimate both the camera intrinsic parameters and the hand-eye transformation at the same time. The proposed calibration procedure is based on two arbitrary feature points of the environment, and requires three pure translational motions and two rotational motions of the robot end-effector. New linear solution equations are deduced, and the calibration parameters are finally solved accurately and effectively. The proposed algorithm has been verified with simulated data under different levels of noise and disturbance. Because it needs fewer feature points and robot motions, the proposed method greatly improves the efficiency and practicality of the calibration procedure.
Funding: This work is supported by the National Science Foundation of China (Nos. 61505093, 61505190) and the National Key Research and Development Plan (2016YFC0103600).
Abstract: The focal plane of a collimator used for the geometric calibration of an optical camera is a key element in the calibration process. The traditional focal plane of a collimator has only a single-aperture light lead-in, resulting in relatively unreliable calibration accuracy. Here we demonstrate a multi-aperture micro-electro-mechanical system (MEMS) light lead-in device located at the optical focal plane of the collimator used to calibrate geometric distortion in cameras. Without additional volume or power consumption, the random errors of this calibration system are decreased by the multi-image matrix. With this new construction and a method for implementing the system, reliable high-accuracy calibration of optical cameras is guaranteed.
Funding: This work was supported under the auspices of the National Key Project for Basic Research on Urban Traffic Monitoring and Management System, PRA SI01-01, G1998030408.
Abstract: A universal approach to camera calibration based on the features of representative lines on the traffic ground is presented. It uses only a set of three parallel edges with known intervals and one of their intersecting lines with known slope to obtain the focal length and orientation parameters of a camera. A set of equations that computes the related camera parameters has been derived from the geometric properties of the calibration pattern. With an exact analytical implementation, the precision of the approach is determined only by the accuracy of calibration target selection. Experimental results have shown its validity on a snapshot from real automatic visual traffic surveillance (AVTS) scenes.
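To give a flavour of how line features on a ground plane constrain the camera, here is a related classical result rather than the paper's exact derivation: with a known principal point c, the vanishing points u and v of two orthogonal scene directions satisfy (u − c)·(v − c) + f² = 0, which yields the focal length directly. The sketch verifies this on synthetic data (all names and values are illustrative):

```python
import numpy as np

def vanishing_point(K, R, d):
    """Image of the point at infinity in scene direction d, for camera K [R | t]."""
    v = K @ R @ d
    return v[:2] / v[2]

f_true, cx, cy = 700.0, 320.0, 240.0
K = np.array([[f_true, 0.0, cx], [0.0, f_true, cy], [0.0, 0.0, 1.0]])
a = 0.5                                   # tilt the camera about its y axis
R = np.array([[ np.cos(a), 0.0, np.sin(a)],
              [ 0.0,       1.0, 0.0      ],
              [-np.sin(a), 0.0, np.cos(a)]])

u = vanishing_point(K, R, np.array([1.0, 0.0, 0.0]))   # scene x direction
v = vanishing_point(K, R, np.array([0.0, 0.0, 1.0]))   # orthogonal z direction
c = np.array([cx, cy])
f_est = np.sqrt(-np.dot(u - c, v - c))    # focal length from the orthogonality constraint
```

The parallel edges with known intervals in the paper play the same role: they pin down a vanishing point plus a metric scale on the ground plane.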
Abstract: A camera calibration algorithm based on a self-made target is proposed in this paper, which avoids the difficulty of manufacturing a high-precision 3D target. The self-made target consists of two intersecting chessboards. With the classic scale method, the 3D coordinates of selected points on the target are derived from the distance matrix, whose elements are the distances between every pair of points, obtained by measurement. The spatial location precision of the points on the target is thus ensured by measurement instead of manufacturing, which greatly reduces the production cost and the requirements on production accuracy. Camera calibration is then completed using a 3D-target-based method. The approach can be further extended to applications where a precise target cannot be produced. The experimental results show the validity of this method.
Abstract: It is well known that the accuracy of camera calibration is constrained by the size of the reference plate, and it is difficult to fabricate large reference plates with high precision. Therefore, it is non-trivial to calibrate a camera with a large field of view (FOV). In this paper, a method is proposed to construct a virtual large reference plate with high precision. Firstly, a high-precision datum plane is constructed with a laser interferometer and a one-dimensional air guideway, and then the reference plate is positioned at different locations and orientations in the FOV of the camera. The feature points of the reference plate are projected onto the datum plane to obtain a virtual large reference plate with high precision. The camera is moved to several positions to obtain different virtual reference plates, and the camera is calibrated with these virtual reference plates. The experimental results show that the mean re-projection error of the camera calibrated with the proposed method is 0.062 pixels. The length of a scale bar with a standard length of 959.778 mm was measured with a vision system composed of two calibrated cameras, and the length measurement error was 0.389 mm.
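Mean re-projection error, the figure of merit quoted above, is computed by projecting the known 3D feature points through the calibrated pinhole model and averaging the pixel distances to the detected image points. A minimal sketch on synthetic data (all names and values are illustrative):

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of Nx3 world points to Nx2 pixel coordinates."""
    x = (K @ (R @ X.T + t[:, None])).T
    return x[:, :2] / x[:, 2:3]

def mean_reprojection_error(K, R, t, X, observed):
    return float(np.mean(np.linalg.norm(project(K, R, t, X) - observed, axis=1)))

K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 480.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (20, 3)) + np.array([0.0, 0.0, 3.0])  # points in front of the camera

observed = project(K, R, t, X)                           # perfect detections
err_exact = mean_reprojection_error(K, R, t, X, observed)
noisy = observed + rng.normal(0.0, 0.1, observed.shape)  # 0.1 px detection noise
err_noisy = mean_reprojection_error(K, R, t, X, noisy)
```

With perfect detections the error is zero; with 0.1-pixel Gaussian noise it settles near 0.12 pixels, the same order as the 0.062-pixel figure reported above.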
Abstract: The process of development and calibration of the first Moon-based extreme ultraviolet (EUV) camera to observe Earth's plasmasphere is introduced, and the design, test and calibration results are presented. The EUV camera is composed of a multilayer-film mirror, a thin-film filter, a photon-counting imaging detector, a mechanism that can adjust the direction in two dimensions, a protective cover, an electronic unit and a thermal control unit. The center wavelength of the EUV camera is 30.2 nm with a bandwidth of 4.6 nm. The field of view is 14.7° with an angular resolution of 0.08°, and the sensitivity of the camera is 0.11 count s^-1 Rayleigh^-1. The geometric calibration, the absolute photometric calibration and the relative photometric calibration were carried out at different temperatures before launch to obtain a matrix that corrects geometric distortion and a matrix for relative photometric correction, which are used for in-orbit correction of the images to ensure their accuracy.
Abstract: Accurate stereo vision calibration is a preliminary step towards high-precision visual positioning of a robot. Combining the characteristics of the genetic algorithm (GA) and particle swarm optimization (PSO), a three-stage calibration method based on hybrid intelligent optimization is proposed for nonlinear camera models in this paper. The motivation is to improve the accuracy of the calibration process. In this approach, stereo vision calibration is considered as an optimization problem that can be solved by the GA and PSO. The initial linear values are obtained in the first stage; in the second stage, the parameters of the two cameras are optimized separately; finally, the integrated optimized calibration of the two models is obtained in the third stage. Direct linear transformation (DLT), GA and PSO are used in the three stages respectively. It is shown that every stage correctly finds a near-optimal solution, which is used to initialize the next stage. Simulation analysis and actual experimental results indicate that this calibration method is more accurate and robust in noisy environments than traditional calibration methods. The proposed method can fulfill the requirements of sophisticated visual operation of robots.
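The role of PSO in the later stages can be sketched on a toy problem: refining a single camera parameter (here a focal length with an assumed true value of 800) by minimizing a reprojection-style cost over a particle swarm. This is a generic textbook PSO, not the paper's hybrid GA/PSO scheme, and all names and values are illustrative:

```python
import numpy as np

def pso(cost, lo, hi, n=30, iters=120, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.atleast_1d(lo), np.atleast_1d(hi)
    x = rng.uniform(lo, hi, (n, lo.size))           # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, float(pcost.min())

# Toy calibration cost: squared reprojection residual as a function of focal length.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (15, 3)) + np.array([0.0, 0.0, 5.0])  # points in front of the camera
f_true = 800.0
obs = f_true * X[:, :2] / X[:, 2:3]      # observed pixels (principal point at the origin)

def cost(p):
    proj = p[0] * X[:, :2] / X[:, 2:3]
    return float(np.sum((proj - obs) ** 2))

best, best_cost = pso(cost, 400.0, 1200.0)
```

In the paper's pipeline, the equivalent of `best` from one stage seeds the initial population of the next, which is why each stage only needs to find a near-optimal solution.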
Abstract: The Chang'e-3 panoramic camera, which is composed of two cameras with identical functions, performance and interfaces, is installed on the lunar rover mast. It can acquire 3D images of the lunar surface based on the principle of binocular stereo vision. By rotating and pitching the mast, it can take several photographs of the patrol area; after stitching these images, panoramic images of the scenes are obtained. Thus the topography and geomorphology of the patrol area and the impact crater, as well as the geological structure of the lunar surface, can be analyzed and studied. In addition, it can take color photographs of the lander using the Bayer color coding principle, and can observe the working status of the lander by switching between static image mode and dynamic video mode with automatic exposure time. The focal length of the lens on the panoramic camera is 50 mm and the field of view is 19.7°×14.5°. Under the best illumination and viewing conditions, the largest signal-to-noise ratio of the panoramic camera is 44 dB. Its static modulation transfer function is 0.33. A large number of ground testing experiments and on-orbit imaging results show that the functional interface of the panoramic camera works normally, the image quality is satisfactory, and all the performance parameters satisfy the design requirements.
Funding: co-supported by the National Natural Science Foundation of China (Nos. 42074013 and 41704006).
Abstract: This paper proposes a novel self-calibration method for a large-FoV (field-of-view) camera using real star images. First, based on the classic equisolid-angle projection model and a polynomial distortion model, the inclination of the optical axis with respect to the image plane is thoroughly considered, and a rigorous imaging model including 8 unknown intrinsic parameters is built. Second, the basic calibration equation based on star vector observations is presented. Third, the partial derivative expressions of all 11 camera parameters for linearizing the calibration equation are deduced in detail, and an iterative solution using the least squares method is given. Furthermore, a simulation experiment is designed, the results of which show that the new model performs better than the old model. Finally, three experiments were conducted at night in central China and 671 valid star images were collected. The results indicate that the new method achieves a mean reprojection error of 0.251 pixels at a 120° FoV, which improves the calibration accuracy by 38.6% compared with the old calibration model (which does not consider the inclination of the optical axis). When the FoV drops below 20°, the mean reprojection error decreases to 0.15 pixels for both the new model and the old model. Since stars are used instead of manual control points, the new method realizes self-calibration, which may be significant for the long-duration navigation of vehicles in unfamiliar or extreme environments, such as those of Mars or Earth's Moon.
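The equisolid-angle projection model mentioned above maps a star direction at angle θ off the optical axis to radial image distance r = 2f·sin(θ/2). A minimal sketch of the ideal forward model only, without the paper's distortion polynomial or optical-axis inclination terms (parameter values are illustrative):

```python
import numpy as np

def equisolid_project(d, f, cx, cy):
    """Project a unit direction d (camera frame, +z along the optical axis) to pixels."""
    theta = np.arccos(d[2])             # angle off the optical axis
    r = 2.0 * f * np.sin(theta / 2.0)   # equisolid-angle radial law
    phi = np.arctan2(d[1], d[0])        # azimuth in the image plane
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

f, cx, cy = 300.0, 512.0, 512.0
# An on-axis star lands exactly at the principal point.
x0, y0 = equisolid_project(np.array([0.0, 0.0, 1.0]), f, cx, cy)
# A star 60 degrees off-axis lands at radius 2 f sin(30 deg) = f.
t = np.deg2rad(60.0)
x1, y1 = equisolid_project(np.array([np.sin(t), 0.0, np.cos(t)]), f, cx, cy)
```

The calibration in the paper inverts this relationship: measured star centroids and catalogue directions constrain f, the principal point, the distortion coefficients, and the axis inclination through the linearized least squares iteration.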
Funding: supported by the Hunan Provincial Natural Science Foundation for Excellent Young Scholars (Grant No. 2023JJ20045), the Science Foundation (Grant No. KY0505072204), the Foundation of the National Key Laboratory of Human Factors Engineering (Grant Nos. GJSD22006, 6142222210401), and the Foundation of the China Astronaut Research and Training Center (Grant No. 2022SY54B0605).
Abstract: With the completion of the Chinese space station, an increasing number of extravehicular activities will be executed by astronauts; extravehicular activity is regarded as one of the most dangerous activities in human space exploration. To guarantee the safety of astronauts and the successful accomplishment of missions, it is vital to determine the pose of astronauts during extravehicular activities. This article presents a monocular vision-based pose estimation method for astronauts during extravehicular activities, making full use of the available observation resources. First, the camera is calibrated using objects of known structure, such as the spacesuit backpack or the circular handrail outside the space station. Subsequently, pose estimation is performed utilizing the feature points on the spacesuit. The proposed methods are validated in both synthetic and semi-physical simulation experiments, demonstrating the high precision of the camera calibration and pose estimation. To further evaluate the performance of the methods in real-world scenarios, we utilize image sequences of the Shenzhou-13 astronauts during extravehicular activities. The experiments validate that camera calibration and pose estimation can be accomplished solely with the existing observation resources, without requiring additional complicated equipment. The motion parameters of astronauts lay the technological foundation for subsequent applications such as mechanical analysis, task planning, and ground training of astronauts.
Funding: supported by the National Natural Science Foundation of China (60802043) and the National Basic Research Program of China (973 Program) (2010CB327900).
Abstract: Local invariant algorithms applied to downward-looking image registration usually compute the camera's pose relative to visual landmarks. Generally, there are three requirements in the process of image registration when using these approaches. First, the algorithm should be robust to illumination changes. Second, the algorithm should have low computational complexity. Third, the depth information of images needs to be estimated without other sensors. This paper investigates a well-known local invariant feature, the speeded-up robust feature (SURF), and proposes a high-speed and robust image registration and localization algorithm based on it. With support from feature tracking and pose estimation methods, the proposed algorithm can compute camera poses under different conditions of scale, viewpoint and rotation so as to precisely localize an object's position. Registration experiments are conducted with the scale-invariant feature transform (SIFT), SURF and the proposed algorithm, and a method is designed to evaluate their performance. Furthermore, an object retrieval test is performed on remote sensing video. Because there is large deformation across remote sensing frames, the registration algorithm incorporates the Kanade-Lucas-Tomasi (KLT) 3D coplanar calibration feature tracker, which can localize interesting targets precisely and efficiently. The experimental results prove that the proposed method has a higher localization speed and a lower localization error rate than traditional visual simultaneous localization and mapping (vSLAM) over a period of time.