To address the eccentric error of circular marks in camera calibration, a circle location method based on the invariance of collinear points and the pole–polar constraint is proposed in this paper. Firstly, the centers of the ellipses are extracted, and the projection equation of the true concentric circle center is established by exploiting the cross-ratio invariance of collinear points. Subsequently, since the infinite lines passing through the centers of the marks are parallel, the projections of the remaining centers are obtained by solving a system of linear equations. The projection deviation caused by using the ellipse center as the true circle-center projection is thus eliminated, and the corrected points are used as the true image points to achieve high-precision camera calibration. Simulations and practical experiments demonstrate that the proposed method achieves better location and calibration performance by recovering the actual center projections of the circular marks, confirming its precision and robustness.
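As a rough illustration of the cross-ratio invariance the method relies on, the numpy sketch below checks that the cross ratio of four collinear points is preserved under an arbitrary projective transform; the homography and point values are made up for the example, and this is not the authors' implementation.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (AC/BC)/(AD/BD) of four collinear 2D points."""
    dist = lambda p, q: np.linalg.norm(np.asarray(p) - np.asarray(q))
    return (dist(a, c) / dist(b, c)) / (dist(a, d) / dist(b, d))

# Four collinear points on the calibration plane (world units).
world = [(0.0, 0.0), (1.0, 0.0), (2.5, 0.0), (4.0, 0.0)]

# An arbitrary projective transform standing in for the camera mapping.
H = np.array([[1.2, 0.1, 30.0],
              [0.05, 0.9, 40.0],
              [1e-3, 2e-3, 1.0]])

def project(p):
    v = H @ np.array([p[0], p[1], 1.0])
    return v[:2] / v[2]

image = [project(p) for p in world]

# The cross ratio is (numerically) identical before and after projection.
print(cross_ratio(*world), cross_ratio(*image))
```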
It is well known that the accuracy of camera calibration is constrained by the size of the reference plate, and it is difficult to fabricate large reference plates with high precision. Therefore, it is non-trivial to calibrate a camera with a large field of view (FOV). In this paper, a method is proposed to construct a virtual large reference plate with high precision. Firstly, a high-precision datum plane is constructed with a laser interferometer and a one-dimensional air guideway, and the reference plate is then positioned at different locations and orientations in the FOV of the camera. The feature points of the reference plate are projected onto the datum plane to obtain a virtual large reference plate with high precision. The camera is moved to several positions to obtain different virtual reference plates, and the camera is calibrated with these virtual reference plates. The experimental results show that the mean re-projection error of the camera calibrated with the proposed method is 0.062 pixels. The length of a scale bar with a standard length of 959.778 mm was measured with a vision system composed of two calibrated cameras, and the length measurement error is 0.389 mm.
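The core geometric step, back-projecting an observed feature point and intersecting its viewing ray with the datum plane, can be sketched as follows; the intrinsic matrix and plane parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed intrinsics and datum plane (illustrative values only).
K = np.array([[1500.0, 0.0, 640.0],
              [0.0, 1500.0, 512.0],
              [0.0, 0.0, 1.0]])
plane_n = np.array([0.0, 0.0, 1.0])   # datum-plane normal in the camera frame
plane_d = 800.0                        # plane equation n . X = d  (mm)

def pixel_to_datum_plane(u, v):
    """Back-project a pixel and intersect its viewing ray with the datum plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction from the camera center
    t = plane_d / (plane_n @ ray)                    # ray parameter at the intersection
    return t * ray                                    # 3D point on the virtual plate

print(pixel_to_datum_plane(700.0, 520.0))
```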
A new versatile camera calibration technique for machine vision using off-the-shelf cameras is described. Aimed at the large distortion of off-the-shelf cameras, a new camera distortion rectification technique based on line rectification is proposed. A full-camera-distortion model is introduced, and a linear algorithm is provided to obtain the solution. After camera rectification, the intrinsic and extrinsic parameters are obtained based on the relationship between the homography and the absolute conic. This technique needs neither a high-accuracy three-dimensional calibration block nor a complicated translation or rotation platform. Both simulations and experiments show that this method is effective and robust.
Camera calibration is a critical process in photogrammetry and a necessary step to acquire 3D information from a 2D image. In this paper, a flexible approach for CCD camera calibration using the 2D direct linear transformation (DLT) and bundle adjustment is proposed. The proposed approach assumes that the camera interior orientation elements are known, and presents a new closed-form solution in planar object space based on homogeneous coordinate representation and matrix factorization. The homogeneous coordinate representation offers a direct matrix correspondence between the parameters of the 2D DLT and the collinearity equations. The matrix factorization starts by recovering the elements of the rotation matrix and then solves for the camera position with the collinearity equations. High-precision camera calibration is then achieved by bundle adjustment using the initial values of the camera orientation elements. The results show that the calibration precision of the principal point and focal length is about 0.2 and 0.3 pixels respectively, which meets the requirements of high-accuracy close-range photogrammetry.
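The 2D DLT between a planar object and its image can be illustrated by the standard SVD-based homography estimation below; the synthetic correspondences are made up, and the paper's specific closed-form factorization into rotation and position is not reproduced.

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (planar 2D DLT) via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)          # null-space vector gives the homography entries
    return H / H[2, 2]

# Synthetic planar correspondences generated from a known homography.
H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
dst = np.array([(H_true @ [x, y, 1])[:2] / (H_true @ [x, y, 1])[2] for x, y in src])
print(np.round(dlt_homography(src, dst), 4))
```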
Camera calibration is critical in computer vision measurement systems, as it affects the accuracy of the whole system. Many camera calibration methods have been proposed, but they cannot balance precision and operational complexity at the same time. In this paper, a new camera calibration technique is proposed. Firstly, the global calibration method is described in detail. It requires the camera to observe a checkerboard pattern shown at a few different orientations. The checkerboard corners are obtained by the Harris algorithm. With direct linear transformation and non-linear optimization, the global calibration parameters are obtained. Then, a sub-regional method is proposed: the corners are divided into two groups, middle corners and edge corners, which are used to calibrate the corresponding image areas and obtain two sets of calibration parameters. Finally, experimental images are used to test the proposed method. Experimental results demonstrate that the average projection error of the sub-regional method is reduced by at least 16% compared with the global calibration method. The proposed technique is simple and accurate, and is suitable for industrial computer vision measurement.
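A minimal OpenCV sketch of the global stage is given below: checkerboard corners observed at several orientations are detected and fed to a standard calibration routine. It uses cv2.findChessboardCorners rather than a raw Harris detector, and the board size and file names are assumptions.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                      # inner corners per row/column (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # planar grid, Z = 0

obj_pts, img_pts, size = [], [], None
for path in glob.glob("checkerboard_*.png"):      # assumed image names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                               (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_pts.append(objp)
    img_pts.append(corners)
    size = gray.shape[::-1]

# Linear initialization plus non-linear refinement happen inside calibrateCamera.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("re-projection RMS (pixels):", rms)
```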
RGB-D cameras are a new type of sensor that can obtain depth and texture information in an unknown 3D scene simultaneously, and they have been widely applied in various fields. In practice, such applications require the RGB-D camera to be calibrated first. To the best of our knowledge, there is at present no systematic summary of RGB-D camera calibration methods. Therefore, a systematic review of RGB-D camera calibration is presented as follows. Firstly, the measurement mechanism and the principles underlying RGB-D camera calibration methods are presented. Subsequently, as some applications need to fuse depth and color information, the calibration methods for the relative pose between the depth camera and the RGB camera are introduced in Section 2. The depth correction models of RGB-D cameras are then summarized and compared in Section 3. Thirdly, considering that the field of view of an RGB-D camera is small and limits some applications, the calibration models for the relative pose among multiple RGB-D cameras are discussed in Section 4. Finally, the direction and trend of RGB-D camera calibration are discussed and concluded.
The development and calibration process of the first Moon-based extreme ultraviolet (EUV) camera for observing Earth's plasmasphere is introduced, and the design, test and calibration results are presented. The EUV camera is composed of a multilayer film mirror, a thin film filter, a photon-counting imaging detector, a mechanism that can adjust the pointing direction in two dimensions, a protective cover, an electronic unit and a thermal control unit. The center wavelength of the EUV camera is 30.2 nm with a bandwidth of 4.6 nm. The field of view is 14.7° with an angular resolution of 0.08°, and the sensitivity of the camera is 0.11 count s⁻¹ Rayleigh⁻¹. The geometric calibration, the absolute photometric calibration and the relative photometric calibration are carried out at different temperatures before launch to obtain a matrix that corrects geometric distortion and a matrix for relative photometric correction, which are used for in-orbit correction of the images to ensure their accuracy.
Instead of traditionally using a 3D physical model with many control points on it, a calibration plate with a printed chess grid, movable along its normal direction, is used to provide large-area 3D control points with variable Z values. Experiments show that the presented approach is effective for reconstructing 3D color objects in a computer vision system.
This paper focuses on the problem of calibrating a pinhole camera from images of a profile of revolution. The symmetry of images of profiles of revolution is extensively exploited, and a practical and accurate technique for camera calibration from profiles alone is developed. Traditional techniques for camera calibration may involve taking images of a precisely machined calibration pattern (such as a calibration grid), detecting edges to determine vanishing points which are often far from the image center or do not even physically exist, or computing the fundamental matrix and Kruppa equations, which can be numerically unstable. In contrast, the method presented here uses only profiles of revolution, which are commonly found in daily life (e.g. bowls and vases), making the process easier through the reduced cost and increased accessibility of the calibration objects. The paper first analyzes the relationship between the symmetry property of a profile of revolution and the intrinsic parameters of a camera, and then shows how images of profiles of revolution provide enough information to determine the intrinsic parameters. A high-accuracy profile extraction algorithm is also used. Finally, results from real data are presented, demonstrating the efficiency and accuracy of the proposed methods.
The multi-objective genetic algorithm (MOGA) is proposed to calibrate the non-linear camera model of a space manipulator to improve its locational accuracy. This algorithm optimizes the camera model by dynamically balancing its model weights and multi-parameter distributions to reach the required accuracy. A novel measuring instrument for the space manipulator is designed for orbital motion simulation and locational accuracy tests. The camera system of the space manipulator, calibrated by the MOGA algorithm, is used for locational accuracy tests on this measuring instrument. The experimental results show that the absolute errors are [0.07, 1.75] mm for the MOGA-calibrated model, [2.88, 5.95] mm for the MN method, and [1.19, 4.83] mm for the LM method. Moreover, the composite errors of both the LM method and the MN method are approximately seven times higher than that of the MOGA-calibrated model. This suggests that the MOGA-calibrated model is superior to both the LM method and the MN method.
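As a simplified stand-in for the MOGA, the sketch below calibrates a toy pinhole-plus-radial-distortion model by evolutionary search on the mean reprojection error, using SciPy's differential evolution; the model, data and bounds are synthetic and only illustrate the optimization framing, not the paper's multi-objective formulation.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic ground truth: focal length, principal point, one radial term (assumed model).
true = np.array([1200.0, 640.0, 480.0, -0.15])
pts3d = np.random.default_rng(0).uniform([-1, -1, 4], [1, 1, 6], (40, 3))

def project(params, X):
    f, cx, cy, k1 = params
    x, y = X[:, 0] / X[:, 2], X[:, 1] / X[:, 2]
    r2 = x**2 + y**2
    x, y = x * (1 + k1 * r2), y * (1 + k1 * r2)        # single radial distortion term
    return np.stack([f * x + cx, f * y + cy], axis=1)

observed = project(true, pts3d) + np.random.default_rng(1).normal(0, 0.3, (40, 2))

def reprojection_error(params):
    return np.mean(np.linalg.norm(project(params, pts3d) - observed, axis=1))

bounds = [(800, 1600), (500, 800), (300, 600), (-0.5, 0.5)]
result = differential_evolution(reprojection_error, bounds, seed=2, tol=1e-8)
print("estimated parameters:", result.x, "mean error:", result.fun)
```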
A flexible camera calibration technique using 2D-DLT and bundle adjustment with planar scenes is proposed. The equation of the principal line in the image coordinate system, expressed with 2D-DLT parameters, is derived using the correspondence between the collinearity equations and the 2D-DLT. A novel algorithm to obtain the initial value of the principal point is put forward. A proof of the critical motion sequences for calibration is given in detail. A practical algorithm for decomposing the exterior parameters from the initial values of the principal point, the focal length and the 2D-DLT parameters is discussed in detail. A planar-scene camera calibration algorithm with bundle adjustment is presented. Very good results have been obtained with both computer simulations and real-data calibration. The calibration results can be used in high-precision applications such as reverse engineering and industrial inspection.
The ability to model the imaging process is crucial to vision measurement. The non-parametric imaging model describes the imaging process as a pixel cluster, in which each pixel is related to a spatial ray originating from an object point. However, a non-parametric model requires a sophisticated calculation process or high-cost devices to obtain a massive number of parameters. These disadvantages limit the application of such camera models. Therefore, we propose a novel camera model calibration method based on a single-axis rotational target. The rotational vision target offers 3D control points without requiring detailed information on the poses of the rotational target. A radial basis function (RBF) network is introduced to map 3D coordinates to 2D image coordinates. We then derive the optimization formulation of the imaging model parameters and compute the parameters from the given control points. The model is extended to the stereo camera that is widely used in vision measurement. Experiments have been conducted to evaluate the performance of the proposed camera calibration method. The results show that the proposed method is superior in accuracy and effectiveness compared with traditional methods.
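The mapping from 3D control points to 2D image coordinates can be sketched with a generic radial-basis-function interpolator (here SciPy's RBFInterpolator, available in SciPy 1.7+, used as a stand-in for the paper's RBF network); the control points are synthesized from an assumed pinhole model purely for illustration.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic control points: 3D positions on a target and their pixel coordinates.
rng = np.random.default_rng(0)
X3d = rng.uniform([-100, -100, 500], [100, 100, 700], (200, 3))
K = np.array([[1400.0, 0, 640.0], [0, 1400.0, 512.0], [0, 0, 1.0]])
uvw = (K @ X3d.T).T
uv = uvw[:, :2] / uvw[:, 2:3]                      # ground-truth pinhole projection

# Non-parametric mapping from 3D coordinates to 2D image coordinates.
model = RBFInterpolator(X3d, uv, kernel="thin_plate_spline", smoothing=1e-6)

test = np.array([[10.0, -20.0, 600.0]])
pinhole = (K @ test.T).T
print("RBF prediction:", model(test), "pinhole:", pinhole[:, :2] / pinhole[:, 2:3])
```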
The Chang'e-3 panoramic camera, which is composed of two cameras with identical functions, performance and interfaces, is installed on the lunar rover mast. It can acquire 3D images of the lunar surface based on the principle of binocular stereo vision. By rotating and pitching the mast, it can take several photographs of the patrol area. After stitching these images, panoramic images of the scenes are obtained. Thus the topography and geomorphology of the patrol area and the impact craters, as well as the geological structure of the lunar surface, can be analyzed and studied. In addition, it can take color photographs of the lander using the Bayer color coding principle. It can observe the working status of the lander by switching between static image mode and dynamic video mode with automatic exposure time. The focal length of the lens on the panoramic camera is 50 mm and the field of view is 19.7°×14.5°. Under the best illumination and viewing conditions, the largest signal-to-noise ratio of the panoramic camera is 44 dB. Its static modulation transfer function is 0.33. A large number of ground testing experiments and on-orbit imaging results show that the functional interfaces of the panoramic camera work normally. The image quality of the panoramic camera is satisfactory. All the performance parameters of the panoramic camera satisfy the design requirements.
With the completion of the Chinese space station, an increasing number of extravehicular activities will be executed by astronauts, which are regarded as among the most dangerous activities in human space exploration. To guarantee the safety of astronauts and the successful accomplishment of missions, it is vital to determine the pose of astronauts during extravehicular activities. This article presents a monocular vision-based pose estimation method for astronauts during extravehicular activities that makes full use of the available observation resources. First, the camera is calibrated using objects of known structure, such as the spacesuit backpack or the circular handrail outside the space station. Subsequently, pose estimation is performed using the feature points on the spacesuit. The proposed methods are validated on both synthetic and semi-physical simulation experiments, demonstrating the high precision of the camera calibration and pose estimation. To further evaluate the performance of the methods in real-world scenarios, we use image sequences of the Shenzhou-13 astronauts during extravehicular activities. The experiments validate that camera calibration and pose estimation can be accomplished solely with the existing observation resources, without requiring additional complicated equipment. The estimated motion parameters of astronauts lay the technological foundation for subsequent applications such as mechanical analysis, task planning, and ground training of astronauts.
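Once the camera is calibrated, pose estimation from known structure points reduces to a Perspective-n-Point problem; the OpenCV sketch below uses hypothetical coordinates for a few planar points (e.g. corners of a backpack face) and assumed intrinsics, purely to illustrate the step, not the authors' implementation.

```python
import cv2
import numpy as np

# Hypothetical 3D coordinates of known structure points (metres, planar face, Z = 0)
# and their detected image projections in pixels -- illustrative values only.
object_pts = np.array([[0.0, 0.0, 0.0], [0.4, 0.0, 0.0],
                       [0.4, 0.6, 0.0], [0.0, 0.6, 0.0],
                       [0.2, 0.3, 0.0]], dtype=np.float64)
image_pts = np.array([[512.0, 300.0], [735.0, 310.0],
                      [728.0, 640.0], [505.0, 628.0],
                      [620.0, 470.0]], dtype=np.float64)
K = np.array([[1200.0, 0.0, 640.0], [0.0, 1200.0, 512.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)                                   # assume distortion already removed

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)                           # rotation matrix of the target
print("pose found:", ok)
print("R =\n", R, "\nt =", tvec.ravel())
```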
Local invariant algorithms applied in downward-looking image registration usually compute the camera's pose relative to visual landmarks. Generally, there are three requirements for image registration with these approaches. First, the algorithm should be insensitive to illumination. Second, the algorithm should have low computational complexity. Third, the depth information of the images needs to be estimated without other sensors. This paper investigates a well-known local invariant feature, the speeded up robust feature (SURF), and proposes a high-speed and robust image registration and localization algorithm based on it. With support from feature tracking and pose estimation methods, the proposed algorithm can compute camera poses under different scale, viewpoint and rotation conditions so as to precisely localize an object's position. Registration experiments are conducted with the scale invariant feature transform (SIFT), SURF and the proposed algorithm, and a method is designed to evaluate their performance. Furthermore, an object retrieval test is performed on remote sensing video. Because there is large deformation between remote sensing frames, the registration algorithm incorporates Kanade-Lucas-Tomasi (KLT) 3-D coplanar calibration feature tracking, which can localize targets of interest precisely and efficiently. The experimental results show that the proposed method has a higher localization speed and a lower localization error rate than traditional visual simultaneous localization and mapping (vSLAM) over a period of time.
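A generic version of the feature-based registration step is sketched below; SIFT is used as a freely available stand-in for SURF, the image file names are hypothetical, and the KLT tracking and pose estimation stages are not included.

```python
import cv2
import numpy as np

# File names are assumptions for the example.
img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching followed by RANSAC homography for the registration step.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print("matches:", len(good), "inlier ratio:", inliers.mean())
```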
A real-time arc welding robot visual control system based on a local network with a multi-level hierarchy is developed in this paper. It consists of an intelligence and human-machine interface level, a motion planning level, a motion control level and a servo control level. The last three levels form a local real-time open robot controller, which realizes motion planning and motion control of the robot. A camera calibration method based on the relative movement of the end-effector connected to the robot is proposed, and a method for tracking the weld seam based on structured-light stereo vision is provided. Combining the parameters of the cameras and the laser plane, three groups of position values in Cartesian space are obtained for each feature point in a stripe projected on the weld seam. The accurate three-dimensional positions of the edge points of the weld seam are calculated from the obtained parameters with an information fusion algorithm. By calculating the weld seam parameters from position and image data, the movement parameters of the robot used for tracking can be determined. A swing welding experiment on a type V groove weld is successfully conducted; the results show that the system performs high-resolution seam tracking in real time and works stably and efficiently.
Exactly capturing the three-dimensional (3D) motion information of an object is an essential and important task in computer vision, and is also one of the most difficult problems. In this paper, a binocular vision system and a method for determining the 3D motion parameters of an object from binocular sequence images are introduced. The main steps include camera calibration, matching of motion and stereo images, 3D feature point correspondence and solving for the motion parameters. Finally, experimental results of acquiring the motion parameters of objects moving with uniform velocity and uniform acceleration along a straight line, based on real binocular sequence images, are presented.
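The triangulation step that recovers 3D positions from matched binocular image points, from which motion parameters then follow by differencing over frames, can be sketched as below; the projection matrices, pixel coordinates and frame rate are assumed values, not from the paper.

```python
import cv2
import numpy as np

# Assumed calibrated stereo rig: identical intrinsics, right camera shifted 120 mm along X.
K = np.array([[1000.0, 0, 640.0], [0, 1000.0, 480.0], [0, 0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-120.0], [0.0], [0.0]])])

# Image positions of the same object feature at two time instants (one column per instant).
pts_left = np.array([[650.0, 500.0], [660.0, 505.0]]).T     # 2 x N, left camera
pts_right = np.array([[610.0, 500.0], [621.0, 505.0]]).T    # 2 x N, right camera

X_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
X = (X_h[:3] / X_h[3]).T                                    # N x 3 points in mm

# Displacement between the two instants gives a finite-difference velocity estimate.
dt = 1.0 / 30.0                                             # assumed frame interval (s)
print("3D points:\n", X)
print("velocity estimate (mm/s):", (X[1] - X[0]) / dt)
```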
Accurate stereo vision calibration is a preliminary step towards high-precision visual positioning of a robot. Combining the characteristics of the genetic algorithm (GA) and particle swarm optimization (PSO), a three-stage calibration method based on hybrid intelligent optimization is proposed for nonlinear camera models in this paper. The motivation is to improve the accuracy of the calibration process. In this approach, stereo vision calibration is treated as an optimization problem that can be solved by the GA and PSO. The initial linear values are obtained in the first stage. In the second stage, the two cameras' parameters are optimized separately. Finally, the integrated optimized calibration of the two models is obtained in the third stage. Direct linear transformation (DLT), GA and PSO are used in the three stages respectively. It is shown that every stage finds a near-optimal solution that can be used to initialize the next stage. Simulation analysis and actual experimental results indicate that this calibration method is more accurate and robust in noisy environments than traditional calibration methods. The proposed method can fulfill the requirements of sophisticated robot visual operations.
In order to quickly and efficiently obtain the bottom pattern of a shoe and its spraying trajectory, this paper proposes a method based on binocular stereo vision. After acquiring the target image and performing edge detection with the Canny algorithm, stereo matching is carried out based on area and feature algorithms. To eliminate false matching points, the epipolar geometry constraint from computer vision is used. To obtain the 3D point cloud of the spraying curve, the principle of binocular stereo vision 3D measurement is adopted, followed by cubic spline curve fitting. Implementation with the HALCON image processing software demonstrates the feasibility and effectiveness of the method.
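The final curve-fitting step can be sketched in Python with a chord-length-parameterized cubic spline (a generic stand-in for the HALCON-based implementation); the 3D points are hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical ordered 3D points reconstructed along the sole edge (mm).
pts = np.array([[0, 0, 10], [20, 5, 11], [40, 8, 12], [60, 7, 12.5],
                [80, 3, 12], [100, -2, 11]], dtype=float)

# Parameterize by cumulative chord length, then fit one cubic spline per coordinate.
chord = np.concatenate([[0.0], np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))])
spline = CubicSpline(chord, pts, axis=0)

# Resample the spraying trajectory at evenly spaced parameter values.
s = np.linspace(0.0, chord[-1], 50)
trajectory = spline(s)
print(trajectory.shape, trajectory[:3])
```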
In this paper, a distortion correction method with reduced complexity is proposed. With the single-parameter division model, initial approximations of the distortion parameter and the distortion center can be calibrated. Based on the distances from the image center to the fitted lines of the extracted curves, a bending measurement function with a weighting factor is proposed to optimize the initial values. Simulations and experiments verify the proposed method.
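A minimal sketch of undistortion with the single-parameter division model, which maps a distorted point p_d to p_u = c + (p_d − c)/(1 + λ r_d²) about the distortion center c; the parameter and center values below are assumptions for illustration.

```python
import numpy as np

def undistort_division(points, center, lam):
    """Single-parameter division model: p_u = c + (p_d - c) / (1 + lam * r_d^2)."""
    d = points - center
    r2 = np.sum(d**2, axis=1, keepdims=True)
    return center + d / (1.0 + lam * r2)

# Assumed distortion center and parameter (illustrative values only).
center = np.array([640.0, 480.0])
lam = -1.0e-7

distorted = np.array([[100.0, 80.0], [640.0, 480.0], [1200.0, 900.0]])
print(undistort_division(distorted, center, lam))
```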