Funding: Sponsored by the National High Technology Research and Development Program of China (Grant No. 863-2-5-1-13B) and the Jilin Province Science and Technology Development Plan (Grant No. 20130522107JH).
Abstract: To enhance the image motion compensation accuracy of off-axis three-mirror anastigmatic (TMA) three-line-array aerospace mapping cameras, a new method of image motion velocity field modeling is proposed in this paper. First, based on the imaging principle of mapping cameras, an analytical expression for the image motion velocity of off-axis TMA three-line-array aerospace mapping cameras is derived from a set of established coordinate systems and the principles of attitude dynamics. Then, a three-line-array mapping camera is studied as a case: the focal-plane image motion velocity fields of the forward-view, nadir-view, and backward-view cameras are simulated, and optimization schemes for image motion velocity matching and drift angle matching are formulated according to the simulation results. Finally, the method is verified with a dynamic imaging experimental system. The results show that when image motion compensation for the nadir-view camera is performed with the proposed image motion velocity field model, the line pairs of the target images at the Nyquist frequency are clear and distinguishable. Under the constraint that the modulation transfer function (MTF) decreases by no more than 5%, when the horizontal (line) frequencies of the forward-view and backward-view cameras are adjusted uniformly according to the proposed image motion velocity matching scheme, the number of time delay integration (TDI) stages reaches at most 6; when more than 6 TDI stages are used, the three cameras must adjust their horizontal frequencies independently. However, when the proposed drift angle matching scheme is adopted for uniform drift angle adjustment, the number of TDI stages will not exceed 81. The experimental results demonstrate the validity and accuracy of the proposed image motion velocity field model and matching optimization schemes, providing a reliable basis for on-orbit image motion compensation of aerospace mapping cameras.
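As a rough illustration of how a focal-plane image motion velocity vector maps to the two quantities these matching schemes adjust, the sketch below computes a TDI line (horizontal) frequency and a drift angle from assumed velocity components and an assumed pixel pitch. This is a generic textbook relation for TDI sensors, not the analytical model derived in the paper, and all numbers are hypothetical.

```python
import math

def tdi_settings(v_along, v_cross, pixel_pitch_m):
    """Convert a focal-plane image motion velocity (m/s) into TDI control values.

    v_along: velocity component along the TDI transfer (column) direction, m/s
    v_cross: velocity component across the columns, m/s
    pixel_pitch_m: detector pixel pitch, m (assumed value)
    """
    speed = math.hypot(v_along, v_cross)
    # Line (horizontal) frequency: rows must be clocked so charge transfer
    # keeps pace with the image speed once the columns are aligned with it.
    line_frequency_hz = speed / pixel_pitch_m
    # Drift angle: rotation needed so the TDI columns follow the velocity vector.
    drift_angle_deg = math.degrees(math.atan2(v_cross, v_along))
    return line_frequency_hz, drift_angle_deg

# Hypothetical numbers: 35 um pixels, 0.70 m/s along-track, 0.01 m/s cross-track.
f_line, drift = tdi_settings(0.70, 0.01, 35e-6)
print(f"line frequency ~ {f_line:.0f} Hz, drift angle ~ {drift:.2f} deg")
```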
Funding: Supported by the National Natural Science Foundation of China (Nos. 60975001 and 61271412).
Abstract: An efficient adaptive approximation demosaicking algorithm based on sampled edge patterns is presented for mosaic images from the Bayer color filter array. The proposed algorithm determines the edge pattern from the four nearest green values surrounding the green interpolation location, and then applies different adaptive interpolation steps according to the edge pattern. Simulations on 12 Kodak photos and 15 IMAX high-quality images show that the proposed method outperforms four other demosaicking methods (bilinear, effective color interpolation, Lu's method, and Chen's method) in average color peak signal-to-noise ratio, while maintaining relatively low complexity owing to a constant color-difference interpolation step and a reasonable termination condition for the iteration.
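The sketch below illustrates the general idea of edge-directed green interpolation on a Bayer pattern, classifying the local edge from the four nearest green samples and interpolating along it. The threshold, the color-difference step, and the terminating condition of the paper's algorithm are not reproduced here; this is only the underlying principle.

```python
import numpy as np

def interpolate_green(bayer, r, c, threshold=10.0):
    """Edge-directed green interpolation at a non-green Bayer site (r, c).

    bayer: 2-D float array of raw CFA samples. At a red/blue site the four
    nearest greens are its 4-neighbours; their gradients pick the direction.
    """
    up, down = bayer[r - 1, c], bayer[r + 1, c]
    left, right = bayer[r, c - 1], bayer[r, c + 1]
    grad_v = abs(up - down)       # change along the vertical direction
    grad_h = abs(left - right)    # change along the horizontal direction
    if grad_h + threshold < grad_v:        # horizontal edge: interpolate along it
        return (left + right) / 2.0
    if grad_v + threshold < grad_h:        # vertical edge: interpolate along it
        return (up + down) / 2.0
    return (up + down + left + right) / 4.0  # flat region: average all four

# Tiny synthetic example: a vertical edge, green missing at the centre pixel.
cfa = np.array([[10, 10, 90],
                [10, 50, 90],
                [10, 10, 90]], dtype=float)
print(interpolate_green(cfa, 1, 1))   # interpolates vertically, avoiding the edge
```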
Abstract: Focusing of an area array camera is an important step in building a high-precision imaging camera, and its testing method needs special study. In this paper, a camera focusing method is introduced in which the defocusing depth of the camera is calculated from the frequency spectrum of the defocused image. The method is especially suitable for focusing planar array cameras, since it avoids the complicated adjustment of the focal plane during the focusing process.
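A minimal sketch of the underlying spectral idea, under assumed parameters (a simple radial cutoff and a synthetic test image): the share of spectral energy above a chosen frequency drops as defocus grows, so it can serve as a focus/defocus indicator. It is not the paper's defocus-depth formula.

```python
import numpy as np

def spectral_focus_measure(image, cutoff_ratio=0.25):
    """Ratio of high-frequency to total spectral energy of a grayscale image.

    A well-focused image keeps more energy beyond the radial cutoff; a
    defocused image, blurred by the lens point spread function, loses it.
    """
    img = np.asarray(image, dtype=float)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    high = spectrum[radius > cutoff_ratio * min(h, w) / 2].sum()
    return high / spectrum.sum()

# Hypothetical check: a 3x3 box-blurred copy scores lower than the sharp original.
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
blurred = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
print(spectral_focus_measure(sharp) > spectral_focus_measure(blurred))  # True
```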
Funding: National Natural Science Foundation of China (No. 61172120) and National Key Science Foundation of Tianjin (No. 13JCZDJC34800).
Abstract: Aiming at the difficulty of locating all the aperture positions of a large-size component with the Hough circle detection method alone, this article presents a non-contact measurement method that combines integral imaging technology with the Hough circle detection algorithm. First, a set of integral imaging information acquisition algorithms is proposed according to classical imaging theory. Second, a camera array experimental device is built using a two-dimensional translation stage and a charge-coupled device (CCD) camera. When the system operates, the element image array captured by the camera is used to locate the component apertures with the Hough circle detection and coordinate acquisition algorithms. A verification experiment was carried out on this basis. The results show that the detection error of the component aperture positions is within 0.3 mm, which provides effective theoretical support for applying integral imaging technology to high-precision detection.
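As a hedged sketch of the circle-detection step, the code below runs OpenCV's HoughCircles on a single element image and converts the detected centres to physical coordinates with an assumed, pre-calibrated scale factor. The integral imaging acquisition and the paper's coordinate acquisition algorithm are not modeled, and the detection parameters are illustrative.

```python
import cv2
import numpy as np

def locate_apertures(element_image, mm_per_pixel):
    """Detect circular apertures in one element image and return centres in mm.

    element_image: 8-bit grayscale image from one camera of the array.
    mm_per_pixel: assumed, pre-calibrated scale factor from pixels to mm.
    """
    blurred = cv2.medianBlur(element_image, 5)          # suppress sensor noise
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT,
                               dp=1, minDist=20,
                               param1=100, param2=30,
                               minRadius=5, maxRadius=60)
    if circles is None:
        return []
    # Convert pixel centres (x, y) and radii to physical coordinates.
    return [(x * mm_per_pixel, y * mm_per_pixel, r * mm_per_pixel)
            for x, y, r in circles[0]]

# Hypothetical usage with a synthetic image containing one bright ring.
img = np.zeros((200, 200), np.uint8)
cv2.circle(img, (100, 100), 30, 255, 2)
print(locate_apertures(img, mm_per_pixel=0.05))
```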
Abstract: Light field cameras are becoming popular in computer vision and graphics, and many research and commercial applications have already been proposed. Various types of cameras have been developed, the camera array being one way of acquiring a 4D light field image using multiple cameras. Camera calibration is essential, since each application requires the correct projection and ray geometry of the light field; the calibrated parameters are used to rectify the light field image from the images captured by the multiple cameras. Various camera calibration approaches have been proposed for a single camera, multiple cameras, and a moving camera. Although these approaches can be applied to calibrating camera arrays, they are not effective in terms of accuracy and computational cost, and less attention has been paid to the calibration of a light field camera. In this paper, we propose a calibration method for a camera array and a rectification method for generating a light field image from the captured images. We propose a two-step algorithm consisting of closed-form initialization and nonlinear refinement, which extends Zhang's well-known method to the camera array. More importantly, we introduce a rigid camera constraint whereby the cameras are rigidly aligned in the array, and we utilize this constraint in our calibration. Using this constraint, we obtained much faster and more accurate calibration results in the experiments.
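The sketch below mirrors the overall two-step structure (per-camera Zhang-style calibration followed by a fixed transform from each camera to a reference camera) using standard OpenCV routines. It is a stand-in for, and not, the paper's joint refinement with the rigid camera constraint; camera ids, with 0 as the reference, and the checkerboard data layout are assumptions.

```python
import cv2
import numpy as np

def calibrate_camera_array(object_points, image_points_per_camera, image_size):
    """Two-step sketch: per-camera Zhang calibration, then fixed relative poses.

    object_points: list (one per view) of Nx3 float32 checkerboard corners.
    image_points_per_camera: dict {camera_id: list of Nx2 float32 corners per view},
    with camera id 0 taken as the reference (an assumption of this sketch).
    """
    intrinsics = {}
    for cam, img_pts in image_points_per_camera.items():
        # Step 1: closed-form initialisation + nonlinear refinement (Zhang's method).
        _, K, dist, _, _ = cv2.calibrateCamera(object_points, img_pts,
                                               image_size, None, None)
        intrinsics[cam] = (K, dist)

    ref = 0
    extrinsics = {ref: (np.eye(3), np.zeros((3, 1)))}
    for cam, img_pts in image_points_per_camera.items():
        if cam == ref:
            continue
        # Step 2: one rigid transform per camera w.r.t. the reference, shared by
        # all views -- a simplified expression of the rigid-array constraint.
        K0, d0 = intrinsics[ref]
        K1, d1 = intrinsics[cam]
        _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
            object_points, image_points_per_camera[ref], img_pts,
            K0, d0, K1, d1, image_size, flags=cv2.CALIB_FIX_INTRINSIC)
        extrinsics[cam] = (R, T)
    return intrinsics, extrinsics
```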
Funding: Partly supported by the JSPS Grant-in-Aid for Scientific Research #17300032.
Abstract: A full-parallax light field is captured by a small-scale 3D image scanning system and applied to holographic display. A vertical camera array is scanned horizontally to capture full-parallax imagery, and the vertical views between cameras are interpolated by a depth image-based rendering technique. An improved depth estimation technique reduces the estimation error, and a high-density light field is obtained. The captured data are used to compute a hologram via a ray-sampling plane. This technique enables high-resolution display even in deep 3D scenes although the hologram is calculated from ray information, and thus it retains the important advantage of holographic 3D display.
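As a minimal illustration of the depth image-based rendering step used to interpolate vertical views, the sketch below forward-warps one reference view to a virtual vertical position using its depth map. The focal length, camera spacing, and the absence of hole filling and occlusion handling are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def dibr_vertical_view(ref_image, depth, baseline_frac, focal_px, camera_gap_m):
    """Forward-warp a reference view to a virtual camera shifted vertically.

    depth: per-pixel scene depth in metres (same height/width as ref_image).
    baseline_frac: position of the virtual view between two cameras (0..1).
    focal_px, camera_gap_m: assumed focal length in pixels and camera spacing.
    Holes and occlusions are left unhandled in this minimal sketch.
    """
    h, w = depth.shape
    out = np.zeros_like(ref_image)
    # Vertical disparity of each pixel for the requested virtual position.
    disparity = focal_px * (baseline_frac * camera_gap_m) / depth
    rows, cols = np.indices((h, w))
    new_rows = np.clip(np.round(rows - disparity).astype(int), 0, h - 1)
    out[new_rows, cols] = ref_image[rows, cols]
    return out

# Hypothetical use: 100x120 view, everything 2 m away, halfway between cameras.
view = np.random.rand(100, 120)
depth = np.full((100, 120), 2.0)
print(dibr_vertical_view(view, depth, 0.5, focal_px=800.0, camera_gap_m=0.05).shape)
```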
Abstract: A novel 2-D cosmic ray position detector has been built and studied. It integrates a CsI(Na) crystal pixel array, an optical fiber array, an image intensifier, and an ICCD camera. The 2-D position of a cosmic ray track is determined by the location of the fired CsI(Na) pixel. The scintillation light of the 1.0 × 1.0 mm CsI(Na) pixels is delivered to the image intensifier through fibers, and the light information is recorded by the ICCD camera in the form of images, from which the 2-D positions can be reconstructed. The background noise and cosmic ray images have been studied. The study shows that the cosmic ray detection efficiency can reach 11.4%, while the false accept rate is less than 1%.
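A hedged sketch of the position reconstruction from one ICCD frame, assuming the fired crystal pixel appears as the brightest spot above the background noise and that the crystal grid maps directly onto image coordinates. The actual pixel-to-fiber mapping, gain calibration, and efficiency analysis of the detector are not modeled.

```python
import numpy as np

def reconstruct_hit(frame, pixel_pitch_mm=1.0, noise_sigma=3.0):
    """Locate the fired CsI(Na) pixel from one ICCD frame (a hedged sketch).

    frame: 2-D array of ICCD counts; the crystal pixel grid is assumed to map
    one-to-one onto the image grid (a simplifying assumption).
    Returns the (x, y) position in mm, or None if nothing fired above noise.
    """
    background = np.median(frame)
    threshold = background + noise_sigma * frame.std()
    if frame.max() < threshold:
        return None                      # background-only frame
    row, col = np.unravel_index(np.argmax(frame), frame.shape)
    return col * pixel_pitch_mm, row * pixel_pitch_mm

# Hypothetical frame: noise plus one bright fired pixel at column 12, row 7.
rng = np.random.default_rng(1)
img = rng.normal(100, 5, (32, 32))
img[7, 12] += 300
print(reconstruct_hit(img))   # -> (12.0, 7.0)
```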
Funding: Supported by the National Natural Science Foundation of China (Nos. 62172315, 62073262, and 61672429), the Fundamental Research Funds for the Central Universities, the Innovation Fund of Xidian University (No. 20109205456), the Key Research and Development Program of Shaanxi (No. S2021-YF-ZDCXL-ZDLGY-0127), and HUAWEI.
Abstract: Free-viewpoint video allows the user to view objects from any virtual perspective, creating an immersive visual experience. This technology enhances the interactivity and freedom of multimedia performances. However, many free-viewpoint video synthesis methods can hardly satisfy the requirement of working in real time with high precision, particularly for sports fields with large areas and numerous moving objects. To address these issues, we propose a free-viewpoint video synthesis method based on distance field acceleration. The central idea is to fuse multi-view distance field information and use it to adjust the search step size adaptively. Adaptive step-size search is used in two ways: for fast estimation of multi-object three-dimensional surfaces, and for synthetic view rendering based on global occlusion judgement. We have implemented our ideas for interactive display using parallel computing with the CUDA and OpenGL frameworks, and have used real-world and simulated datasets for evaluation. The results show that the proposed method can render free-viewpoint videos with multiple objects on large sports fields at 25 fps. Furthermore, the visual quality of our synthetic novel-viewpoint images exceeds that of state-of-the-art neural-rendering-based methods.
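The adaptive step-size idea is essentially sphere tracing a distance field: the distance value at the current point is a safe step, so the ray crosses empty space in a few large steps and slows down near surfaces. The sketch below shows this for a single ray against a toy analytic distance field; it is not the paper's fused multi-view distance fields or CUDA renderer.

```python
import numpy as np

def march_ray(origin, direction, distance_field, max_steps=128, eps=1e-3):
    """Sphere tracing: advance a ray with a step size read from the distance field.

    distance_field: callable p -> distance to the nearest object surface.
    The distance value is a safe step, so the step size adapts automatically:
    large in empty space, small near surfaces (sketched for one ray only).
    """
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = distance_field(p)
        if d < eps:                 # close enough: the ray hit a surface
            return p
        t += d                      # adaptive step: exactly the safe distance
    return None                     # no surface found within max_steps

# Hypothetical scene: a sphere of radius 1 centred at (0, 0, 5).
sphere = lambda p: np.linalg.norm(p - np.array([0.0, 0.0, 5.0])) - 1.0
print(march_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), sphere))  # ~ [0, 0, 4]
```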
Funding: The National Natural Science Foundation of China (No. 62001482) and the Hunan Provincial Natural Science Foundation of China (No. 2021JJ40676).
Abstract: Inspired by the compound eyes of insects, many multi-aperture optical imaging systems have been proposed to improve imaging quality, e.g., to yield a high-resolution image or an image with a large field-of-view. Previous research has reviewed existing multi-aperture optical imaging systems, but few papers emphasize the light field acquisition model, which is essential to bridge the gap between configuration design and application. In this paper, we review typical multi-aperture optical imaging systems (i.e., the artificial compound eye, the light field camera, and the camera array), and then summarize general mathematical light field acquisition models for the different configurations. These mathematical models provide methods for calculating the key indexes of a specific multi-aperture optical imaging system, such as the field-of-view and the sub-image overlap ratio. The mathematical tools simplify the quantitative design and evaluation of imaging systems for researchers.
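As an example of the kind of key index such acquisition models yield, the sketch below computes the field-of-view of a single pinhole camera and a simplified fronto-parallel sub-image overlap ratio for two neighbouring cameras of an array. The sensor size, focal length, baseline, and depth are assumed values, and the formulas are generic rather than the paper's models.

```python
import math

def camera_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field-of-view of a single pinhole camera."""
    return 2.0 * math.degrees(math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

def overlap_ratio(sensor_width_mm, focal_length_mm, baseline_mm, depth_mm):
    """Fraction of two neighbouring sub-images covering the same scene region
    at a given depth (a simplified fronto-parallel model, not a general one)."""
    footprint = depth_mm * sensor_width_mm / focal_length_mm   # scene width seen
    shared = max(0.0, footprint - baseline_mm)                 # width seen by both
    return shared / footprint

# Hypothetical camera array: 6.4 mm sensor, 8 mm lens, 40 mm spacing, 1 m depth.
print(camera_fov_deg(6.4, 8.0))               # ~43.6 degrees
print(overlap_ratio(6.4, 8.0, 40.0, 1000.0))  # ~0.95
```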
Funding: Supported by the National Key Technology R&D Program (Grant No. 2014BAK11B04) and the National Natural Science Foundation of China (Grant Nos. 11272089, 11327201, 11532005 & 11602056).
Abstract: In the surface imaging of underwater structures, a long working distance reduces image quality because of the turbidity of the water. To acquire high-definition, large field-of-view (FOV) images for surface detection, a short-working-distance underwater imaging system based on a camera array is proposed. A multi-view calibration and rectification method is developed, and a look-up table (LUT) method and a multi-resolution spline (MRS) method are applied to stitch the array images in real time and seamlessly. Experiments are conducted both in air and in water, and the strengths and weaknesses of the LUT and MRS methods are discussed. Based on the results, the effectiveness of the system for surface detection of underwater structures is verified.
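A hedged sketch of the two stitching ingredients named above: a LUT-style remap (precomputed coordinate maps applied to every frame) and a generic multi-resolution spline (Laplacian pyramid) blend for two aligned single-channel images. It is not the paper's implementation; the coordinate maps and blend mask are assumed to be available from a prior calibration.

```python
import cv2
import numpy as np

def remap_with_lut(image, map_x, map_y):
    """LUT stitching step: each output pixel looks up its source coordinate in
    precomputed float32 maps, so rectification and warping run at video rate."""
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def blend_multiresolution(img_a, img_b, mask, levels=4):
    """Multi-resolution spline (Laplacian pyramid) blend of two aligned images.

    img_a, img_b: single-channel images with values in 0..255, same size.
    mask: float array in [0, 1], 1 where img_a should dominate.
    A generic sketch, not the paper's exact MRS implementation.
    """
    ga = [img_a.astype(np.float32)]
    gb = [img_b.astype(np.float32)]
    gm = [mask.astype(np.float32)]
    for _ in range(levels):                      # Gaussian pyramids
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))
    blended = ga[-1] * gm[-1] + gb[-1] * (1 - gm[-1])   # coarsest level
    for i in range(levels - 1, -1, -1):          # reconstruct, blending Laplacians
        size = (ga[i].shape[1], ga[i].shape[0])
        up = cv2.pyrUp(blended, dstsize=size)
        la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size)   # Laplacian of img_a
        lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size)   # Laplacian of img_b
        blended = up + la * gm[i] + lb * (1 - gm[i])
    return np.clip(blended, 0, 255).astype(np.uint8)
```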
Funding: Project supported by the National Natural Science Foundation of China (Nos. 61906133, 62020106004, and 92048301).
Abstract: Light field (LF) imaging has attracted attention because of its ability to solve computer vision problems. In this paper we briefly review the research progress of LF imaging in computer vision in recent years. Among the factors that affect the development of computer vision, the richness and accuracy of visual information acquisition are decisive. LF imaging technology has made great contributions to computer vision because it uses camera or microlens arrays to record the position and direction information of light rays, acquiring complete three-dimensional (3D) scene information. LF imaging improves the accuracy of depth estimation, image segmentation, blending, fusion, and 3D reconstruction. LF has also been applied innovatively to iris and face recognition, identification of materials and fake pedestrians, acquisition of epipolar plane images, shape recovery, and LF microscopy. Here, we further summarize the existing problems and the development trends of LF imaging in computer vision, including the establishment and evaluation of LF datasets, applications under high-dynamic-range (HDR) conditions, LF image enhancement, virtual reality, 3D display and 3D movies, military optical camouflage technology, image recognition at the micro scale, HDR-based image processing methods, and the optimal relationship between spatial resolution and four-dimensional (4D) LF information acquisition. LF imaging has achieved great success in various studies: over the past 25 years, more than 180 publications have reported the capability of LF imaging in solving computer vision problems. We summarize these reports to make it easier for researchers to find detailed methods for specific solutions.
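To make the "position plus direction" nature of a 4D light field concrete, the sketch below performs textbook shift-and-add refocusing on a light field stored as an array of views, such as one captured by a camera array. The array size, view indexing, and slope parameter are illustrative assumptions and are not tied to any particular system discussed in the survey.

```python
import numpy as np

def refocus(light_field, slope):
    """Shift-and-add refocusing of a 4-D light field L[s, t, y, x].

    light_field: views indexed by camera position (s, t) in the array.
    slope: per-view shift in pixels per unit of camera offset; varying it
    moves the synthetic focal plane (a textbook illustration only).
    """
    n_s, n_t, h, w = light_field.shape
    cs, ct = (n_s - 1) / 2.0, (n_t - 1) / 2.0
    out = np.zeros((h, w), dtype=float)
    for s in range(n_s):
        for t in range(n_t):
            dy = int(round(slope * (s - cs)))   # shift proportional to the
            dx = int(round(slope * (t - ct)))   # view's offset from the centre
            out += np.roll(np.roll(light_field[s, t], dy, axis=0), dx, axis=1)
    return out / (n_s * n_t)

# Hypothetical 3x3 camera array of 64x64 views.
lf = np.random.rand(3, 3, 64, 64)
print(refocus(lf, slope=1.5).shape)   # (64, 64)
```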