Funding: Supported by the National Natural Science Foundation of China (42221002, 42171432), the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0100), and the Fundamental Research Funds for the Central Universities.
Abstract: The geometric accuracy of topographic mapping with high-resolution remote sensing images is inevitably affected by orbiter attitude jitter. Therefore, it is necessary to conduct preliminary research on the stereo mapping camera equipped on a lunar orbiter before launch. In this work, an imaging simulation method considering the attitude jitter is presented. The impact of different attitude jitter on terrain undulation is analyzed by simulating jitter on each of the three attitude angles separately. The proposed simulation method is based on the rigorous sensor model, using the lunar digital elevation model (DEM) and orthoimage as reference data. The orbit and attitude of the lunar stereo mapping camera are simulated while considering the attitude jitter. Two-dimensional simulated stereo images are generated according to the position and attitude of the orbiter in a given orbit. Experimental analyses were conducted using the DEM derived from the simulated stereo images. The simulation imaging results demonstrate that the proposed method can ensure imaging efficiency without losing the accuracy of topographic mapping. The effect of attitude jitter on the stereo mapping accuracy of the simulated images was analyzed through a DEM comparison.
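The abstract does not specify the jitter model; a minimal sketch of one common choice, superimposing a low-amplitude sinusoid on each nominal attitude angle, is given below. The function name, amplitudes, and frequencies are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def jittered_attitude(t, nominal, amplitude_deg, freq_hz, phase=0.0):
    """Add a sinusoidal jitter term to a nominal attitude angle time series.

    t             : array of sampling times [s] (e.g. one per image line)
    nominal       : nominal angle values [deg] at those times
    amplitude_deg : jitter amplitude [deg]  (illustrative, not from the paper)
    freq_hz       : jitter frequency [Hz]   (illustrative, not from the paper)
    """
    return nominal + amplitude_deg * np.sin(2.0 * np.pi * freq_hz * t + phase)

# Example: 1 s of line-by-line imaging at a 1 kHz line rate,
# with jitter applied independently to roll, pitch, and yaw.
t = np.arange(0.0, 1.0, 1e-3)
roll  = jittered_attitude(t, nominal=0.0, amplitude_deg=5e-4, freq_hz=1.0)
pitch = jittered_attitude(t, nominal=0.0, amplitude_deg=3e-4, freq_hz=0.5)
yaw   = jittered_attitude(t, nominal=0.0, amplitude_deg=2e-4, freq_hz=0.2)
```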
Abstract: High resolution remote sensing data has been applied in many fields such as national security, economic construction, and the daily life of the general public around the world, creating a huge market. Commercial remote sensing cameras have been developed vigorously throughout the world over the last few decades, resulting in resolutions down to 0.31 m. In 2010, the Chinese government approved the implementation of the China High-resolution Earth Observation System (CHEOS) Major Special Project, giving priority to the development of high resolution remote sensing satellites. More than half of CHEOS has been constructed to date and 5 satellites operate in orbit. These cameras have different characteristics. A number of innovative technologies have been adopted, which have led to camera performance increasing in leaps and bounds. The products and the production capability have raised China's remote sensing technology to a level on a par with Europe and the US.
Abstract: To transfer color data from a device-dependent (video camera) color space into a device-independent color space, a multilayer feedforward network with the error backpropagation (BP) learning rule was regarded as a nonlinear transformer realizing the mapping from the RGB color space to the CIELAB color space. A variety of mapping accuracies were obtained with different network structures. BP neural networks can provide a satisfactory mapping accuracy in the field of color space transformation for video cameras.
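As a rough illustration of this kind of nonlinear transformer, the sketch below uses scikit-learn's MLPRegressor as a stand-in for the paper's BP network; the hidden-layer size and the placeholder training pairs are assumptions, and real use would fit measured RGB/CIELAB pairs from a color chart.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Training pairs: camera RGB values and the corresponding CIELAB values of
# the same color patches. The random data below is only a placeholder for
# real colorimetric measurements.
rng = np.random.default_rng(0)
rgb_train = rng.uniform(0.0, 1.0, size=(200, 3))     # device RGB in [0, 1]
lab_train = rng.uniform(0.0, 100.0, size=(200, 3))   # placeholder CIELAB targets

# A small feedforward network trained with backpropagation; the single
# hidden layer of 16 tanh units is an illustrative choice, not the paper's.
net = MLPRegressor(hidden_layer_sizes=(16,), activation='tanh',
                   solver='adam', max_iter=5000, random_state=0)
net.fit(rgb_train, lab_train)

lab_pred = net.predict(rgb_train[:5])  # map new RGB samples to CIELAB
```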
Funding: Project (2012CB720003) supported by the National Basic Research Program of China; Projects (61320106010, 61127007, 61121003, 61573019) supported by the National Natural Science Foundation of China; Project (2013DFE13040) supported by the Special Program for International Science and Technology Cooperation from the Ministry of Science and Technology of China.
Abstract: Because of its simple algorithm and hardware requirements, optical flow-based motion estimation has become a hot research field, especially in GPS-denied environments. Optical flow can be used to obtain aircraft motion information, but the six-degree-of-freedom (6-DOF) motion still cannot be accurately estimated by existing methods. The purpose of this work is to provide a motion estimation method based on optical flow from forward- and down-looking cameras, which does not rely on the assumption of level flight. First, the distribution and decoupling method of optical flow from the forward camera is utilized to get attitude. Then, the resulting angular velocities are used to obtain the translational optical flow of the down camera, which eliminates the influence of rotational motion on velocity estimation. Besides, the translational motion estimation equation is simplified by establishing the relation between the depths of feature points and the aircraft altitude. Finally, simulation results show that the presented method is accurate and robust.
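The de-rotation step, subtracting the rotation-induced flow implied by the measured angular velocity, can be illustrated with the standard pinhole motion-field equations; the sign convention and the use of normalized image coordinates are assumptions of this sketch, not necessarily the paper's formulation.

```python
import numpy as np

def rotational_flow(x, y, omega):
    """Rotational part of the motion field at normalized image coords (x, y)
    for angular velocity omega = (wx, wy, wz), focal length normalized to 1.
    The sign convention is illustrative; it must match the camera model in use."""
    wx, wy, wz = omega
    u_rot = x * y * wx - (1.0 + x**2) * wy + y * wz
    v_rot = (1.0 + y**2) * wx - x * y * wy - x * wz
    return u_rot, v_rot

def translational_flow(u_meas, v_meas, x, y, omega):
    """Remove the rotation-induced component from the measured flow so that
    only the translation-induced component remains for velocity estimation."""
    u_rot, v_rot = rotational_flow(x, y, omega)
    return u_meas - u_rot, v_meas - v_rot
```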
Funding: This work was supported by a Grant-in-Aid for Scientific Research (C) (No. 17500119).
Abstract: This paper describes a multiple camera-based method to reconstruct the 3D shape of a human foot. From a foot database, an initial 3D model of the foot represented by a cloud of points is built. The shape parameters, which can characterize more than 92% of a foot, are defined by using the principal component analysis method. Then, using "active shape models", the initial 3D model is adapted to the real foot captured in multiple images by applying some constraints (edge points' distance and color variance). We focus here on the experimental part, where we demonstrate the efficiency of the proposed method on a plastic foot model and also on real human feet with various shapes. We propose and compare different ways of texturing the foot, which is needed for reconstruction. We present an experiment performed on the plastic foot model and on human feet and propose two ways to improve the final 3D shape's accuracy according to the previous experiments' results. The first improvement is the densification of the cloud of points used to represent the initial model and the foot database. The second improvement concerns the projected patterns used to texture the foot. We conclude by showing the results obtained for a human foot, with the average computed shape error being only 1.06 mm.
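A minimal sketch of the PCA step, building shape modes from a foot database and instantiating a foot from shape parameters, might look as follows; the data layout (one flattened point cloud per row) is an assumption, and the 0.92 variance threshold simply echoes the abstract's 92% figure.

```python
import numpy as np

def build_shape_model(feet, variance_kept=0.92):
    """feet: (n_samples, 3*n_points) matrix, each row a foot point cloud
    flattened as [x1, y1, z1, x2, ...]. Returns the mean shape and the
    principal modes explaining the requested fraction of variance."""
    mean = feet.mean(axis=0)
    centered = feet - mean
    # SVD-based PCA: rows of vt are the shape modes.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s**2 / (len(feet) - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), variance_kept)) + 1
    return mean, vt[:k], var[:k]

def synthesize_foot(mean, modes, b):
    """Instantiate a foot from shape parameters b (one coefficient per mode)."""
    return mean + b @ modes
```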
Funding: Project supported by the National Natural Science Foundation of China (No. 60772134) and the Innovation Foundation of Xidian University, China (No. Chuang 05018).
Abstract: A novel color compensation method for multi-view video coding (MVC) is proposed, which efficiently exploits the inter-view dependencies between views in the presence of color mismatch caused by the diversity of cameras. A color compensation model is developed in the RGB channels and then extended to the YCbCr channels for practical use. A modified inter-view reference picture is constructed based on the color compensation model, which is more similar to the coding picture than the original inter-view reference picture. Moreover, the color compensation factors can be derived in both the encoder and the decoder, so no additional data need to be transmitted to the decoder. The experimental results show that the proposed method improves the coding efficiency of MVC and maintains good subjective quality.
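The paper's compensation model is not given in the abstract; as a stand-in, the sketch below estimates a simple per-channel gain/offset from data available to both encoder and decoder and applies it to build a modified inter-view reference picture. The least-squares form and the function names are assumptions.

```python
import numpy as np

def estimate_gain_offset(ref, target):
    """Least-squares gain/offset per channel so that gain*ref + offset ~= target.
    ref, target: arrays of shape (H, W, C) taken from data available to both
    encoder and decoder, so the factors need not be transmitted."""
    gains, offsets = [], []
    for c in range(ref.shape[2]):
        x = ref[..., c].ravel().astype(np.float64)
        y = target[..., c].ravel().astype(np.float64)
        a = np.cov(x, y, bias=True)[0, 1] / (x.var() + 1e-12)  # slope
        b = y.mean() - a * x.mean()                             # intercept
        gains.append(a)
        offsets.append(b)
    return np.array(gains), np.array(offsets)

def compensate(ref, gains, offsets):
    """Build a modified inter-view reference picture closer to the coding view."""
    return np.clip(ref * gains + offsets, 0, 255)
```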
Abstract: This paper presents a real-time, dynamic system that uses high resolution gimbals and motorized lenses with position encoders on their zoom and focus elements to "recalibrate" the system as needed to track a target. Systems that initially calibrate for a mapping between pixels of a wide field of view (FOV) master camera and the pan-tilt (PT) settings of a steerable narrow FOV slave camera assume that the target is travelling on a plane. As the target travels through the FOV of the master camera, the slave camera's PT settings are then adjusted to keep the target centered within its FOV. In this paper, we describe a system we have developed that allows both cameras to move and extract the 3D coordinates of the target. This is done with only a single initial calibration between pairs of cameras and high-resolution pan-tilt-zoom (PTZ) platforms. Using the information from the PT settings of the PTZ platform as well as the precalibrated settings from a preset zoom lens, the 3D coordinates of the target are extracted and compared to those of a laser range finder and to the accuracy of a static-dynamic camera pair.
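One way two moving PTZ cameras can yield target coordinates is to turn each camera's pan/tilt reading into a viewing ray and triangulate; the sketch below shows that geometric step only, and the pan/tilt axis convention is an assumption rather than the paper's calibration model.

```python
import numpy as np

def pt_to_ray(pan_rad, tilt_rad):
    """Unit viewing direction from pan/tilt angles; the axis convention
    (pan about +Z, tilt from the horizontal plane) is an assumption."""
    return np.array([np.cos(tilt_rad) * np.cos(pan_rad),
                     np.cos(tilt_rad) * np.sin(pan_rad),
                     np.sin(tilt_rad)])

def triangulate(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two viewing rays
    (camera centers c1, c2 and unit directions d1, d2).
    Assumes the rays are not parallel."""
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    s = (b * e - c * d) / denom   # parameter along ray 1
    t = (a * e - b * d) / denom   # parameter along ray 2
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))
```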
Abstract: This paper proposes a self-position estimation algorithm for multiple mobile robots; each robot uses two omnidirectional cameras and an accelerometer. In recent years, the Great East Japan Earthquake and other large-scale disasters have occurred frequently in Japan. Consequently, the development of search robots that support rescue teams performing relief activities in large-scale disasters is indispensable. This research has therefore developed a search robot group system with two or more mobile robots. In this research, each search robot is equipped with two omnidirectional cameras and an accelerometer. In order to perform distance measurement using two omnidirectional cameras, the parameters of each omnidirectional camera and the position and posture between the two omnidirectional cameras have to be calibrated in advance. If there are few mobile robots, the calibration time of each omnidirectional camera does not pose a problem. However, if the calibration is performed separately when using two or more robots at a disaster site, it will take a huge amount of calibration time. This paper therefore proposes an algorithm that estimates a mobile robot's position and the parameters of the position and posture between the two omnidirectional cameras simultaneously. The proposed algorithm extends the Nonlinear Transformation (NLT) method. Simulation experiments were conducted to check the validity of the proposed algorithm. In some simulation experiments, one mobile robot moves and observes the circumference of another mobile robot that has stopped at a certain place. It was verified whether the mobile robot can estimate its position from the measurements when the number of observations reaches 10 at observation intervals of n/18. The simulation results show the effectiveness of the algorithm.
Abstract: The large-scale TV program, Gems of the Country, has had several airings on prime time CCTV, and has been warmly received each time, winning the unanimous praise of viewers. The program actively promotes Chinese national culture, boosting national morale and bringing the splendid culture of China to the world. The initiator, chief planner and chief director of this program is Li Dongge, a graduate of the Xi'an University of
Funding: Partially supported by a Grant-in-Aid for Scientific Research (A) (No. 25240004).
Abstract: This paper presents a method for estimating the spatiotemporal distribution of pedestrians by using watch cameras. We estimate the distribution in the Umeda underground mall without tracking technology, so pedestrians' privacy is protected. Lately, the spatiotemporal distribution of pedestrians has become increasingly important in fields such as urban planning, disaster prevention planning, and marketing. Although many researchers have tried to capture location information with various sensors, some problems remain, such as the cost of the sensors and the restriction that only people carrying the required device can be captured. Against this background, we develop an original labelling algorithm and estimate the spatiotemporal distribution of pedestrians, together with their passing times and directions, from sequential images of a watch camera.
Abstract: AIM: To explore the feasibility of dual camera capsule (DCC) small-bowel (SB) imaging and to examine whether the two cameras complement each other to detect more SB lesions. METHODS: Forty-one eligible, consecutive patients underwent DCC SB imaging. Two experienced investigators examined the videos and compared the total number of detected lesions with the number of lesions detected by each camera separately. Examination tolerability was assessed using a questionnaire. RESULTS: One patient was excluded. The DCC cameras detected 68 positive findings (POS) in 20 (50%) cases. Fifty of them were detected by the "yellow" camera, 48 by the "green" camera and 28 by both cameras; 44% (n = 22) of the "yellow" camera's POS were not detected by the "green" camera, and 42% (n = 20) of the "green" camera's POS were not detected by the "yellow" camera. In two cases, only one camera detected significant findings. All participants had 216 findings of unknown significance (FUS). The "yellow", "green" and both cameras detected 171, 161, and 116 FUS, respectively; 32% (n = 55) of the "yellow" camera's FUS were not detected by the "green" camera, and 28% (n = 45) of the "green" camera's FUS were not detected by the "yellow" camera. There were no complications related to the examination, and 97.6% of the patients would repeat the examination if necessary. CONCLUSION: DCC SB examination is feasible and well tolerated. The two cameras complement each other to detect more SB lesions.
Funding: National Natural Science Foundation of China (61732016).
Abstract: Three-dimensional (3D) modeling is an important topic in computer graphics and computer vision. In recent years, the introduction of consumer-grade depth cameras has resulted in profound advances in 3D modeling. Starting with the basic data structure, this survey reviews the latest developments of 3D modeling based on depth cameras, including research on camera tracking, 3D object and scene reconstruction, and high-quality texture reconstruction. We also discuss future work and possible solutions for 3D modeling based on the depth camera.
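The basic data structure such surveys start from is the depth image; as background, a minimal sketch of back-projecting a depth image into a point cloud with pinhole intrinsics is shown below. The intrinsic parameters are assumed inputs and the function is illustrative, not taken from the survey.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into a 3-D point cloud using
    pinhole intrinsics; the intrinsic values passed in are assumptions."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels
```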
Funding: National Natural Science Foundation of China (11805269); West Light Talent Training Plan of the Chinese Academy of Sciences (2022-XBQNXZ-010); Science and Technology Innovation Leading Talent Project of Xinjiang Uygur Autonomous Region (2022TSYCLJ0042).
Abstract: γ-rays are widely and abundantly present in strong nuclear radiation environments, and when they act on the camera equipment used to obtain environmental visual information on nuclear robots, radiation effects occur that degrade the performance of the camera system, reduce the imaging quality, and may even cause catastrophic consequences. Color reducibility is an important index for evaluating the imaging quality of a color camera, but its degradation mechanism in a nuclear radiation environment is still unclear. In this paper, γ-ray irradiation experiments on CMOS cameras were carried out to analyse how the camera's color reducibility degrades with cumulative irradiation and to reveal the degradation mechanism of the color information of the CMOS camera under γ-ray irradiation. The results show that, after irradiation, the spectral response of the CMOS image sensor (CIS) and the spectral transmittance of the lens affect the values of a* and b* in the LAB color model, while the full well capacity (FWC) of the CIS and the transmittance of the lens affect the value of L*. These changes increase the color difference and reduce the brightness, and the combined effect of color-difference and brightness degradation reduces the color reducibility of CMOS cameras. Therefore, the degradation of the color information of the CMOS camera after γ-ray irradiation mainly comes from changes in the FWC and spectral response of the CIS and in the spectral transmittance of the lens.
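The color difference this degradation produces can be quantified with the standard CIE76 formula over (L*, a*, b*); the sketch below shows that calculation with made-up pre- and post-irradiation values, which are illustrative numbers only.

```python
import numpy as np

def delta_e_ab(lab_ref, lab_meas):
    """CIE76 color difference between reference and measured LAB values.
    lab_* : (L*, a*, b*) triples. A larger delta-E means worse color reducibility."""
    return float(np.linalg.norm(np.asarray(lab_meas, float) - np.asarray(lab_ref, float)))

# Example: a post-irradiation shift in a* and b* with a small drop in L*.
de = delta_e_ab((65.0, 18.0, -5.0), (62.5, 21.0, -1.5))
```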
Funding: Sponsored by the National Natural Science Foundation of China (Grant No. 60902067) and the Foundation for Science & Technology Research Project of Jilin Province (Grant No. 11ZDGG001).
Abstract: Due to the electronic rolling shutter, high-speed Complementary Metal-Oxide Semiconductor (CMOS) aerial cameras are generally subject to geometric distortions, which cannot be perfectly corrected by conventional vision-based algorithms. In this paper we propose a novel approach to address the problem of rolling shutter distortion in aerial imaging. A mathematical model is established by the coordinate transformation method. It can directly calculate the pixel distortion when an aerial camera is imaging at arbitrary attitude angles. All pixel distortions then form a distortion map over the whole CMOS array, and the map is exploited in the image rectification process incorporating reverse projection. The error analysis indicates that, within the margin of measuring errors, the final calculation error of our model is less than 1/2 pixel. The experimental results show that our approach yields good rectification performance on a series of images with different distortions. We demonstrate that our method outperforms other vision-based algorithms in terms of computational complexity, which makes it more suitable for real-time aerial imaging.
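The paper's full coordinate-transformation model is not reproduced in the abstract; a simplified first-order sketch of how a per-row distortion map can be built from the camera's angular rate and the row readout delay is given below, under a small-angle assumption and with illustrative parameter names.

```python
import numpy as np

def rolling_shutter_shift(row, omega_rad_s, t_row, focal_px):
    """Approximate in-image displacement (pixels) accumulated by the time a
    given row is read out, for a camera rotating at omega_rad_s (rad/s).
    t_row is the per-row readout time; a small-angle approximation is assumed."""
    dt = row * t_row                      # readout delay of this row w.r.t. row 0
    return focal_px * omega_rad_s * dt    # shift ~ f * delta_theta for small angles

def distortion_map(n_rows, omega_rad_s, t_row, focal_px):
    """Per-row shift map that a rectification step can invert (reverse projection)."""
    rows = np.arange(n_rows)
    return rolling_shutter_shift(rows, omega_rad_s, t_row, focal_px)
```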
Funding: Funded by the Natural Science Foundation of Jiangsu Province (No. BK2012389), the National Natural Science Foundation of China (Nos. 71303110, 91024024), and the Foundation of Graduate Innovation Center in NUAA (Nos. kfjj201471, kfjj201473).
Abstract: To track humans across non-overlapping cameras at depression angles for applications such as multi-airplane visual human tracking and urban multi-camera surveillance, an adaptive human tracking method is proposed, focusing on both feature representation and the human tracking mechanism. The feature representation describes an individual by using both improved local appearance descriptors and statistical geometric parameters. The improved feature descriptors can be extracted quickly and make the human feature more discriminative. The adaptive human tracking mechanism is based on the feature representation, and it arranges the human image blobs in the field of view into a matrix. Primary appearance models are created to include the maximum inter-camera appearance information captured from different visual angles. The persons appearing in a camera are first filtered by the statistical geometric parameters. Then the one among the filtered persons who has the maximum matching score with the primary models is determined to be the target person. Subsequently, the image blobs of the target person are used to update and generate new primary appearance models for the next camera, making the method robust to visual angle changes. Experimental results prove the excellence of the feature representation and show the good generalization capability of the tracking mechanism as well as its robustness to varying conditions.
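A bare-bones sketch of the matching step, filtering candidates by geometric parameters and then picking the candidate with the highest appearance similarity to the primary models, is given below; the cosine-similarity measure and the data layout are assumptions, not the paper's improved descriptors.

```python
import numpy as np

def hist_similarity(h1, h2):
    """Cosine similarity between two normalized appearance histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12))

def match_person(candidates, models, geom_ok):
    """candidates: list of (geometry, histogram) blobs from the new camera.
    models: appearance histograms of the target from earlier cameras.
    geom_ok: predicate standing in for the statistical geometric filter.
    Returns the index and score of the best-matching candidate."""
    best_idx, best_score = -1, -1.0
    for i, (geom, hist) in enumerate(candidates):
        if not geom_ok(geom):
            continue  # rejected by the geometric parameters
        score = max(hist_similarity(hist, m) for m in models)
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score
```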
Funding: The Czech Science Foundation under contract 202/05/0728.
Abstract: Mixing of a thermal plasma jet with the surrounding atmosphere was studied using two CCD cameras (PCO SensiCam) set up to simultaneously detect the radiation of argon and nitrogen. Evaluation of the image differences between the two records showed that the location of regions on the plasma jet boundaries characterised by stronger nitrogen radiation changes with the plasma flow rate. Close-to-laminar flow results in a small mixing rate and consequently low nitrogen optical emission at the plasma jet boundaries. Increasing the flow rate leads to the formation of a relatively thick and stable layer on the boundaries characterised by strong nitrogen radiation. Further enhancement of the flow rate results in the formation of unstable regions of excited nitrogen molecules moving along the jet.
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2018YFB0504302), the Youth Innovation Promotion Association of the Chinese Academy of Sciences, and the National Natural Science Foundation of China (Grant Nos. 11701545, 11971466, and 11991021).
Abstract: A traditional single-pixel camera needs a large number of measurements to reconstruct the object with compressive sensing computation. Compared with the 1/0 matrices used in classical measurement, the 1/-1 matrices used in complementary measurement have better properties for reconstruction computation and return better reconstruction results. However, each row of a 1/-1 matrix needs two measurements with the traditional single-pixel camera, which doubles the number of measurements compared with the 1/0 matrices. In this paper, we consider the pseudo complementary measurement, which takes only as many measurements as the number of rows of a properly designed 1/0 matrix, computes the total luminous flux of the object from them, and derives the measurement data of the corresponding 1/-1 matrix mathematically. Numerical simulations and experimental results show that the pseudo complementary measurement is an efficient tool for traditional single-pixel camera imaging at low measurement rates; it combines the advantages of the classical and complementary measurements and significantly improves the peak signal-to-noise ratio.
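The mathematical derivation behind the pseudo complementary measurement can be sketched as follows: if M is a 1/0 mask and P = 2M - 1 the corresponding 1/-1 pattern, then <P, x> = 2<M, x> - <1, x>, so the 1/-1 data follow from the 1/0 data and the total flux. In this sketch the total flux comes from a separate all-ones measurement for clarity, whereas the paper obtains it from a properly designed 1/0 matrix.

```python
import numpy as np

def pseudo_complementary(y_binary, total_flux):
    """Derive the measurements of the 1/-1 patterns from those of the
    corresponding 1/0 patterns: for P = 2*M - 1,
        <P, x> = 2*<M, x> - <1, x> = 2*y_binary - total_flux,
    so no extra physical 1/-1 measurements are needed."""
    return 2.0 * np.asarray(y_binary, float) - float(total_flux)

# Tiny check with a synthetic scene x and random 0/1 masks M.
rng = np.random.default_rng(1)
x = rng.uniform(size=64)                   # flattened object
M = rng.integers(0, 2, size=(16, 64))      # 1/0 measurement matrix
y_binary = M @ x                           # what the single-pixel camera records
total_flux = x.sum()                       # here: one extra all-ones measurement
y_pm = pseudo_complementary(y_binary, total_flux)
assert np.allclose(y_pm, (2 * M - 1) @ x)  # matches the true 1/-1 measurements
```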
Abstract: The digital still camera is a typical tool for capturing digital images. With the development of IC technology and optimization algorithms, the performance of digital still cameras (DSCs) is becoming more and more powerful. But can we obtain more and better information by combining the outputs of multiple digital still cameras? Experiments show that the answer is yes. By using multiple DSCs at different angles, various kinds of 3-D information about the object are obtained.
Funding: Sponsored by the National High Technology Research and Development Program of China (Grant No. 863-2-5-1-13B) and the Jilin Province Science and Technology Development Plan (Grant No. 20130522107JH).
Abstract: To enhance the image motion compensation accuracy of off-axis three-mirror anastigmatic (TMA) three-line array aerospace mapping cameras, a new method of image motion velocity field modeling is proposed in this paper. First, based on the imaging principle of mapping cameras, an analytical expression for the image motion velocity of off-axis TMA three-line array aerospace mapping cameras is deduced from the coordinate systems we established and the attitude dynamics principle. Then, the case of a three-line array mapping camera is studied, in which the focal plane image motion velocity fields of the forward-view camera, the nadir-view camera and the backward-view camera are simulated, and optimization schemes for image motion velocity matching and drift angle matching are formulated according to the simulation results. Finally, this method is verified with a dynamic imaging experimental system. The results indicate that when image motion compensation for the nadir-view camera is conducted using the proposed image motion velocity field model, the line pairs of the target images at the Nyquist frequency are clear and distinguishable. Under the constraint that the modulation transfer function (MTF) decreases by no more than 5%, when the horizontal frequencies of the forward-view camera and the backward-view camera are adjusted uniformly according to the proposed image motion velocity matching scheme, the time delay integration (TDI) stages reach 6 at most. When the TDI stages exceed 6, the three groups of cameras independently undergo horizontal frequency adjustment. However, when the proposed drift angle matching scheme is adopted for uniform drift angle adjustment, the number of TDI stages will not exceed 81. The experimental results demonstrate the validity and accuracy of the proposed image motion velocity field model and matching optimization scheme, providing a reliable basis for on-orbit image motion compensation of aerospace mapping cameras.
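For intuition about the quantities being matched, the sketch below gives the first-order image motion velocity of a nadir-viewing camera and the TDI line rate that compensates it; the formulas and numerical values are simplifications and assumptions, not the paper's full velocity-field model.

```python
def image_motion_velocity(v_ground_m_s, altitude_m, focal_m):
    """First-order image motion velocity (m/s on the focal plane) of a
    nadir-viewing camera; the paper's full model also accounts for attitude
    dynamics and the off-axis viewing geometry of the TMA design."""
    return focal_m * v_ground_m_s / altitude_m

def tdi_line_rate(v_image_m_s, pixel_pitch_m):
    """Line (horizontal) frequency that keeps the image stationary on the
    TDI stages: one line advance per pixel of image motion."""
    return v_image_m_s / pixel_pitch_m

# Illustrative numbers only (not from the paper):
v_img = image_motion_velocity(v_ground_m_s=6800.0, altitude_m=500e3, focal_m=1.7)
rate  = tdi_line_rate(v_img, pixel_pitch_m=7e-6)   # lines per second
```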
Abstract: Background: Taxicab drivers have high homicide rates compared to all worker occupations. To help taxi fleets select effective taxicab security cameras, this project tested eight sample taxicab security cameras to determine their photographic quality, which correlates with the effectiveness of in-taxicab facial identification. Methods: Five photographic quality metric thresholds: 1) resolution, 2) highlight dynamic range, 3) shadow dynamic range, 4) lens distortion, and 5) shutter speed, were employed to evaluate the photographic quality of the sample cameras. Waterproof tests and fire-resistance tests on recording memory cards were conducted to determine the memory card survivability in water and simulated fire. Results: The Full-HD (1920 × 1080 pixels), HD (1280 × 720 pixels) and dual-lens VGA (2 × 640 × 480 pixels with wide-angle and telephoto lenses) cameras performed well in resolution tests in daylight conditions. The resolution of a single-lens VGA (640 × 480 pixels) camera did not meet the minimum resolution requirements. All of the recording memory cards passed the five-meter/72-hour waterproof test. A fire-resistant chamber made with one fire insulation material could protect a single memory card at 538°C/1000°F for a five-minute simulated fire test. Conclusions: Single-lens VGA-resolution (640 × 480 pixels) cameras are not suggested for use as security cameras in taxicabs with two or more rows of seats. The recording memory cards can survive 5-meter/72-hour waterproof tests. The memory card chamber built with an existing heat insulation material can protect an individual memory card during a 538°C (1000°F)/5-minute fire-resistance oven test.