A novel image fusion network framework with an autonomous encoder and decoder is proposed to improve the visual impression of fused images by improving the quality of infrared and visible light image fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge enhancement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original image. An edge enhancement module (EEM) is designed to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in different regions of the source images, thereby improving the contrast of the fused image. The encoder and the EEM extract features, which are then combined in the fusion layer to create a fused image using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The experimental results demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in both subjective and objective evaluations.
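The abstract does not spell out the modal maximum difference fusion strategy; the sketch below is only one plausible reading, in which each modality's regional weight grows with the local absolute difference between the infrared and visible feature maps. All function names, the window size, and the weighting rule are illustrative assumptions, not the paper's definition.

```python
import numpy as np
from scipy.ndimage import uniform_filter  # local (regional) averaging

def modal_max_difference_fusion(feat_ir, feat_vis, window=7):
    """Hypothetical 'modal maximum difference' rule: regions where the two
    modalities differ strongly are taken mostly from the locally stronger
    modality; similar regions are averaged."""
    # Local activity of each modality (mean absolute response in a window).
    act_ir = uniform_filter(np.abs(feat_ir), size=window)
    act_vis = uniform_filter(np.abs(feat_vis), size=window)
    # Local modal difference, normalised to [0, 1].
    diff = np.abs(act_ir - act_vis)
    diff = diff / (diff.max() + 1e-8)
    # Weight of the locally stronger modality rises with the modal difference.
    stronger_is_ir = act_ir >= act_vis
    w_ir = np.where(stronger_is_ir, 0.5 + 0.5 * diff, 0.5 - 0.5 * diff)
    return w_ir * feat_ir + (1.0 - w_ir) * feat_vis

if __name__ == "__main__":
    ir = np.random.rand(64, 64).astype(np.float32)
    vis = np.random.rand(64, 64).astype(np.float32)
    print(modal_max_difference_fusion(ir, vis).shape)
```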
Real-time, contact-free temperature monitoring in the low to medium range (30 ℃-150 ℃) has been extensively used in industry and agriculture, and is usually realized by costly infrared temperature detection methods. This paper proposes an alternative approach that extracts temperature information in real time from visible light images of the monitoring target using a convolutional neural network (CNN). A mean-square error of < 1.119 ℃ was reached in temperature measurements of the low to medium range using the CNN and the visible light images. Imaging angle and imaging distance do not affect the temperature detection using visible optical images by the CNN. Moreover, the CNN has a certain illuminance generalization ability, being capable of detecting temperature information from images that were collected under different illuminance levels and were not used for training. Compared to the conventional machine learning algorithms mentioned in the recent literature, this real-time, contact-free temperature measurement approach, which does not require any further image processing operations, facilitates temperature monitoring applications in the industrial and civil fields.
Funding: National Natural Science Foundation of China (Grant Nos. 61975072 and 12174173); Natural Science Foundation of Fujian Province, China (Grant Nos. 2022H0023, 2022J02047, ZZ2023J20, and 2022G02006).
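For readers who want a concrete picture of image-to-temperature regression, here is a minimal sketch of a CNN trained with a mean-square-error loss on visible-light images. The layer sizes, image resolution, and temperatures are placeholders; the paper's actual architecture and dataset are not described in the abstract.

```python
import torch
import torch.nn as nn

class TempRegressorCNN(nn.Module):
    """Minimal CNN mapping an RGB image to a single temperature value."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # regression output: temperature in °C

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TempRegressorCNN()
images = torch.rand(4, 3, 128, 128)                 # batch of visible-light images
temps = torch.tensor([[35.0], [80.0], [120.0], [60.0]])  # illustrative labels
loss = nn.MSELoss()(model(images), temps)           # mean-square error objective
loss.backward()
print(float(loss))
```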
We report an experimental demonstration of two-dimensional (2D) lensless ghost imaging with true thermal light. An electrodeless discharge lamp with a higher light intensity than the hollow cathode lamp used before is employed as a light source. The main problem encountered by the 2D lensless ghost imaging with true thermal light is that its coherence time is much shorter than the resolution time of the detection system. To overcome this difficulty we derive a method based on the relationship between the true and measured values of the second-order optical intensity correlation, by which means the visibility of the ghost image can be dramatically enhanced. This method would also be suitable for ghost imaging with natural sunlight.
Funding: National Natural Science Foundation of China (Grant Nos. 11204117, 11304007, and 60907031); China Postdoctoral Science Foundation (Grant No. 2013M540146); Fund from the Education Department of Liaoning Province, China (Grant No. L2012001); National Hi-Tech Research and Development Program of China (Grant No. 2013AA122902).
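The authors' correction for the short coherence time is not given in the abstract; the snippet below only illustrates the second-order correlation on which lensless ghost imaging relies, recovering the object from the covariance between a bucket (single-pixel) signal and the spatially resolved reference intensity. The 1D pseudo-thermal simulation is a stand-in, not the experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pix = 5000, 64
obj = np.zeros(n_pix); obj[20:30] = 1.0; obj[40:45] = 1.0   # 1D transmissive object

# Exponentially distributed speckle intensities mimic thermal light statistics.
speckle = rng.exponential(1.0, size=(n_frames, n_pix))
bucket = speckle @ obj          # single-pixel (bucket) detector behind the object
ref = speckle                   # spatially resolved reference-arm intensities

# Second-order correlation: G(x) = <I_b * I_r(x)> - <I_b><I_r(x)>
ghost = (bucket[:, None] * ref).mean(0) - bucket.mean() * ref.mean(0)
ghost /= ghost.max()
print(np.round(ghost[18:32], 2))   # peaks reproduce the object's transmission
```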
In order to improve the detail preservation and target information integrity of fused images from different sensors, an image fusion method based on the non-subsampled contourlet transform (NSCT) and the GoogLeNet neural network model is proposed. First, the images from the different sensors, i.e., the infrared and visible images, are each transformed by NSCT to obtain a low-frequency sub-band and a series of high-frequency sub-bands. Then, the high-frequency sub-bands are fused with the max regional energy selection strategy, the low-frequency sub-bands are input into the GoogLeNet neural network model to extract feature maps, and the fusion weight matrices are adaptively calculated from the feature maps. Next, the fused low-frequency sub-band is obtained by weighted summation. Finally, the fused image is obtained by the inverse NSCT. The experimental results demonstrate that the proposed method improves the visual effect of the image and achieves better performance in both edge retention and mutual information.
Funding: National Natural Science Foundation of China (No. 61301211); China Scholarship Council (No. 201906835017).
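As a concrete reading of the "max regional energy" rule for the high-frequency sub-bands, the sketch below keeps, per coefficient, the sub-band whose local energy in a small window is larger. The window size and the uniform averaging are assumptions; the paper's exact regional definition is not stated in the abstract.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_highfreq_max_regional_energy(h_ir, h_vis, window=3):
    """Per-coefficient selection of the high-frequency sub-band with the
    larger regional energy (squared coefficients averaged over a window)."""
    e_ir = uniform_filter(h_ir ** 2, size=window)    # regional energy maps
    e_vis = uniform_filter(h_vis ** 2, size=window)
    return np.where(e_ir >= e_vis, h_ir, h_vis)

h_ir = np.random.randn(128, 128)
h_vis = np.random.randn(128, 128)
print(fuse_highfreq_max_regional_energy(h_ir, h_vis).shape)
```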
Multi-source information can be obtained through the fusion of infrared images and visible light images, which carry complementary information. However, the existing methods for producing fused images suffer from drawbacks such as blurred edges, low contrast, and loss of details. Based on convolutional sparse representation and an improved pulse-coupled neural network, this paper proposes an image fusion algorithm that decomposes the source images into high-frequency and low-frequency sub-bands by the non-subsampled shearlet transform (NSST). The low-frequency sub-bands are fused by convolutional sparse representation (CSR), and the high-frequency sub-bands are fused by an improved pulse-coupled neural network (IPCNN) algorithm, which effectively solves the difficulty of setting the parameters of the traditional PCNN algorithm and improves the performance of sparse representation with detail injection. The results reveal that the proposed method has more advantages than the existing mainstream fusion algorithms in terms of visual effects and objective indicators.
Funding: National Natural Science Foundation of China (Grant No. 41505017).
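To make the PCNN step less abstract, here is a simplified pulse-coupled neural network used as a coefficient-selection rule: each sub-band coefficient drives a neuron, and the coefficient whose neuron fires more often is kept. The parameter values, kernel, and firing-count criterion are generic textbook choices, not the paper's IPCNN (whose point is precisely to set such parameters adaptively).

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_fire_counts(stimulus, iterations=30, beta=0.2,
                     alpha_theta=0.2, v_theta=20.0, v_l=1.0):
    """Simplified PCNN: returns how often each neuron fires over the run."""
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    y = np.zeros_like(stimulus)          # pulse outputs
    theta = np.ones_like(stimulus)       # dynamic thresholds
    fires = np.zeros_like(stimulus)
    for _ in range(iterations):
        link = v_l * convolve(y, kernel, mode="constant")      # linking input L
        u = stimulus * (1.0 + beta * link)                     # internal activity U
        y = (u > theta).astype(float)                          # pulse output Y
        theta = np.exp(-alpha_theta) * theta + v_theta * y     # threshold update
        fires += y
    return fires

def fuse_highfreq_pcnn(h_a, h_b):
    """Keep, per coefficient, the sub-band whose PCNN neuron fired more."""
    fa, fb = pcnn_fire_counts(np.abs(h_a)), pcnn_fire_counts(np.abs(h_b))
    return np.where(fa >= fb, h_a, h_b)

print(fuse_highfreq_pcnn(np.random.randn(64, 64), np.random.randn(64, 64)).shape)
```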
The high-frequency components in the traditional multi-scale transform method are approximately sparse and can represent different detail information. In the low-frequency component, however, very few coefficients lie around zero, so the low-frequency image information cannot be represented sparsely. The low-frequency component contains the main energy of the image and depicts its profile, and fusing it directly is not conducive to obtaining a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform can effectively extract the salient features of the low-frequency component. On the other hand, the multi-scale transform can extract high-frequency detail information at multiple scales and from diverse directions. The combination of the two methods is conducive to the acquisition of more characteristics and more accurate fusion results. Specifically, for the low-frequency component, a new type of top-hat transform is used to extract low-frequency features, and different fusion rules are then applied to fuse the low-frequency features and the low-frequency background; for the high-frequency components, the product-of-characteristics method is used to integrate the high-frequency detail information. Experimental results show that the proposed algorithm obtains more detailed information and clearer infrared targets than the traditional multi-scale transform methods. Compared with state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and efficacious, and its time consumption is significantly reduced.
Funding: National Natural Science Foundation of China (Grant No. 61402368); Aerospace Support Fund, China (Grant No. 2017-HT-XGD); Aerospace Science and Technology Innovation Foundation, China (Grant No. 2017 ZD 53047).
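For orientation, the classical morphological top-hat decomposition separates a low-frequency component into bright details, dark details, and a residual background. The paper uses a *new* top-hat transform whose definition is not given here; the sketch below is only the standard version, with an assumed structuring-element size.

```python
import numpy as np
from scipy.ndimage import white_tophat, black_tophat

def tophat_lowfreq_features(lowfreq, size=9):
    """Standard top-hat split of a low-frequency component into bright
    details, dark details, and the remaining background."""
    bright = white_tophat(lowfreq, size=size)   # bright salient features (e.g. warm targets)
    dark = black_tophat(lowfreq, size=size)     # dark salient features
    background = lowfreq - bright + dark        # background after removing/filling details
    return bright, dark, background

low = np.random.rand(128, 128)
bright, dark, bg = tophat_lowfreq_features(low)
print(bright.max(), dark.max(), bg.shape)
```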
Infrared-visible image fusion plays an important role in multi-source data fusion, with the advantage of integrating useful information from multi-source sensors. However, there are still challenges in target enhancement and visual improvement. To deal with these problems, a sub-regional infrared-visible image fusion method (SRF) is proposed. First, morphology and threshold segmentation are applied to extract the targets of interest in the infrared images. Second, the infrared background is reconstructed based on the extracted targets and the visible image. Finally, the target and background regions are fused using a multi-scale transform. Experimental results are obtained on public data for comparison and evaluation, and demonstrate that the proposed SRF has potential benefits over other methods.
Funding: China Postdoctoral Science Foundation (No. 2021M690385); National Natural Science Foundation of China (No. 62101045).
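A very rough sketch of the sub-regional idea follows: threshold the infrared image to find hot targets, clean the mask with morphology, then treat target and non-target regions differently. The threshold rule, structuring elements, and the plain averaging outside the targets are assumptions for illustration; the paper instead reconstructs the background and fuses regions with a multi-scale transform.

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_dilation

def srf_like_fusion(ir, vis, k=2.0):
    """Illustrative sub-regional fusion: infrared values inside extracted
    targets, simple averaging elsewhere."""
    thr = ir.mean() + k * ir.std()                    # bright-target threshold (assumed rule)
    mask = binary_opening(ir > thr, iterations=1)     # remove small speckles
    mask = binary_dilation(mask, iterations=2)        # include target borders
    fused = np.where(mask, ir, 0.5 * ir + 0.5 * vis)
    return fused, mask

ir = np.random.rand(120, 160); vis = np.random.rand(120, 160)
fused, mask = srf_like_fusion(ir, vis)
print(fused.shape, int(mask.sum()))
```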
Total green leaf area (GLA) is an important trait for agronomic studies. However, existing methods for estimating the GLA of individual rice plants are destructive and labor-intensive. A nondestructive method for estimating the total GLA of individual rice plants based on multi-angle color images is presented. Using the projected areas of the plant in the images, linear, quadratic, exponential and power regression models for estimating total GLA were evaluated. Tests demonstrated that the side-view projected area had a stronger relationship with the actual total leaf area than the top-view projected area, and power models fit better than the other models. In addition, the use of multiple side-view images was an efficient way of reducing the estimation error. The inclusion of the top-view projected area as a second predictor provided only a slight improvement of the total leaf area estimation. When the projected areas from multi-angle images were used, the estimated leaf area (ELA) from the power model and the actual leaf area had a high correlation coefficient (R² > 0.98), and the mean absolute percentage error (MAPE) was about 6%. The method is capable of estimating total leaf area in a nondestructive, accurate and efficient manner, and may be used for monitoring rice plant growth.
Funding: National Program on High Technology Development (2013AA102403); National Program for Basic Research of China (2012CB114305); National Natural Science Foundation of China (30921091, 31200274); Program for New Century Excellent Talents in University (No. NCET-10-0386); Fundamental Research Funds for the Central Universities (No. 2013PY034).
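To show what fitting a power model and reporting MAPE looks like in practice, here is a minimal sketch using log-log least squares on synthetic data. The fitting procedure and the data are illustrative assumptions, not the paper's rice measurements or its regression software.

```python
import numpy as np

def fit_power_model(projected_area, leaf_area):
    """Fit leaf_area ≈ a * projected_area**b by least squares in log-log space."""
    b, log_a = np.polyfit(np.log(projected_area), np.log(leaf_area), 1)
    return np.exp(log_a), b

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

# Synthetic example data (illustrative only).
rng = np.random.default_rng(1)
proj = rng.uniform(50, 500, 100)                        # side-view projected area
leaf = 2.3 * proj ** 1.1 * rng.normal(1.0, 0.05, 100)   # "true" total leaf area
a, b = fit_power_model(proj, leaf)
est = a * proj ** b                                     # estimated leaf area (ELA)
print(f"a={a:.2f}, b={b:.2f}, MAPE={mape(leaf, est):.1f}%")
```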
We experimentally demonstrate a novel ghost imaging experiment utilizing a classical light source, capable of resolving objects with high visibility. The experimental results show that our scheme can indeed realize ghost imaging with high visibility for a relatively complicated object composed of three near-ellipse-shaped holes with different dimensions. In our experiment, the largest hole is about 36 times the smallest one in area. Each of the three holes exhibits a visibility in excess of 80%. The high-visibility and high-spatial-resolution advantages of this technique could have applications in remote sensing.
Funding: National Basic Research Program of China (Grant No. 2012CB921900); National Natural Science Foundation of China (Grant Nos. 11534006, 11274183 and 11374166); National Scientific Instrument and Equipment Development Project (Grant No. 2012YQ17004).
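For reference, the per-hole visibility quoted above is conventionally computed from the reconstructed correlation signal as

$$ V = \frac{G_{\max} - G_{\min}}{G_{\max} + G_{\min}}, $$

where $G_{\max}$ and $G_{\min}$ are the ghost-image signal on and off the transmissive regions. This is the standard definition, not necessarily the exact estimator used by the authors; a value above 80% indicates strong suppression of the correlation background.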
China successfully launched FY-3D by a LM-4C carrier rocket from the Taiyuan Satellite Launch Center at 02:35 Beijing time on November 15. The mission also carried the HEAD-1 experiment satellite, which was developed by SAST. The LM-4C carrier rocket was developed by SAST. 22 technological improvements were made for this launch mission to meet the satellite's requirements and improve the flight reliability. So far,
Objective: To establish a 3D atlas of the lenticular nucleus and its subnuclei from the cryosection images of the male specimen of the "Atlas of Chinese Visible Human". Methods: The lenticular nucleus and its subnuclei were segmented from the cryosection images and reconstructed with the software
Integrating deformable mirrors within the optical train of an adaptive telescope was one of the major innovations in astronomical observation technology, distinguished by high optical throughput, a reduced number of optical surfaces, and the direct incorporation of the deformable mirror. Typically, voice-coil actuators are used, which require additional position sensors, internal control electronics, and cooling systems, leading to a very complex structure. Piezoelectric deformable secondary mirror technologies were proposed to overcome these problems. Recently, a high-order piezoelectric deformable secondary mirror (PDSM) has been developed and installed on the 1.8-m telescope at Lijiang Observatory in China to make it an adaptive telescope. The system consists of a 241-actuator piezoelectric deformable secondary mirror, a 192-sub-aperture Shack-Hartmann wavefront sensor, and a multi-core-based real-time controller. The actuator spacing of the PDSM measures 19.3 mm, equivalent to approximately 12.6 cm when mapped onto the primary mirror, significantly less than in voice-coil-based adaptive telescopes such as LBT, Magellan and VLT. As a result, stellar images with Strehl ratios above 0.49 in the R band have been obtained. To our knowledge, these are the highest R-band Strehl ratios captured by an adaptive telescope with a deformable secondary mirror. Here, we report the system description and on-sky performance of this adaptive telescope.
Funding: National Natural Science Foundation of China (Nos. 11733005, 12173041, 11727805); Youth Innovation Promotion Association, Chinese Academy of Sciences (No. 2020376); Frontier Research Fund of the Institute of Optics and Electronics, Chinese Academy of Sciences (No. C21K002).
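As a rough sanity check (not from the paper), the Maréchal approximation S ≈ exp[-(2πσ/λ)²] relates the reported R-band Strehl ratio to the residual RMS wavefront error; the 650 nm wavelength below is only a representative R-band value.

```python
import math

strehl = 0.49          # reported R-band Strehl ratio
wavelength_nm = 650.0  # assumed representative R-band wavelength

# Maréchal approximation: S ≈ exp(-(2*pi*sigma/lambda)**2)  =>  solve for sigma
sigma_nm = wavelength_nm / (2 * math.pi) * math.sqrt(-math.log(strehl))
print(f"residual RMS wavefront error ≈ {sigma_nm:.0f} nm")   # ≈ 87 nm
```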
This study, grounded in the Waxman fusion method, introduces an algorithm for the fusion of visible and infrared images tailored to a two-level lighting environment, inspired by the mathematical model of the visual receptive field of rattlesnakes and the mechanism of their two-mode cells. The research is presented in three parts. In the first part, we design a preprocessing module that judges the ambient light intensity and divides the lighting environment into two levels: day and night. The second part proposes two distinct network structures designed specifically for daytime and nighttime images. For daytime images, where visible light information is predominant, we feed the ON-VIS signal and the IR-enhanced visual signal into the central excitation and surrounding suppression regions of the ON-center receptive field in the B channel, respectively. Conversely, for nighttime images, where infrared information takes precedence, the ON-IR signal and the visual-enhanced IR signal are separately input into the central excitation and surrounding suppression regions of the ON-center receptive field in the B channel. The outcome is a pseudo-color fused image. The third part employs five different no-reference image quality assessment methods to evaluate the quality of thirteen sets of pseudo-color images produced by fusing infrared and visible information. These images are then compared with those obtained by six other methods cited in the relevant references. The empirical results indicate that this study's outcomes surpass the comparative results in terms of average gradient and spatial frequency. Only one or two sets of fused images underperformed in terms of standard deviation and entropy when compared to the control results, and four sets of fused images did not perform as well as the comparison in the QAB/F index. In conclusion, the fused images generated through the proposed method show superior performance in terms of scene detail, visual perception, and image sharpness when compared with their counterparts from other methods.
Funding: National Natural Science Foundation of China (NSFC) (Grant No. 61201368); Jilin Province Science and Technology Department key research and development project (Grant No. 20230201043GX).
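The routing logic described above can be summarized in a few lines. In the sketch below, the day/night decision by mean luminance with a fixed threshold, the "enhancement" operator (a weighted addition), and the 0.5 weight are all placeholders; the abstract only states *which* signals feed the centre and surround of the ON-center receptive field in each mode, not how they are computed.

```python
import numpy as np

def classify_lighting(vis_image, threshold=0.35):
    """Two-level lighting decision routing images to the day or night branch
    (mean normalised luminance with a fixed threshold is an assumption)."""
    return "day" if vis_image.mean() > threshold else "night"

def route_receptive_field_inputs(vis, ir, mode):
    """Return (centre, surround) inputs for the ON-centre receptive field of
    the B channel: by day, ON-VIS excites the centre and an IR-enhanced
    visible signal feeds the surround; by night the modalities swap roles."""
    if mode == "day":
        centre, surround = vis, np.clip(vis + 0.5 * ir, 0, 1)   # IR-enhanced VIS (placeholder operator)
    else:
        centre, surround = ir, np.clip(ir + 0.5 * vis, 0, 1)    # VIS-enhanced IR (placeholder operator)
    return centre, surround

vis = np.random.rand(64, 64); ir = np.random.rand(64, 64)
mode = classify_lighting(vis)
centre, surround = route_receptive_field_inputs(vis, ir, mode)
print(mode, centre.shape, surround.shape)
```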
Current fusion methods for infrared and visible images tend to extract features at a single scale, which results in insufficient detail and incomplete feature preservation. To address these issues, we propose an infrared and visible image fusion network based on multiscale feature learning and an attention mechanism (MsAFusion). A multiscale dilated convolution framework is employed to capture image features across various scales and broaden the perceptual scope. Furthermore, an attention network is introduced to enhance the focus on salient targets in infrared images and detailed textures in visible images. To compensate for information loss during convolution, jump connections are utilized during the image reconstruction phase. The fusion process uses a combined loss function consisting of pixel loss and gradient loss for unsupervised fusion of infrared and visible images. Extensive experiments on a dataset of electricity facilities demonstrate that our proposed method outperforms nine state-of-the-art methods in terms of visual perception and four objective evaluation metrics.
Funding: Project of the CSG Electric Power Research Institute (Grant No. SEPRI-K22B100).
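A minimal sketch of a combined pixel + gradient loss for unsupervised fusion follows. The Sobel gradient operator, the max-based reference targets, and the weight `lam` are common choices assumed here for illustration; the abstract does not give MsAFusion's exact loss terms.

```python
import torch
import torch.nn.functional as F

def gradient(img):
    """Gradient magnitude via Sobel filters (an assumed choice of operator)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def fusion_loss(fused, ir, vis, lam=10.0):
    """Pixel loss pulls the fused image toward the brighter source intensity;
    gradient loss pulls its edges toward the stronger source gradients."""
    pixel_loss = F.l1_loss(fused, torch.max(ir, vis))
    grad_loss = F.l1_loss(gradient(fused), torch.max(gradient(ir), gradient(vis)))
    return pixel_loss + lam * grad_loss

ir = torch.rand(2, 1, 64, 64); vis = torch.rand(2, 1, 64, 64)
fused = (ir + vis) / 2                        # stand-in for the network output
print(float(fusion_loss(fused, ir, vis)))
```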
A visible light imaging Thomson scattering (VIS-TVTS) diagnostic system has been developed for the measurement of the plasma electron temperature on the HT-7 tokamak. The system contains a Nd:YAG laser (λ = 532 nm, repetition rate 10 Hz, total pulse duration ≈ 10 ns, pulse energy > 1.0 J), a grating spectrometer, an image intensifier (I.I.) lens coupled with an electron multiplying CCD (EMCCD), and a data acquisition and analysis system. In this paper, the measurement capability of the system is analyzed. In addition to the performance of the system, its capability of measuring the plasma electron temperature has been demonstrated. The profile of the electron temperature is presented with a spatial resolution of about 0.96 cm (seven points) near the center of the plasma.
Funding: National Natural Science Foundation of China (Nos. 11075187, 11275233); National Magnetic Confinement Fusion Science Program of China (Nos. 2013GB112003, 2011GB101003).
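For orientation (standard incoherent Thomson scattering theory, not a result specific to this system), the electron temperature follows from the 1/e spectral half-width of the Doppler-broadened scattered light:

$$ \Delta\lambda_{1/e} = \frac{2\lambda_0}{c}\sin\!\frac{\theta}{2}\sqrt{\frac{2T_e}{m_e}} \quad\Longrightarrow\quad T_e = \frac{m_e c^2}{2}\left(\frac{\Delta\lambda_{1/e}}{2\lambda_0\sin(\theta/2)}\right)^{2}, $$

with $\lambda_0 = 532\,\mathrm{nm}$ the laser wavelength and $\theta$ the scattering angle. For example, a 1/e half-width of about 15 nm at $\theta = 90^\circ$ corresponds to $T_e \approx 100$ eV (illustrative numbers, not measured values from the paper).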
Steady-state plasma generated by electron cyclotron resonance (ECR) waves in the KT5D magnetized torus was studied using a fast high-resolution camera and Langmuir probes. It was found that both the discharge patterns taken by the camera and the plasma parameters measured by the probes were very sensitive to the working gas pressure and to the magnetic configuration of the torus, both without and with vertical fields. Fast vertical motion of the plasma was also observed. A tentative discussion is presented of the observed phenomena, such as the bright resonance layer at high gas pressure and the wave absorption mechanism at low pressure; further explanation is still needed.
Funding: National Science Foundation of China (Nos. 10235010, 10335060); funds from the Ministry of Education and the Academy of Science of China.
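For context, the bright resonance layer sits where the wave frequency matches the local electron cyclotron frequency (textbook ECR condition; the 2.45 GHz figure below is only a magnetron frequency commonly used for ECR discharges, taken as an example rather than the KT5D value):

$$ f_{ce} = \frac{eB}{2\pi m_e} \approx 28\ \mathrm{GHz\,T^{-1}} \times B, \qquad f_{ce} = 2.45\ \mathrm{GHz} \;\Rightarrow\; B \approx 0.0875\ \mathrm{T}. $$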
A wide-viewing-angle visible light imaging system (VLIS) was mounted on the Joint Texas Experimental Tokamak (J-TEXT) to monitor the discharge process. It is proposed that the plasma vertical displacement can be estimated from the recorded image data. In this paper, the installation and operation of the VLIS are presented in detail. The estimated result is further compared with that measured by an array of magnetic pickup coils, and their consistency verifies that estimating the plasma vertical displacement in J-TEXT from the imaging data is promising.
Funding: National 973 Project of China (No. 2008CB717805); National Natural Science Foundation of China (No. 50907029).
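One simple way to turn camera frames into a vertical-displacement signal is an intensity-weighted centroid along the vertical direction, sketched below. The centroid estimator and the coordinate calibration are assumptions; the J-TEXT processing chain is not described in the abstract.

```python
import numpy as np

def vertical_displacement(frame, z_coords):
    """Plasma vertical position as the intensity-weighted centroid of a
    visible-light frame along the vertical direction."""
    row_intensity = frame.sum(axis=1)                    # integrate over the horizontal axis
    return np.average(z_coords, weights=row_intensity)   # weighted centroid, in the units of z_coords

frame = np.random.rand(240, 320)                 # one camera frame (placeholder data)
z = np.linspace(-0.3, 0.3, frame.shape[0])       # assumed vertical coordinate of each row, in metres
print(f"estimated vertical position: {vertical_displacement(frame, z):+.3f} m")
```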
Two-dimensional (2D) transition metal dichalcogenides have been extensively studied due to their fascinating physical properties for constructing high-performance photodetectors. However, their relatively low responsivities, current on/off ratios and response speeds have hindered their widespread application. Herein, we fabricated a high-performance photodetector based on few-layer MoTe2 and CdS0.42Se0.58 flake heterojunctions. The photodetector exhibited a high responsivity of 7221 A/W, a large current on/off ratio of 1.73×10^4, a fast response speed of 90/120 μs, an external quantum efficiency (EQE) of up to 1.52×10^6%, and a detectivity (D*) of up to 1.67×10^15 Jones. The excellent performance of the heterojunction photodetector was analyzed by a photocurrent mapping test and first-principles calculations. Notably, a visible light imaging function was successfully attained with the MoTe2/CdS0.42Se0.58 photodetectors, indicating that the device has practical imaging application prospects. Our findings provide a reference for the design of ultrahigh-performance MoTe2-based photodetectors.
Funding: National Natural Science Foundation of China (Nos. 11864046 and 11764046); Basic Research Program of Yunnan Province (Nos. 202001AT070064 and 202101AT070124); Spring City Plan (High-level Talent Promotion and Training Project of Kunming) (No. 2022SCP005); Yunnan Expert Workstation (No. 202205AF150008).
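The quoted figures of merit are related by standard photodetector formulas, evaluated below for illustration. The photocurrent, optical power, dark current, device area, and wavelength in the example are placeholders, not values from the paper.

```python
import math

q = 1.602e-19      # elementary charge (C)
h = 6.626e-34      # Planck constant (J s)
c = 3.0e8          # speed of light (m/s)

def figures_of_merit(i_photo, p_opt, i_dark, area_cm2, wavelength_m):
    R = i_photo / p_opt                                            # responsivity (A/W)
    eqe = R * h * c / (q * wavelength_m)                           # external quantum efficiency (fraction)
    d_star = R * math.sqrt(area_cm2) / math.sqrt(2 * q * i_dark)   # shot-noise-limited detectivity (Jones)
    return R, eqe, d_star

# Placeholder operating point, for illustration only.
R, eqe, d_star = figures_of_merit(i_photo=7.2e-6, p_opt=1.0e-9,
                                  i_dark=1.0e-11, area_cm2=1.0e-4,
                                  wavelength_m=520e-9)
print(f"R = {R:.0f} A/W, EQE = {100*eqe:.2e} %, D* = {d_star:.2e} Jones")
```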
Image fusion is a key technology in the field of digital image processing. In the present study, an effect-based pseudo-color fusion model of infrared and visible images based on the rattlesnake vision imaging system (the rattlesnake bimodal cell fusion mechanism and the visual receptive field model) is proposed. The innovation of the proposed model lies in the following three features: first, the introduction of a simple mathematical model of the visual receptive field reduces computational complexity; second, an enhanced image is obtained by extracting the common information and unique information of the source images, which improves fusion image quality; and third, the typical Waxman fusion structure is improved for the pseudo-color image fusion model. The performance of the image fusion model is verified through comparative experiments. In the subjective visual evaluation, we find that the color of the fusion image obtained through the proposed model is natural and can highlight the target and scene details. In the objective quantitative evaluation, the best values on the four indicators, namely standard deviation, average gradient, entropy, and spatial frequency, account for 90%, 100%, 90%, and 100% of the test cases, respectively, indicating that the fusion image exhibits superior contrast, image clarity, information content, and overall activity. Experimental results reveal that the performance of the proposed model is superior to that of other models, verifying the validity and reliability of the model.
Funding: National Natural Science Foundation of China (NSFC) (Grant No. 61201368).
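A minimal sketch of the "common plus unique information" idea follows, assuming the frequently used convention that the common component is the pixel-wise minimum of the two registered, normalised sources and each unique component is the remainder. The operators and the enhancement factor are assumptions; the paper's exact definitions are not given in the abstract.

```python
import numpy as np

def common_unique(vis, ir):
    """Split two registered, normalised sources into shared and
    modality-specific parts: common = pixel-wise minimum, unique = source - common."""
    common = np.minimum(vis, ir)
    return common, vis - common, ir - common

vis = np.random.rand(64, 64); ir = np.random.rand(64, 64)
common, uniq_vis, uniq_ir = common_unique(vis, ir)
enhanced_vis = np.clip(common + 2.0 * uniq_vis, 0, 1)   # boost what only VIS sees (factor is illustrative)
print(common.mean(), uniq_vis.mean(), uniq_ir.mean())
```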
Most present research into facial expression recognition focuses on the visible spectrum, which is sensitive to illumination change. In this paper, we focus on integrating thermal infrared data with visible spectrum images for spontaneous facial expression recognition. First, the active appearance model (AAM) parameters and three defined head motion features are extracted from the visible spectrum images, and several thermal statistical features are extracted from the infrared (IR) images. Second, feature selection is performed using the F-test statistic. Third, Bayesian networks (BNs) and support vector machines (SVMs) are proposed for both decision-level and feature-level fusion. Experiments on the natural visible and infrared facial expression (NVIE) spontaneous database show the effectiveness of the proposed methods, and demonstrate the supplementary role of thermal IR images for visible facial expression recognition.
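To make the F-test selection and feature-level fusion steps concrete, here is a minimal pipeline that scores features with an ANOVA F-test, keeps the top ones, and classifies with an SVM. The synthetic feature matrix, its dimensions, and the number of selected features stand in for the real concatenated AAM, head-motion, and thermal features from NVIE.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for concatenated visible + thermal features (feature-level fusion).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 80))        # 300 samples, 80 fused features (illustrative)
y = rng.integers(0, 3, size=300)      # 3 expression classes (illustrative)

# F-test (ANOVA) feature selection followed by an SVM classifier.
clf = make_pipeline(SelectKBest(f_classif, k=20), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```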