Camouflaged people are highly skilled at concealing themselves by exploiting cover and the surrounding environment. Despite advances in optical detection through imaging systems, including spectral, polarization, and infrared technologies, there is still no effective real-time method for accurately detecting small, efficiently camouflaged people in complex real-world scenes. This study proposes a snapshot multispectral image-based camouflage detection model, multispectral YOLO (MS-YOLO), which uses the SPD-Conv and SimAM modules to represent targets effectively and suppress background interference by exploiting spatial-spectral target information. The study also constructs the first real-shot multispectral camouflaged people dataset (MSCPD), which encompasses diverse scenes, target scales, and poses. To minimize information redundancy, MS-YOLO selects an optimal subset of 12 bands with strong feature representation and minimal inter-band correlation as input. In experiments on the MSCPD, MS-YOLO achieves a mean Average Precision of 94.31% and real-time detection at 65 frames per second, confirming the effectiveness and efficiency of the method for detecting camouflaged people in typical desert and forest scenes. The approach offers valuable support for improving the perception capabilities of unmanned aerial vehicles in detecting enemy forces and rescuing personnel on the battlefield.
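To make the band-selection step concrete, the following is a minimal sketch that greedily keeps bands combining high variance (a crude stand-in for "strong feature representation") with low correlation to already-selected bands. The cube shape, the variance proxy, and the greedy rule are illustrative assumptions, not the criterion actually used by MS-YOLO.

```python
# Hypothetical correlation-based band selection for a multispectral cube
# of shape (bands, height, width); all heuristics here are illustrative.
import numpy as np

def select_bands(cube: np.ndarray, k: int = 12) -> list:
    bands = cube.reshape(cube.shape[0], -1)      # flatten each band to a vector
    corr = np.abs(np.corrcoef(bands))            # inter-band correlation magnitude
    salience = bands.var(axis=1)                 # crude "feature strength" proxy
    selected = [int(np.argmax(salience))]        # start from the most variable band
    while len(selected) < k:
        # penalize candidates that correlate strongly with any selected band
        penalty = corr[:, selected].max(axis=1)
        score = salience - salience.max() * penalty
        score[selected] = -np.inf                # never re-pick a selected band
        selected.append(int(np.argmax(score)))
    return sorted(selected)

if __name__ == "__main__":
    demo_cube = np.random.rand(25, 64, 64)       # synthetic 25-band image
    print(select_bands(demo_cube, k=12))
```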
An extreme ultraviolet solar corona multispectral imager allows direct observation of high-temperature coronal plasma, which is related to solar flares, coronal mass ejections and other significant coronal activities. This manuscript proposes a novel end-to-end computational design method for an extreme ultraviolet (EUV) solar corona multispectral imager operating at wavelengths near 100 nm, including a stray light suppression design and computational image recovery. To suppress the strong stray light from the solar disk, an outer opto-mechanical structure is designed to protect the imaging component of the system. Considering the low reflectivity (less than 70%) and strong scattering (roughness) of existing extreme ultraviolet optical elements, the imaging component comprises only a primary mirror and a curved grating. A Lyot aperture is used to further suppress any residual stray light. Finally, a deep learning computational imaging method is used to correct the individual multi-wavelength images from the original recorded multi-slit data. The design achieves a far-field angular resolution below 7" and a spectral resolution below 0.05 nm. The field of view is ±3 R_(☉) along the multi-slit moving direction, where R_(☉) represents the radius of the solar disk. The ratio of the corona's stray light intensity to the solar center's irradiation intensity is less than 10^(-6) at the circle of 1.3 R_(☉).
This paper emphasizes a faster digital processing time while presenting an accurate method for identifying spine fractures in X-ray images. The study focuses on efficiency by utilizing several methods, including image segmentation, feature reduction, and image classification. Two elements are investigated to reduce the classification time: feature reduction software and the capabilities of sophisticated digital processing hardware. The researchers use different algorithms for image enhancement, including the Wiener and Kalman filters, and investigate two background correction techniques. The article presents a technique for extracting textural features and evaluates three image segmentation algorithms and three fractured-spine detection algorithms using the transform domain, Power Density Spectrum (PDS), and Higher-Order Statistics (HOS) for feature extraction. With an emphasis on reducing digital processing time, this all-encompassing method helps to create a simplified system for classifying spine fractures. A feature reduction program has been built to improve the processing speed of image classification. Overall, the proposed approach shows great potential for significantly reducing classification time in clinical settings where time is critical. Compared with other transform domains, the discrete cosine transform (DCT) of the texture features yielded an exceptional classification rate, and extracting features from the transform domain took less time. More capable hardware can also yield quicker execution times for the feature extraction algorithms.
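As an illustration of transform-domain texture features, the sketch below computes a 2-D DCT of a grayscale patch and summarizes low-, mid- and high-frequency zones with simple energy statistics. It assumes SciPy is available; the zone boundaries and statistics are illustrative, not the paper's exact feature set.

```python
# Illustrative DCT-domain texture features for a grayscale X-ray patch.
import numpy as np
from scipy.fft import dctn

def dct_texture_features(patch: np.ndarray) -> np.ndarray:
    coeffs = dctn(patch.astype(float), norm="ortho")                 # 2-D DCT-II
    h, w = coeffs.shape
    radius = np.add.outer(np.arange(h), np.arange(w)) / (h + w - 2)  # 0..1
    zones = [(0.0, 0.1), (0.1, 0.4), (0.4, 1.01)]                    # low / mid / high
    feats = []
    for lo, hi in zones:
        band = coeffs[(radius >= lo) & (radius < hi)]
        feats += [float(np.sum(band ** 2)), float(np.mean(np.abs(band)))]
    return np.asarray(feats)                                         # 6-D feature vector

if __name__ == "__main__":
    print(dct_texture_features(np.random.rand(64, 64)))
```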
Graph learning, when used as a semi-supervised learning (SSL) method, performs well for classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local maximum constraint on the active learning acquisition function, which determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739) while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to further increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improving accuracy, our approach can greatly reduce the number of labeled pixels needed to achieve the same level of accuracy as randomly selected labeled pixels.
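The core of the batch-selection rule can be sketched as follows: given per-pixel features and a precomputed acquisition score (e.g., model uncertainty), the batch is the set of pixels whose score is a local maximum over their k-nearest-neighbor graph neighborhood. This shows only the selection constraint; the graph SSL classifier and the Model Change acquisition function of the full pipeline are not shown, and the kNN graph construction is an assumption.

```python
# Sketch of "graph local maximum" batch selection for active learning.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_max_batch(features: np.ndarray, scores: np.ndarray, k: int = 8) -> np.ndarray:
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)              # idx[:, 0] is (normally) the point itself
    neighbor_scores = scores[idx[:, 1:]]          # acquisition values of the k neighbors
    is_local_max = scores >= neighbor_scores.max(axis=1)
    return np.flatnonzero(is_local_max)           # indices to query as one batch

if __name__ == "__main__":
    feats = np.random.rand(500, 10)               # e.g. per-pixel spectral features
    acq = np.random.rand(500)                     # stand-in acquisition values
    print(local_max_batch(feats, acq)[:20])
```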
Obtaining high precision is an important consideration for astrometric studies using images from the Narrow Angle Camera (NAC) of the Cassini Imaging Science Subsystem (ISS). Selecting the best centering algorithm is key to enhancing astrometric accuracy. In this study, we compared the accuracy of five centering algorithms: Gaussian fitting, the modified moments method, and three point-spread function (PSF) fitting methods (effective PSF (ePSF), PSFEx, and extended PSF (xPSF) from the Cassini Imaging Central Laboratory for Operations (CICLOPS)). We assessed these algorithms using 70 ISS NAC star field images taken with CL1 and CL2 filters across different stellar magnitudes. The ePSF method consistently demonstrated the highest accuracy, achieving precision below 0.03 pixels for stars of magnitude 8-9. Compared to the previously considered best, the modified moments method, the ePSF method improved overall accuracy by about 10% and 21% in the sample and line directions, respectively. Surprisingly, the xPSF model provided by CICLOPS had lower precision than the ePSF; the ePSF improves measurement precision by 23% and 17% in the sample and line directions, respectively, over the xPSF. This discrepancy might be attributed to the xPSF focusing on photometry rather than astrometry. These findings highlight the necessity of constructing PSF models specifically tailored for astrometric purposes in NAC images and provide guidance for enhancing astrometric measurements using ISS NAC images.
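For orientation, a plain moment-based centering step looks like the sketch below: subtract a background estimate, zero pixels below a threshold, and take the intensity-weighted centroid of a star stamp. The threshold rule is an assumption; the "modified moments" variant evaluated in the paper may differ in detail.

```python
# Sketch of moment-based star centering on a small image stamp.
import numpy as np

def moment_center(stamp: np.ndarray, nsigma: float = 3.0):
    bkg = np.median(stamp)                        # crude background level
    sigma = np.std(stamp)
    img = stamp - bkg
    img[img < nsigma * sigma] = 0.0               # keep only significant flux
    total = img.sum()
    if total <= 0:                                # nothing above threshold
        return (img.shape[1] - 1) / 2.0, (img.shape[0] - 1) / 2.0
    ys, xs = np.indices(img.shape)
    cx = float((xs * img).sum() / total)          # intensity-weighted column center
    cy = float((ys * img).sum() / total)          # intensity-weighted row center
    return cx, cy

if __name__ == "__main__":
    yy, xx = np.mgrid[0:15, 0:15]
    star = np.exp(-((xx - 7.3) ** 2 + (yy - 6.8) ** 2) / 4.0) + 0.01 * np.random.rand(15, 15)
    print(moment_center(star))                    # should be close to (7.3, 6.8)
```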
Image description lies at the intersection of computer vision and natural language processing and has important applications, including helping computers understand images and providing information for the visually impaired. This study presents an innovative approach employing deep reinforcement learning to enhance the accuracy of natural language descriptions of images. Our method focuses on refining the reward function in deep reinforcement learning, facilitating the generation of precise descriptions by aligning visual and textual features more closely. Our approach comprises three key architectures. First, it utilizes Residual Network 101 (ResNet-101) and Faster Region-based Convolutional Neural Network (Faster R-CNN) to extract average and local image features, respectively, followed by a dual attention mechanism for intricate feature fusion. Second, a Transformer model derives contextual semantic features from the textual data. Finally, descriptive text is generated by a two-layer long short-term memory network (LSTM), directed by the value and reward functions. Compared with an image description method that relies on deep learning alone, the Bilingual Evaluation Understudy (BLEU-1) score is 0.762, which is 1.6% higher, and the BLEU-4 score is 0.299; the Consensus-based Image Description Evaluation (CIDEr) score is 0.998, and the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) score is 0.552, an improvement of 0.36%. These results not only attest to the viability of our approach but also highlight its superiority in image description. Future research can explore integrating our method with other artificial intelligence (AI) domains, such as emotional AI, to create more nuanced and context-aware systems.
Algal blooms, the spread of algae on the surface of water bodies, have adverse effects not only on aquatic ecosystems but also on human life. The adverse effects of harmful algal blooms (HABs) necessitate a convenient solution for detection and monitoring. Unmanned aerial vehicles (UAVs) have recently emerged as a tool for algal bloom detection, efficiently providing on-demand images at high spatiotemporal resolutions. This study developed an image processing method for estimating algal bloom area from aerial images (obtained from the internet) captured by UAVs. As a remote sensing method for HAB detection, analysis, and monitoring, a combination of histogram and texture analyses was used to efficiently estimate the area of HABs. Statistical features such as entropy (using the Kullback-Leibler method) were emphasized with the aid of a gray-level co-occurrence matrix. The results showed that orthogonal images produced fewer errors, and a morphological filter best detected algal blooms in real time, with a precision of 80%. This study provides efficient image processing approaches using on-board UAV imagery for HAB monitoring.
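The texture-analysis ingredient can be sketched with a gray-level co-occurrence matrix (GLCM), as below: quantize a patch, build the GLCM, and read off contrast, homogeneity and entropy. This assumes scikit-image (≥ 0.19 for the graycomatrix spelling); thresholding these statistics into a bloom mask, and hence an area estimate, is only indicated, not tuned.

```python
# Sketch of GLCM texture statistics for an aerial water-surface patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_stats(gray_patch: np.ndarray) -> dict:
    q = (gray_patch / gray_patch.max() * 255).astype(np.uint8)   # 8-bit quantization
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))       # GLCM entropy
    return {
        "contrast": float(graycoprops(glcm, "contrast")[0, 0]),
        "homogeneity": float(graycoprops(glcm, "homogeneity")[0, 0]),
        "entropy": entropy,
    }

if __name__ == "__main__":
    patch = np.random.rand(128, 128)              # stand-in aerial image patch
    print(glcm_stats(patch))
```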
China's domestic animation industry is deeply rooted in its rich traditional cultural heritage. While continuously exploring and showcasing the unique charm of this heritage through storytelling, imagery, and spirit creation, Chinese animation also seamlessly integrates modern aesthetic characteristics and cultural values into its development in the new era. By adopting a contemporary perspective, it innovatively expresses the essence of traditional culture while striving to shape and present a reliable, admirable and respectable image of China. Chinese animation aims to create credible contemporary animation images by drawing inspiration from traditional Chinese cultural archetypes. It also seeks to revitalize admirable traditional cultural imagery using vibrant and prevailing ACGN visuals, while shaping credible characters with richer cultural connotations and crafting stories with more enchanting plots to convey the spiritual essence of the Chinese nation. By establishing reliable, admirable and respectable animation images, the Chinese animation industry strives to enhance its capability to "tell China's stories well and make the voice of China heard," and to better promote the image of China in the new era from a global perspective.
Clear cell renal cell carcinoma (ccRCC) represents the most frequent form of renal cell carcinoma (RCC), and accurate International Society of Urological Pathology (ISUP) grading is crucial for prognosis and treatment selection. This study presents a new deep network called Multi-scale Fusion Network (MsfNet), which aims to enable automatic ISUP grading of ccRCC from digital histopathology images. MsfNet overcomes the limitations of the traditional ResNet50 through multi-scale information fusion and dynamic allocation of channel quantity. The model was trained and tested using 90 Hematoxylin and Eosin (H&E) stained whole slide images (WSIs), which were all cropped into 320×320-pixel patches at 40× magnification. MsfNet achieved a micro-averaged area under the curve (AUC) of 0.9807 and a macro-averaged AUC of 0.9778 on the test dataset. Gradient-weighted Class Activation Mapping (Grad-CAM) visually demonstrated MsfNet's ability to distinguish and highlight abnormal areas more effectively than ResNet50. The t-Distributed Stochastic Neighbor Embedding (t-SNE) plot indicates our model can efficiently extract critical features from images, reducing the impact of noise and redundant information. The results suggest that MsfNet offers accurate ISUP grading of ccRCC in digital images, emphasizing the potential of AI-assisted histopathological systems in clinical practice.
As one of the carriers of human communication and interaction, images are prone to contamination by noise during transmission and reception, and this noise is often uncontrollable and unknown. How to denoise images contaminated by unknown noise has therefore become a research focus. To achieve blind denoising and restore images, this paper proposes an image processing method based on Root Mean Square Error (RMSE) that integrates multiple denoising filters. The method includes Wavelet Filtering, Gaussian Filtering, Median Filtering, Mean Filtering, Bilateral Filtering, Adaptive Bandpass Filtering, Non-local Means Filtering and Regularization Denoising, each suited to different types of noise. The method is applied to denoise images contaminated by blind noise sources, and the denoising effect is evaluated using RMSE: the smaller the RMSE, the better the denoising. The optimal result is selected by comparing the RMSE values of all methods. Experimental results demonstrate that the proposed method effectively denoises and restores images contaminated by blind noise sources.
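The selection logic can be sketched as below: run several candidate denoisers and keep the one with the smallest RMSE against a reference image. The reference is assumed to be available for evaluation; how a reference is obtained in the fully blind setting is not spelled out here, so this shows only the compare-and-select step.

```python
# Sketch of RMSE-based selection among candidate denoising filters.
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter, uniform_filter
from skimage.restoration import denoise_wavelet, denoise_bilateral, denoise_nl_means

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.sqrt(np.mean((a - b) ** 2)))

def best_denoise(noisy: np.ndarray, reference: np.ndarray):
    candidates = {
        "gaussian": gaussian_filter(noisy, sigma=1.0),
        "median": median_filter(noisy, size=3),
        "mean": uniform_filter(noisy, size=3),
        "wavelet": denoise_wavelet(noisy),
        "bilateral": denoise_bilateral(noisy),
        "nl_means": denoise_nl_means(noisy),
    }
    scores = {name: rmse(img, reference) for name, img in candidates.items()}
    best = min(scores, key=scores.get)            # smallest RMSE wins
    return best, scores[best], candidates[best]

if __name__ == "__main__":
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    noisy = np.clip(clean + 0.1 * np.random.randn(64, 64), 0.0, 1.0)
    name, err, _ = best_denoise(noisy, clean)
    print(name, round(err, 4))
```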
Multispectral image compression and encryption algorithms commonly suffer from issues such as low compression efficiency, lack of synchronization between the compression and encryption processes, and degradation of the intrinsic image structure. A novel approach is proposed to address these issues. First, a chaotic sequence is generated using the Lorenz three-dimensional chaotic map to initiate the encryption process; it is XORed with each spectral band of the multispectral image to complete the initial encryption. Then, a two-dimensional lifting 9/7 wavelet transform is applied to the processed image. Next, a key-sensitive Arnold scrambling technique is employed on the resulting low-frequency image, which effectively eliminates spatial redundancy in the multispectral image while enhancing the encryption. To further optimize the compression and encryption processes, fast Tucker decomposition is applied to the wavelet sub-band tensor, effectively removing both spectral redundancy and residual spatial redundancy. Finally, the core tensor and mode matrices obtained from the decomposition are entropy encoded, and real-time chaotic encryption is applied during encoding, effectively integrating compression and encryption. The results show that the proposed algorithm is suitable for applications with high requirements for compression and encryption, and it provides valuable insights for the development of compression and encryption in the multispectral field.
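The first stage (chaotic XOR encryption) can be sketched as follows: integrate the Lorenz system, quantize the trajectory to a byte keystream, and XOR it with each spectral band. The step size, initial state and quantization rule are illustrative choices, not the paper's parameters.

```python
# Sketch of Lorenz-keystream XOR encryption of a multispectral cube.
import numpy as np

def lorenz_keystream(n: int, state=(0.1, 0.0, 0.0), sigma=10.0, rho=28.0,
                     beta=8.0 / 3.0, dt=0.002, burn_in=3000) -> np.ndarray:
    x, y, z = state
    out = np.empty(n, dtype=np.uint8)
    for i in range(-burn_in, n):                  # discard the transient first
        dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= 0:
            out[i] = np.uint8(int(abs(x) * 1e6) % 256)   # quantize the trajectory
    return out

def xor_cipher(cube: np.ndarray, key=(0.1, 0.0, 0.0)) -> np.ndarray:
    flat = cube.astype(np.uint8).reshape(cube.shape[0], -1)
    stream = lorenz_keystream(flat.shape[1], state=key)
    return (flat ^ stream).reshape(cube.shape)    # same keystream applied to each band

if __name__ == "__main__":
    img = (np.random.rand(4, 32, 32) * 255).astype(np.uint8)   # 4-band toy image
    enc = xor_cipher(img)
    print(np.array_equal(img, xor_cipher(enc)))   # XOR twice restores the data
```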
The accurate and rapid estimation of canopy nitrogen content (CNC) in crops is key to optimizing in-season nitrogen fertilizer application in precision agriculture. However, determining CNC from field sampling data for leaf area index (LAI), canopy photosynthetic pigments (CPP; including chlorophyll a, chlorophyll b and carotenoids) and leaf nitrogen concentration (LNC) can be time-consuming and costly. Here we evaluated the use of high-precision unmanned aerial vehicle (UAV) multispectral imagery for estimating the LAI, CPP and CNC of winter wheat over the whole growth period. A total of 23 spectral features (SFs; five original spectral bands, 17 vegetation indices and the gray scale of the RGB image) and eight texture features (TFs; contrast, entropy, variance, mean, homogeneity, dissimilarity, second moment, and correlation) were selected as model inputs. Six machine learning methods, i.e., multiple stepwise regression (MSR), support vector regression (SVR), gradient boosting decision tree (GBDT), Gaussian process regression (GPR), back propagation neural network (BPNN) and radial basis function neural network (RBFNN), were compared for the retrieval of winter wheat LAI, CPP and CNC values, and a double-layer model was proposed for estimating CNC based on LAI and CPP. The results showed that inverting winter wheat LAI, CPP and CNC from the combination of SFs+TFs greatly improved the estimation accuracy compared with using only the SFs. The RBFNN and BPNN models outperformed the other machine learning models in estimating winter wheat LAI, CPP and CNC. The proposed double-layer models (R^(2)=0.67-0.89, RMSE=13.63-23.71 mg g^(-1), MAE=10.75-17.59 mg g^(-1)) performed better than the direct inversion models (R^(2)=0.61-0.80, RMSE=18.01-25.12 mg g^(-1), MAE=12.96-18.88 mg g^(-1)) in estimating winter wheat CNC. The best winter wheat CNC accuracy was obtained by the double-layer RBFNN model with SFs+TFs as inputs (R^(2)=0.89, RMSE=13.63 mg g^(-1), MAE=10.75 mg g^(-1)). These results can guide the accurate and rapid determination of winter wheat canopy nitrogen content in the field.
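The double-layer idea can be sketched as below: a first regressor maps the spectral + texture features to LAI and CPP, and a second regressor maps the predicted LAI and CPP to CNC. SVR with an RBF kernel is used here as a stand-in for the paper's RBF neural network, and the feature matrix and field-sampled targets are assumed inputs.

```python
# Sketch of a double-layer regression: features -> (LAI, CPP) -> CNC.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

def fit_double_layer(X, lai, cpp, cnc):
    layer1 = MultiOutputRegressor(SVR(kernel="rbf", C=10.0))
    layer1.fit(X, np.column_stack([lai, cpp]))    # features -> LAI, CPP
    layer2 = SVR(kernel="rbf", C=10.0)
    layer2.fit(layer1.predict(X), cnc)            # predicted LAI, CPP -> CNC
    return layer1, layer2

def predict_cnc(layer1, layer2, X):
    return layer2.predict(layer1.predict(X))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((120, 31))                     # 23 spectral + 8 texture features
    lai, cpp = rng.random(120) * 6, rng.random(120) * 4
    cnc = 3.0 * lai + 5.0 * cpp + rng.normal(0, 0.1, 120)   # synthetic targets
    l1, l2 = fit_double_layer(X, lai, cpp, cnc)
    print(predict_cnc(l1, l2, X[:3]))
```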
Dear Editor, This letter proposes to integrate a dendritic learnable network architecture with the Vision Transformer to improve the accuracy of image recognition. In this study, based on the theory of dendritic neurons in neuroscience, we design a network that is more practical for engineering use to classify visual features. Building on this, we propose a dendritic learning-incorporated vision Transformer (DVT), which outperforms other state-of-the-art methods on three image recognition benchmarks.
Feature extraction is the most critical step in the classification of multispectral images. The classification accuracy is mainly influenced by the feature sets selected to classify the image. In the past, handcrafted feature sets were used, which are not adaptive across image domains. To overcome this, an evolutionary learning method is developed to automatically learn the spatial-spectral features for classification. A modified Firefly Algorithm (FA), which achieves maximum classification accuracy with a reduced feature set size, is proposed to perform the feature selection. To extract the most efficient features from the data set, we use the 3-D discrete wavelet transform, which decomposes the multispectral image in all three dimensions (see the sketch below). For selecting spatial and spectral features, we studied three window-based approaches, namely overlapping window (OW-3DFS), non-overlapping window (NW-3DFS) and adaptive window cube (AW-3DFS), as well as a pixel-based technique. A fivefold Multiclass Support Vector Machine (MSVM) is used for classification. Experiments conducted on the Madurai LISS IV multispectral image show that the adaptive window approach increases the classification accuracy.
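The 3-D DWT step can be sketched as below: decompose a window cube (rows × columns × bands) into its eight wavelet subbands and summarize each with simple statistics. It assumes PyWavelets is available; the firefly-based feature selection and the MSVM classifier are not shown, and the chosen statistics are illustrative.

```python
# Sketch of 3-D discrete wavelet transform features for a window cube.
import numpy as np
import pywt

def dwt3d_features(window_cube: np.ndarray, wavelet: str = "db1") -> np.ndarray:
    coeffs = pywt.dwtn(window_cube.astype(float), wavelet)   # 8 subbands: 'aaa'..'ddd'
    feats = []
    for key in sorted(coeffs):
        band = coeffs[key]
        feats += [float(np.mean(np.abs(band))), float(np.mean(band ** 2))]
    return np.asarray(feats)                                  # 16-D feature vector

if __name__ == "__main__":
    cube = np.random.rand(8, 8, 4)                            # 8x8 window, 4 bands
    print(dwt3d_features(cube).shape)
```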
Nowadays, the COVID-19 virus disease is spreading rampantly. Some testing tools and kits are available for diagnosing the virus, but only in limited numbers. To diagnose the presence of disease from radiological images, automated COVID-19 diagnosis techniques are needed. Previous research has focused on enhancing AI (Artificial Intelligence) approaches that use X-ray images for detecting COVID-19. The most common symptoms of COVID-19 are fever, dry cough and sore throat, and these symptoms may lead to an increasingly severe type of pneumonia. Since medical imaging has recently not been recommended in Canada for critical COVID-19 diagnosis, computer-aided systems are implemented for the early identification of COVID-19, which aids in monitoring disease progression and thus decreases the death rate. Here, a deep learning-based automated method for feature extraction and classification is enhanced for the detection of COVID-19 from computed tomography (CT) images. The suggested method consists of three main processes: data preprocessing, feature extraction and classification. The approach integrates the union of deep features with the help of Inception 14 and VGG-16 models. Finally, a Multi-scale Improved ResNet (MSI-ResNet) classifier is developed to detect and classify the CT images into unique class labels. The experimental validation of the suggested method is carried out with the support of available open-source COVID-CT datasets consisting of 760 CT images. The experimental results reveal that the proposed approach offers greater performance with high specificity, accuracy and sensitivity.
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light image fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and a Transformer to achieve deep-level co-extraction of local and global features from the original image. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused image using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Astronomical imaging technologies are basic tools for the exploration of the universe, providing basic data for research in astronomy and space physics. The Soft X-ray Imager (SXI) carried by the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) aims to capture two-dimensional (2-D) images of the Earth's magnetosheath by using soft X-ray imaging. However, the observed 2-D images are affected by many noise factors that destroy the contained information, which is not conducive to the subsequent reconstruction of the three-dimensional (3-D) structure of the magnetopause. Analysis of SXI-simulated observation images shows that such damage cannot be evaluated with traditional restoration models. This makes it difficult to establish the mapping relationship between SXI-simulated observation images and target images using mathematical models. We propose an image restoration algorithm for SXI-simulated observation images that can recover large-scale structure information on the magnetosphere. The idea is to train a patch estimator by selecting noise-clean patch pairs with the same distribution through the Classification-Expectation Maximization algorithm, so that the restoration of an SXI-simulated observation image is estimated by the patch estimator, which establishes the mapping to the target image. The Classification-Expectation Maximization algorithm is used to select multiple patch clusters with the same distribution and then train different patch estimators so as to improve the accuracy of the estimator. Experimental results showed that our image restoration algorithm is superior to other classical image restoration algorithms in the SXI-simulated observation image restoration task, according to the peak signal-to-noise ratio and structural similarity. The restoration results of SXI-simulated observation images are used in the tangent fitting approach and the computed tomography approach to magnetospheric reconstruction, significantly improving the reconstruction results. Hence, the proposed technology may be feasible for processing SXI-simulated observation images.
The Soft X-ray Imager (SXI) is part of the scientific payload of the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission. SMILE is a joint science mission between the European Space Agency (ESA) and the Chinese Academy of Sciences (CAS) and is due for launch in 2025. SXI is a compact X-ray telescope with a wide field-of-view (FOV) capable of encompassing large portions of Earth's magnetosphere from the vantage point of the SMILE orbit. SXI is sensitive to the soft X-rays produced by the Solar Wind Charge eXchange (SWCX) process, which occurs when heavy ions of solar wind origin interact with neutral particles in Earth's exosphere. SWCX provides a mechanism for boundary detection within the magnetosphere, such as the position of Earth's magnetopause, because the solar wind heavy ions have a very low density in regions of closed magnetic field lines. The sensitivity of the SXI is such that it can potentially track movements of the magnetopause on timescales of a few minutes, and the orbit of SMILE will enable such movements to be tracked for segments lasting many hours. SXI is led by the University of Leicester in the United Kingdom (UK) with collaborating organisations on hardware, software and science support within the UK, Europe, China and the United States.
Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight. Nightglow emissions and scattered sunlight can also contribute to the background signal. To fully utilize such images in space science, background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust, flexible, and iteratively deselects outliers, such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model using the three far ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions, such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
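The iterative outlier-deselecting fit can be sketched as follows. The design matrix G (whose columns would be built from B-splines in time and spherical harmonics over the sphere) is taken as given; the loop repeatedly solves the linear inverse problem and drops data points lying well above the current model, so that bright auroral pixels stop influencing the dayglow background estimate. The one-sided threshold and the toy basis in the demo are assumptions for illustration.

```python
# Sketch of a robust background fit with iterative deselection of positive outliers.
import numpy as np

def robust_background_fit(G: np.ndarray, d: np.ndarray, n_iter: int = 10,
                          kappa: float = 2.0) -> np.ndarray:
    keep = np.ones(d.size, dtype=bool)
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        m, *_ = np.linalg.lstsq(G[keep], d[keep], rcond=None)  # fit on kept data only
        resid = d - G @ m
        sigma = np.std(resid[keep])
        keep = resid < kappa * sigma               # deselect only positive outliers
    return m

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 400)
    G = np.column_stack([np.ones_like(t), t, t ** 2])          # toy smooth basis
    d = G @ np.array([5.0, 2.0, -1.0]) + rng.normal(0, 0.1, t.size)
    d[::25] += 3.0                                             # sporadic "auroral" spikes
    print(robust_background_fit(G, d))                         # close to [5, 2, -1]
```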