Hyperspectral imagery encompasses spectral and spatial dimensions, reflecting the material properties of objects. Its application proves crucial in search and rescue, concealed target identification, and crop growth analysis. Clustering is an important method of hyperspectral analysis. The vast data volume of hyperspectral imagery, coupled with redundant information, poses significant challenges in swiftly and accurately extracting features for subsequent analysis. Current hyperspectral feature clustering methods, which mostly operate on the spatial or spectral domain alone, lack strong interpretability, which makes the resulting algorithms hard to understand. This research therefore introduces a feature clustering algorithm for hyperspectral imagery from an interpretability perspective. It begins with a simulated perception process, proposing an interpretable band selection algorithm to reduce data dimensionality. Following this, a multi-dimensional clustering algorithm, rooted in fuzzy and kernel clustering, is developed to highlight intra-class similarities and inter-class differences. An optimized P system is then introduced to enhance computational efficiency. This system coordinates all cells within a mapping space to compute optimal cluster centers, facilitating parallel computation. This approach diminishes sensitivity to initial cluster centers and augments global search capabilities, thus preventing entrapment in local minima and enhancing clustering performance. Experiments were conducted on 300 datasets comprising both real and simulated data. The results show that the average accuracy (ACC) of the proposed algorithm is 0.86 and the combination measure (CM) is 0.81.
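Because a clustering algorithm assigns arbitrary cluster ids, the reported ACC metric presupposes a matching between cluster ids and ground-truth classes. A minimal sketch of that evaluation step, assuming the usual best one-to-one mapping (the abstract does not state the exact matching scheme):

```python
# Sketch: clustering accuracy (ACC) via the best one-to-one label mapping.
# The brute-force permutation search is fine for the small label sets
# typical of hyperspectral land-cover clustering.
from itertools import permutations

def clustering_accuracy(true_labels, cluster_labels):
    """Best accuracy over all one-to-one mappings of cluster ids to class ids."""
    classes = sorted(set(true_labels))
    clusters = sorted(set(cluster_labels))
    best = 0.0
    for perm in permutations(classes, len(clusters)):
        mapping = dict(zip(clusters, perm))
        hits = sum(1 for t, c in zip(true_labels, cluster_labels)
                   if mapping[c] == t)
        best = max(best, hits / len(true_labels))
    return best
```

At realistic scale the Hungarian algorithm (e.g. scipy.optimize.linear_sum_assignment) replaces the brute-force permutation search.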
Hyperspectral images typically have high spectral resolution but low spatial resolution, which impacts the reliability and accuracy of subsequent applications, for example, remote sensing classification and mineral identification. In traditional methods based on deep convolutional neural networks, indiscriminately extracting and fusing spectral and spatial features makes it challenging to utilize the differentiated information across adjacent spectral channels. We therefore propose a multi-branch interleaved iterative upsampling hyperspectral image super-resolution reconstruction network (MIIUSR) to address the above problems. We reinforce spatial feature extraction by integrating detailed features from different receptive fields across adjacent channels. Furthermore, we propose an interleaved iterative upsampling process during the reconstruction stage, which progressively fuses incremental information among adjacent frequency bands. Additionally, we add two parallel three-dimensional (3D) feature extraction branches to the backbone network to extract spectral and spatial features of varying granularity. We further enhance the backbone network’s reconstruction results by leveraging the difference between two-dimensional (2D) channel-grouping spatial features and 3D multi-granularity features. The results obtained by applying the proposed network model to the CAVE test set show that, at a scaling factor of ×4, the peak signal-to-noise ratio, spectral angle mapping, and structural similarity are 37.310 dB, 3.525 and 0.9438, respectively. Moreover, extensive experiments conducted on the Harvard and Foster datasets demonstrate the superior potential of the proposed model in hyperspectral super-resolution reconstruction.
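Two of the reported quality metrics, peak signal-to-noise ratio and spectral angle mapping, can be reproduced in a few lines of numpy. This is a generic sketch assuming reflectance cubes of shape (H, W, bands) scaled to [0, 1], not the paper's exact evaluation code:

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB between reference and estimate."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def sam(ref, est, eps=1e-12):
    """Mean spectral angle in degrees between per-pixel spectra.

    ref, est: arrays of shape (H, W, bands); the angle is computed along
    the last (spectral) axis.
    """
    dot = np.sum(ref * est, axis=-1)
    denom = np.linalg.norm(ref, axis=-1) * np.linalg.norm(est, axis=-1) + eps
    angles = np.arccos(np.clip(dot / denom, -1.0, 1.0))
    return float(np.degrees(angles.mean()))
```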
The adulteration concentration of palm kernel oil (PKO) in virgin coconut oil (VCO) was quantified using near-infrared (NIR) hyperspectral imaging. Nowadays, some VCO is adulterated with lower-priced PKO to reduce production costs, which diminishes the quality of the VCO. This study used NIR hyperspectral imaging in the wavelength region of 900-1,650 nm to create a quantitative model for the detection of PKO contaminants (0-100%) in VCO and to develop predictive mapping. The prediction equation for the adulteration of VCO with PKO was constructed using the partial least squares regression method. The best predictive model was pre-processed using the standard normal variate method; the coefficient of determination of prediction was 0.991, the root mean square error of prediction was 2.93%, and the residual prediction deviation was 10.37. The results showed that this model could be applied to quantifying the adulteration concentration of PKO in VCO. A prediction map of the adulteration concentration of VCO with PKO was created from the calibration model, showing the color level according to the adulteration concentration in the range of 0-100%. NIR hyperspectral imaging can thus be used to quantify the adulteration of VCO with a color level map that provides a quick, accurate, and non-destructive detection method.
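Two ingredients named above, standard normal variate (SNV) pre-processing and the residual prediction deviation (RPD), are standard chemometrics operations. A brief numpy sketch (the PLS regression itself is omitted):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row)."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def rpd(y_true, y_pred):
    """Residual prediction deviation: SD of reference values / RMSEP."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmsep = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return y_true.std(ddof=1) / rmsep
```

An RPD above roughly 8 is conventionally read as excellent quantitative performance, consistent with the 10.37 reported above.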
Convolutional neural networks (CNNs) have an excellent ability to model locally contextual information. However, CNNs face challenges in describing long-range semantic features, which leads to relatively low classification accuracy on hyperspectral images. To address this problem, this article proposes an algorithm based on multiscale fusion and a transformer network for hyperspectral image classification. Firstly, the low-level spatial-spectral features are extracted by a multi-scale residual structure. Secondly, an attention module is introduced to focus on the more important spatial-spectral information. Finally, high-level semantic features are represented and learned by a token learner and an improved transformer encoder. The proposed algorithm is compared with six classical hyperspectral classification algorithms on real hyperspectral images. The experimental results show that the proposed algorithm effectively improves the land cover classification accuracy of hyperspectral images.
Disjoint sampling is critical for rigorous and unbiased evaluation of state-of-the-art (SOTA) models, e.g., Attention Graph and Vision Transformer. When training, validation, and test sets overlap or share data, a bias is introduced that inflates performance metrics and prevents accurate assessment of a model’s true ability to generalize to new examples. This paper presents an innovative disjoint sampling approach for training SOTA models for Hyperspectral Image Classification (HSIC). By separating training, validation, and test data without overlap, the proposed method facilitates a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate that the approach significantly improves a model’s generalization compared to alternatives that include training and validation data in the test data (a trivial approach involves testing the model on the entire hyperspectral dataset to generate the ground truth maps; this produces higher accuracy but ultimately results in low generalization performance). Disjoint sampling eliminates data leakage between sets and provides reliable metrics for benchmarking progress in HSIC. Disjoint sampling is critical for advancing SOTA models and their real-world application to large-scale land mapping with hyperspectral sensors. Overall, with the disjoint test set, the deep models achieve 96.36% accuracy on Indian Pines data, 99.73% on Pavia University data, 98.29% on University of Houston data, 99.43% on Botswana data, and 99.88% on Salinas data.
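A minimal sketch of per-class disjoint sampling, assuming a flat array of ground-truth labels; the function name and split fractions are illustrative, not the paper's:

```python
import random

def disjoint_split(labels, train_frac=0.6, val_frac=0.2, seed=0):
    """Split pixel indices into disjoint train/val/test sets, per class.

    labels: sequence mapping pixel index -> class id. Returns three index
    lists with no overlap, with every class represented in each set.
    """
    rng = random.Random(seed)
    by_class = {}
    for idx, cls in enumerate(labels):
        by_class.setdefault(cls, []).append(idx)
    train, val, test = [], [], []
    for cls, idxs in by_class.items():
        rng.shuffle(idxs)          # stratified: shuffle within each class
        n_tr = int(len(idxs) * train_frac)
        n_va = int(len(idxs) * val_frac)
        train += idxs[:n_tr]
        val += idxs[n_tr:n_tr + n_va]
        test += idxs[n_tr + n_va:]
    return train, val, test
```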
Graph learning, when used as a semi-supervised learning (SSL) method, performs well for classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch-neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local-maximum constraint for the active learning acquisition function that determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739) while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to further increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improvements in accuracy, our approach can greatly reduce the number of labeled pixels needed to achieve the same level of accuracy obtained with randomly selected labeled pixels.
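The graph local-maximum constraint can be sketched as follows, assuming a precomputed acquisition score per node and an adjacency list: a node joins the batch only if no neighbour scores higher, which spreads the batch out over the graph.

```python
def local_max_batch(acquisition, neighbors):
    """Select nodes whose acquisition value is a local maximum on the graph.

    acquisition: per-node scores; neighbors: per-node lists of neighbour
    indices. A node is chosen iff its score is >= all of its neighbours'.
    """
    batch = []
    for i, score in enumerate(acquisition):
        if all(score >= acquisition[j] for j in neighbors[i]):
            batch.append(i)
    return batch
```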
By automatically learning the priors embedded in images with powerful modelling capabilities, deep learning-based algorithms have recently made considerable progress in reconstructing the high-resolution hyperspectral (HR-HS) image. With previously collected large amounts of external data, these methods are intuitively realised under the full supervision of the ground-truth data. Thus, the database construction for the research paradigm of merging the low-resolution (LR) HS (LR-HS) and HR multispectral (MS) or RGB image, commonly named HSI SR, requires collecting corresponding training triplets (HR-MS (RGB), LR-HS and HR-HS images) simultaneously, and often faces difficulties in reality. Models learned from training datasets collected under controlled conditions may also significantly degrade in HSI super-resolution performance on real images captured under diverse environments. To handle the above-mentioned limitations, the authors propose to leverage deep internal and self-supervised learning to solve the HSI SR problem. The authors advocate that it is possible to train a specific CNN model at test time, called deep internal learning (DIL), by online preparing the training triplet samples from the observed LR-HS/HR-MS (or RGB) images and the down-sampled LR-HS version. However, the number of training triplets extracted solely from the transformed data of the observation itself is extremely small, particularly for HSI SR tasks with large spatial upscale factors, which would result in limited reconstruction performance. To solve this problem, the authors further exploit deep self-supervised learning (DSL) by considering the observations as unlabelled training samples. Specifically, the degradation modules inside the network were elaborated to realise the spatial and spectral down-sampling procedures for transforming the generated HR-HS estimation to the high-resolution RGB/LR-HS approximation, and then the reconstruction errors of the observations were formulated for measuring the network modelling performance. By consolidating the DIL and DSL into a unified deep framework, the authors construct a more robust HSI SR method without any prior training, with great potential for flexible adaptation to different settings per observation. To verify the effectiveness of the proposed approach, extensive experiments were conducted on two benchmark HS datasets, the CAVE and Harvard datasets, and demonstrate the great performance gain of the proposed method over state-of-the-art methods.
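The degradation modules described above amount to spatial and spectral down-sampling operators. A hedged numpy sketch follows: average pooling and a spectral response matrix are common choices, but the authors' learned modules may differ.

```python
import numpy as np

def spatial_downsample(hrhs, factor):
    """Average-pool an (H, W, B) cube by `factor` in both spatial dims."""
    H, W, B = hrhs.shape
    h, w = H // factor, W // factor
    # Crop to a multiple of `factor`, then pool over factor x factor blocks.
    return hrhs[:h * factor, :w * factor].reshape(
        h, factor, w, factor, B).mean(axis=(1, 3))

def spectral_downsample(hrhs, srf):
    """Project B hyperspectral bands to C channels with a (C, B) spectral
    response matrix (rows summing to 1), e.g. to synthesise an RGB image."""
    return hrhs @ srf.T
```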
The vegetation growth status largely represents the ecosystem function and environmental quality. Hyperspectral remote sensing data can effectively eliminate the effects of surface spectral reflectance and atmospheric scattering and directly reflect vegetation parameter information. In this study, an abandoned mining area in the Helan Mountains, China was taken as the study area. Based on hyperspectral remote sensing images from the Zhuhai No. 1 hyperspectral satellite, we used the pixel dichotomy model, constructed from the normalized difference vegetation index (NDVI), to estimate the vegetation coverage of the study area, and evaluated the vegetation growth status using five vegetation indices (NDVI, ratio vegetation index (RVI), photochemical vegetation index (PVI), red-green ratio index (RGI), and anthocyanin reflectance index 1 (ARI1)). According to the results, the reclaimed vegetation growth status in the study area can be divided into four levels (unhealthy, low healthy, healthy, and very healthy). The overall vegetation growth status in the study area was generally at the low healthy level, indicating that vegetation growth was poor due to the short restoration period and the harsh damaged environment, such as high and steep rock slopes. Furthermore, the unhealthy areas were mainly located in Dawukougou, where abandoned mines are concentrated, indicating that the original mining activities have had a large effect on vegetation ecology. After ecological restoration of the abandoned mines, the vegetation coverage in the study area increased to a certain extent, but the amplitude was not large. Vegetation coverage in the northern part of the study area was worse than that in the southern part, because the abandoned mines are mainly concentrated in the northern part of the Helan Mountains. The combination of hyperspectral remote sensing data and vegetation indices can comprehensively extract the characteristics of vegetation, accurately analyze plant growth status, and provide technical support for vegetation health evaluation.
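The pixel dichotomy model used above treats each pixel as a linear mixture of bare soil and full vegetation, so that FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil). A minimal sketch (the soil and full-vegetation endmember values are scene-specific inputs):

```python
import numpy as np

def fractional_vegetation_cover(ndvi, ndvi_soil, ndvi_veg):
    """Pixel dichotomy model: fractional vegetation cover from NDVI.

    ndvi_soil and ndvi_veg are the NDVI of pure bare-soil and pure
    full-vegetation pixels; the result is clipped to [0, 1].
    """
    fvc = (np.asarray(ndvi, dtype=float) - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)
```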
The deterioration of unstable rock mass has raised interest in evaluating rock mass quality. However, the traditional evaluation method for the geological strength index (GSI) primarily emphasizes the rock structure and characteristics of discontinuities. It ignores the influence of mineral composition and shows a deficiency in assessing the integrity coefficient. In this context, hyperspectral imaging and digital panoramic borehole camera technologies are applied to analyze the mineral content and integrity of rock mass. Based on the carbonate mineral content and the fissure area ratio, the strength reduction factor and integrity coefficient are calculated to improve the GSI evaluation method. According to the results of mineral classification and fissure identification, the strength reduction factor and integrity coefficient increase with the depth of the rock mass. The rock mass GSI calculated by the improved method is mainly concentrated between 40 and 60, which is close to the calculation results of the traditional method. The GSI error rates obtained by the two methods are mostly less than 10%, indicating the rationality of the hyperspectral-digital borehole image coupled evaluation method. Moreover, the sensitivity of the fissure area ratio (Sr) to GSI is greater than that of the strength reduction factor (a), which means the proposed GSI is suitable for rocks with significant fissure development. The improved method reduces the influence of subjective factors and provides a reliable index for the deterioration evaluation of rock mass.
Mural paintings hold significant historical information and possess substantial artistic and cultural value. However, murals are inevitably damaged by natural environmental factors such as wind and sunlight, as well as by human activities. For this reason, the study of damaged areas is crucial for mural restoration. These damaged regions differ significantly from undamaged areas and can be considered abnormal targets. Traditional manual visual processing lacks strong characterization capabilities and is prone to omissions and false detections. Hyperspectral imaging can reflect material properties more effectively than visual characterization methods. Thus, this study employs hyperspectral imaging to obtain mural information and proposes a mural anomaly detection algorithm based on a hyperspectral multi-scale residual attention network (HM-MRANet). The innovations of this paper include: (1) constructing hyperspectral datasets of mural paintings; (2) proposing a multi-scale residual spectral-spatial feature extraction module based on a 3D CNN (convolutional neural network) to better capture multiscale information and improve performance on small-sample hyperspectral datasets; (3) proposing an Enhanced Residual Attention Module (ERAM) to address the feature redundancy problem, enhance the network’s feature discrimination ability, and further improve abnormal area detection accuracy. The experimental results show that the AUC (area under curve), specificity, and accuracy of the proposed algorithm reach 85.42%, 88.84%, and 87.65%, respectively, on this dataset. These results represent improvements of 3.07%, 1.11% and 2.68% over the SSRN algorithm, demonstrating the effectiveness of this method for mural anomaly detection.
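The reported AUC can be computed from per-pixel anomaly scores with the rank (Mann-Whitney) formulation; a dependency-free sketch, with labels coded 1 for damaged/anomalous pixels and 0 for background:

```python
def auc_from_scores(scores, labels):
    """Area under the ROC curve: the probability that a random anomalous
    pixel scores higher than a random background pixel (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The quadratic pairwise loop is for clarity; a sort-based implementation brings this to O(n log n) for full-image score maps.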
Marine oil spill emulsions are difficult to recover, and the damage they cause to the environment is not easy to eliminate. The use of remote sensing to accurately identify oil spill emulsions is highly important for the protection of marine environments. However, the spectrum of oil emulsions changes with water content. Hyperspectral remote sensing and deep learning can use spectral and spatial information to identify different types of oil emulsions. Nonetheless, hyperspectral data can also introduce information redundancy, reducing classification accuracy and efficiency, and even causing overfitting in machine learning models. To address these problems, an oil emulsion deep-learning identification model with spatial-spectral feature fusion is established, and feature bands that can distinguish between crude oil, seawater, water-in-oil emulsion (WO), and oil-in-water emulsion (OW) are filtered based on a standard deviation threshold-mutual information method. Using airborne hyperspectral data of oil spills, we conducted identification experiments on oil emulsions in different background waters and under different spatial and temporal conditions, analyzed the transferability of the model, and explored the effects of feature band selection and spectral resolution on the identification of oil emulsions. The results show the following. (1) The standard deviation-mutual information feature selection method is able to effectively extract feature bands that can distinguish between WO, OW, oil slick, and seawater. The number of bands was reduced from 224 to 134 after feature selection on the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) data, and from 126 to 100 on the S185 data. (2) With feature selection, the overall accuracy and Kappa of the identification results for the training area are 91.80% and 0.86, respectively, improved by 2.62% and 0.04, and the overall accuracy and Kappa of the identification results for the migration area are 86.53% and 0.80, respectively, improved by 3.45% and 0.05. (3) The oil emulsion identification model has a certain degree of transferability and can effectively identify oil spill emulsions in AVIRIS data at different times and locations, with an overall accuracy of more than 80%, a Kappa coefficient of more than 0.7, and an F1 score of 0.75 or more for each category. (4) As the spectral resolution decreases, the model yields different degrees of misclassification for areas with a mixed distribution of oil slick and seawater or a mixed distribution of WO and OW. Based on the above experimental results, we demonstrate that the oil emulsion identification model with spatial-spectral feature fusion achieves a high accuracy rate in identifying oil emulsions using airborne hyperspectral data, and can be applied to images under different spatial and temporal conditions. Furthermore, we also elucidate the impact of factors such as spectral resolution and background water bodies on the identification process. These findings provide a new reference for future endeavors in automated marine oil spill detection.
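A hedged sketch of the standard deviation threshold-mutual information band selection idea: discard low-variability bands, then rank the survivors by mutual information with the class labels. The threshold, histogram bin count, and top-k here are illustrative, as the abstract does not give the paper's exact settings.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """MI (nats) between a band's values and integer class labels,
    estimated from a joint histogram."""
    xi = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    joint = np.zeros((bins, int(y.max()) + 1))
    for a, b in zip(xi, y.astype(int)):
        joint[a, b] += 1
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def select_bands(cube, labels, std_thresh, top_k):
    """Std-threshold filter, then keep the top_k bands by MI with labels.

    cube: (pixels, bands) matrix; labels: per-pixel class ids.
    """
    stds = cube.std(axis=0)
    candidates = [b for b in range(cube.shape[1]) if stds[b] >= std_thresh]
    candidates.sort(key=lambda b: -mutual_information(cube[:, b], labels))
    return sorted(candidates[:top_k])
```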
The accurate identification of marine oil spills and their emulsions is of great significance for emergency response to oil spill pollution. The selection of characteristic bands with strong separability helps to realize rapid computation on aircraft or in orbit, which will improve the timeliness of oil spill emergency monitoring. At the same time, combining spectral and spatial features can improve the accuracy of oil spill monitoring. Two ground-based experiments were designed to collect measured airborne hyperspectral data of crude oil and its emulsions, for which the multiscale superpixel-level group clustering framework (MSGCF) was used to select spectral feature bands with strong separability. In addition, the double-branch dual-attention (DBDA) model was applied to identify crude oil and its emulsions. Compared with the recognition results based on the original hyperspectral images, using the feature bands determined by MSGCF improved the recognition accuracy and greatly shortened the running time. Moreover, the characteristic bands for quantifying the volume concentration of water-in-oil emulsions were determined, and a quantitative inversion model was constructed and applied to the AVIRIS image of the Deepwater Horizon oil spill event in 2010. This study verified the effectiveness of feature bands in identifying oil spill pollution types and quantifying concentration, laying a foundation for rapid identification and quantification of marine oil spills and their emulsions on aircraft or in orbit.
Hyperspectral image classification stands as a pivotal task within the field of remote sensing, yet achieving high-precision classification remains a significant challenge. In response to this challenge, a Spectral Convolutional Neural Network model based on the Adaptive Fick’s Law Algorithm (AFLA-SCNN) is proposed. The Adaptive Fick’s Law Algorithm (AFLA) is a novel metaheuristic algorithm introduced herein, encompassing three new strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. With the adaptive weight factor, the algorithm can adjust the weights according to the change in the number of iterations to improve performance. Gaussian mutation helps the algorithm avoid falling into local optimal solutions and improves its searchability. The probability update strategy helps to improve the exploitability and adaptability of the algorithm. Within the AFLA-SCNN model, AFLA is employed to optimize two hyperparameters of the SCNN model, namely “numEpochs” and “miniBatchSize”, to attain their optimal values. AFLA’s performance is initially validated across 28 functions in 10D, 30D, and 50D for CEC2013 and 29 functions in 10D, 30D, and 50D for CEC2017. Experimental results indicate AFLA’s marked performance superiority over nine other prominent optimization algorithms. Subsequently, the AFLA-SCNN model was compared with a Spectral Convolutional Neural Network model based on the Fick’s Law Algorithm (FLA-SCNN), a Spectral Convolutional Neural Network model based on Harris Hawks Optimization (HHO-SCNN), a Spectral Convolutional Neural Network model based on Differential Evolution (DE-SCNN), the Spectral Convolutional Neural Network (SCNN) model, and a Support Vector Machines (SVM) model using the Indian Pines and Pavia University datasets. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of Accuracy, Precision, Recall, and F1-score on Indian Pines and Pavia University. Notably, the Accuracy of the AFLA-SCNN model reached 99.875% on Indian Pines and 98.022% on Pavia University. In conclusion, the proposed AFLA-SCNN model significantly enhances the precision of hyperspectral image classification.
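Hyperparameter optimization of “numEpochs” and “miniBatchSize” can be illustrated with a much-simplified search loop that keeps only the Gaussian-mutation ingredient of AFLA; this is a stand-in sketch, not the published algorithm, and the objective would in practice be validation accuracy of the trained SCNN:

```python
import random

def optimize_hyperparams(objective, bounds, iters=200, seed=0):
    """Toy hill-climbing search with Gaussian mutation over integer
    hyperparameters such as numEpochs and miniBatchSize.

    bounds: {name: (low, high)}; objective: params dict -> score,
    higher is better.
    """
    rng = random.Random(seed)
    best = {k: rng.randint(lo, hi) for k, (lo, hi) in bounds.items()}
    best_score = objective(best)
    for _ in range(iters):
        # Mutate the incumbent with Gaussian noise scaled to each range,
        # then clip the candidate back into bounds.
        cand = {k: min(max(int(round(rng.gauss(best[k], 0.1 * (hi - lo)))), lo), hi)
                for k, (lo, hi) in bounds.items()}
        score = objective(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```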
Hyperspectral image super-resolution, which refers to reconstructing a high-resolution hyperspectral image from an input low-resolution observation, aims to improve the spatial resolution of the hyperspectral image, which is beneficial for subsequent applications. The development of deep learning has promoted significant progress in hyperspectral image super-resolution, and the powerful expression capabilities of deep neural networks make the predicted results more reliable. Recently, several of the latest deep learning technologies have driven rapid growth in hyperspectral image super-resolution methods. However, a comprehensive review and analysis of the latest deep learning methods from the hyperspectral image super-resolution perspective is absent. To this end, in this survey, we first introduce the concept of hyperspectral image super-resolution and classify the methods according to whether they use auxiliary information. Then, we review the learning-based methods in three categories: single hyperspectral image super-resolution, panchromatic-based hyperspectral image super-resolution, and multispectral-based hyperspectral image super-resolution. Subsequently, we summarize the commonly used hyperspectral datasets, and evaluations of some representative methods in the three categories are performed qualitatively and quantitatively. Moreover, we briefly introduce several typical applications of hyperspectral image super-resolution, including ground object classification, urban change detection, and ecosystem monitoring. Finally, we provide conclusions and discuss the challenges of existing learning-based methods, looking forward to potential future research directions.
With a limited number of labeled samples, hyperspectral image (HSI) classification is a difficult problem in current research. The graph neural network (GNN) has emerged as an approach to semi-supervised classification, and the application of GNNs to hyperspectral images has attracted much attention. However, existing GNN-based methods mainly use a single graph neural network or graph filter to extract HSI features, which does not take full advantage of the variety of graph neural networks (graph filters). Moreover, traditional GNNs suffer from oversmoothing. To alleviate these shortcomings, we introduce a deep hybrid multi-graph neural network (DHMG), where two different graph filters, i.e., the spectral filter and the autoregressive moving average (ARMA) filter, are utilized in two branches. The former can effectively extract the spectral features of the nodes, and the latter has a good suppression effect on graph noise. The network realizes information interaction between the two branches and takes good advantage of the different graph filters. In addition, to address the problem of oversmoothing, a dense network is proposed, where the local graph features are preserved. The dense structure satisfies the needs of different classification targets presenting different features. Finally, we introduce a GraphSAGE-based network to refine the graph features produced by the deep hybrid network. Extensive experiments on three public HSI datasets strongly demonstrate that DHMG dramatically outperforms the state-of-the-art models.
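The oversmoothing problem mentioned above is easy to demonstrate: a fixed, parameter-free spectral graph filter applied repeatedly drives node features toward a constant. A small numpy sketch (illustrative only, the DHMG filters are learned):

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalisation D^{-1/2} (A + I) D^{-1/2} with self-loops,
    as used by spectral graph filters."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return d_inv_sqrt @ A @ d_inv_sqrt

def graph_filter(A, X, layers=2):
    """Repeated neighbourhood averaging of node features X over graph A.

    Stacking many layers illustrates oversmoothing: features of connected
    nodes converge toward each other.
    """
    S = normalized_adjacency(A)
    for _ in range(layers):
        X = S @ X
    return X
```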
Although airborne hyperspectral data with detailed spatial and spectral information have demonstrated significant potential for tree species classification, they have not been widely used over large areas. A comprehensive processing workflow based on multi-flightline airborne hyperspectral data has been lacking for large forested areas affected by both bidirectional reflectance distribution function (BRDF) effects and cloud shadow contamination. In this study, hyperspectral data were collected over the Mengjiagang Forest Farm in Northeast China in the summer of 2017 using the Chinese Academy of Forestry's LiDAR, CCD, and hyperspectral systems (CAF-LiCHy). After BRDF correction and cloud shadow detection, a tree species classification workflow was developed for sunlit and cloud-shaded forest areas with input features of minimum-noise-fraction reduced bands, spectral vegetation indices, and texture information. Results indicate that BRDF-corrected sunlit hyperspectral data can provide stable and high classification accuracy based on representative training data. Cloud-shaded pixels also have good spectral separability for species classification. Red-edge spectral information and ratio-based spectral indices with high importance scores are recommended as input features for species classification under varying light conditions. According to the classification accuracies assessed against field survey data at multiple spatial scales, species classification within an extensive forest area using airborne hyperspectral data under various illuminations can be successfully carried out using the proposed radiometric consistency processing and feature selection strategy.
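The ratio-based spectral indices recommended above largely share the normalized-difference form; a tiny sketch, where the choice of red-edge bands (e.g. reflectance near 750 and 705 nm) is illustrative rather than taken from the paper:

```python
def normalized_ratio_index(r_a, r_b):
    """Generic normalized-difference index, e.g. a red-edge NDVI with
    r_a = reflectance near 750 nm and r_b = reflectance near 705 nm.
    Being a ratio, it partially cancels illumination differences between
    sunlit and cloud-shaded pixels."""
    return (r_a - r_b) / (r_a + r_b)
```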
Sanxingdui cultural relics are precious cultural heritage of humanity, with high historical, scientific, cultural, artistic and research value. However, mainstream analytical methods involve contact and are destructive, which is unfavorable to the protection of cultural relics. To improve the accuracy of cultural relic extraction, positioning, and analysis, a segmentation algorithm for Sanxingdui cultural relics based on a spatial-spectral integrated network is proposed with the support of hyperspectral techniques. Firstly, a region stitching algorithm based on the relative positions of the hyperspectrally collected data is proposed to improve stitching efficiency. Secondly, given the strength of traditional HRNet (High-Resolution Net) models in high-resolution data processing, a spatial attention mechanism is put forward to obtain spatial-dimension information. Thirdly, in view of the strength of 3D networks in spectral information acquisition, a pyramid 3D residual network model is proposed to obtain internal spectral-dimension information. Fourthly, four fusion methods at the data and decision levels are presented to achieve cultural relic labeling. As shown by the experimental results, the proposed network, adopting an integrated data-level and decision-level method, achieves an optimal average identification accuracy of 0.84, realizes shallow-coverage labeling of cultural relics, and effectively supports the extraction and protection of cultural relics.
Recently, autoencoder (AE) based methods have played a critical role in the hyperspectral anomaly detection domain. However, due to the strong generalisation capacity of AEs, abnormal samples are usually reconstructed well along with the normal background samples. Thus, in order to separate anomalies from the background by calculating reconstruction errors, it can be greatly beneficial to reduce the AE's capacity for abnormal sample reconstruction while maintaining its background reconstruction performance. A memory-augmented autoencoder for hyperspectral anomaly detection (MAENet) is proposed to address this challenging problem. Specifically, the proposed MAENet consists of an encoder, a memory module, and a decoder. First, the encoder transforms the original hyperspectral data into a low-dimensional latent representation. Then, the latent representation is used to retrieve the most relevant items in the memory matrix, and the retrieved items replace the latent representation from the encoder. Finally, the decoder reconstructs the input hyperspectral data using the retrieved memory items. With this strategy, the background can still be reconstructed well while the abnormal samples cannot. Experiments conducted on five real hyperspectral anomaly data sets demonstrate the superiority of the proposed method.
Peach aphids are a common pest and are hard to detect. This study employs hyperspectral imaging technology to identify early damage in green cabbage caused by the peach aphid. Through principal component transformation and multiple linear regression analysis, the correlation between spectral characteristics and infestation stage is analyzed. Then, four characteristic wavelength selection methods are compared, and the optimal characteristic wavelength subset is determined as input for modelling. One linear algorithm and two nonlinear modelling algorithms are compared. Finally, a support vector machine (SVM) model based on the characteristic wavelengths selected by multi-cluster feature selection (MCFS) achieves the highest identification accuracy, 98.97%. These results indicate that hyperspectral imaging technology has the ability to identify early peach aphid infestation stages on green cabbage.
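The wavelength-selection-plus-SVM pipeline described in this abstract can be sketched as follows. This is an illustrative stand-in on synthetic spectra, not the authors' code: the band indices, the injected spectral signature, and all data are hypothetical, and a plain RBF-kernel SVC with a fixed band subset takes the place of the MCFS selection step.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_bands = 200, 120
X = rng.normal(size=(n_samples, n_bands))   # synthetic reflectance spectra
y = rng.integers(0, 2, size=n_samples)      # 0 = healthy, 1 = infested
X[y == 1, 30:35] += 1.5                     # injected "infestation" signature

selected = [30, 31, 32, 33, 34]             # assumed characteristic wavelengths
X_sel = X[:, selected]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)                 # accuracy on held-out spectra
```

Restricting the classifier to a few informative wavelengths, as the paper does, keeps the model small enough for fast inference while retaining the discriminative signal.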
Recently, deep learning has achieved considerable results in hyperspectral image (HSI) classification. However, most available deep networks require ample, authentic samples to train well, which is expensive and inefficient in practical tasks. Existing few-shot learning (FSL) methods generally ignore the potential relationships between non-local spatial samples that would better represent the underlying features of HSI. To solve these issues, a novel deep transformer and few-shot learning (DT-FSL) classification framework is proposed, attempting to realize fine-grained classification of HSI with only a few instances. Specifically, spatial attention and spectral query modules are introduced to overcome the constraint of the convolution kernel and consider the information between long-distance (non-local) samples to reduce class uncertainty. Next, the network is trained with episode- and task-based learning strategies to learn a metric space, which can continuously enhance its modelling capability. Furthermore, the developed approach combines the advantages of domain adaptation to reduce variation in inter-domain distribution and realize distribution alignment. On three publicly available HSI datasets, extensive experiments indicate that the proposed DT-FSL yields better results than state-of-the-art algorithms.
Funding: Yulin Science and Technology Bureau Production Project "Research on Smart Agricultural Product Traceability System" (No. CXY-2022-64); Light of West China (No. XAB2022YN10); the China Postdoctoral Science Foundation (No. 2023M740760); Shaanxi Province Key Research and Development Plan (No. 2024SF-YBXM-678).
Abstract: Hyperspectral imagery encompasses spectral and spatial dimensions, reflecting the material properties of objects. Its application proves crucial in search and rescue, concealed target identification, and crop growth analysis. Clustering is an important method of hyperspectral analysis. The vast data volume of hyperspectral imagery, coupled with redundant information, poses significant challenges in swiftly and accurately extracting features for subsequent analysis. Current hyperspectral feature clustering methods, mostly studied from the spatial or spectral perspective, lack strong interpretability, resulting in poor comprehensibility of the algorithms. Therefore, this research introduces a feature clustering algorithm for hyperspectral imagery from an interpretability perspective. It commences with a simulated perception process, proposing an interpretable band selection algorithm to reduce data dimensions. Following this, a multi-dimensional clustering algorithm, rooted in fuzzy and kernel clustering, is developed to highlight intra-class similarities and inter-class differences. An optimized P system is then introduced to enhance computational efficiency. This system coordinates all cells within a mapping space to compute optimal cluster centers, facilitating parallel computation. This approach diminishes sensitivity to initial cluster centers and augments global search capabilities, thus preventing entrapment in local minima and enhancing clustering performance. Experiments were conducted on 300 datasets comprising both real and simulated data. The results show that the average accuracy (ACC) of the proposed algorithm is 0.86 and the combination measure (CM) is 0.81.
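As a rough illustration of the fuzzy-clustering component mentioned in this abstract (the kernel mapping and the P-system parallelization are beyond this sketch), a minimal fuzzy c-means loop in NumPy might look like the following; the data, parameter values, and initialization are made up for the example.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: soft memberships U (rows sum to 1) and centers V."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]          # fuzzily weighted centers
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        U = (d ** -p) / np.sum(d ** -p, axis=1, keepdims=True)  # membership update
    return U, V

# Two well-separated synthetic "pixel spectra" groups
X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (50, 4)),
               np.random.default_rng(2).normal(3, 0.3, (50, 4))])
U, V = fuzzy_c_means(X)
labels = U.argmax(axis=1)
```

Unlike hard k-means, each pixel keeps a graded membership in every cluster, which is what lets the paper's method express intra-class similarity and inter-class difference as continuous quantities.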
Funding: the National Natural Science Foundation of China (Nos. 61471263, 61872267 and U21B2024); the Natural Science Foundation of Tianjin, China (No. 16JCZDJC31100); Tianjin University Innovation Foundation (No. 2021XZC0024).
Abstract: Hyperspectral images typically have high spectral resolution but low spatial resolution, which impacts the reliability and accuracy of subsequent applications, for example, remote sensing classification and mineral identification. In traditional methods based on deep convolutional neural networks, however, indiscriminately extracting and fusing spectral and spatial features makes it challenging to utilize the differentiated information across adjacent spectral channels. Thus, we propose a multi-branch interleaved iterative upsampling hyperspectral image super-resolution reconstruction network (MIIUSR) to address the above problems. We reinforce spatial feature extraction by integrating detailed features from different receptive fields across adjacent channels. Furthermore, we propose an interleaved iterative upsampling process during the reconstruction stage, which progressively fuses incremental information among adjacent frequency bands. Additionally, we add two parallel three-dimensional (3D) feature extraction branches to the backbone network to extract spectral and spatial features of varying granularity. We further enhance the backbone network's reconstruction results by leveraging the difference between two-dimensional (2D) channel-grouping spatial features and 3D multi-granularity features. The results obtained by applying the proposed network model to the CAVE test set show that, at a scaling factor of ×4, the peak signal-to-noise ratio, spectral angle mapping, and structural similarity are 37.310 dB, 3.525 and 0.9438, respectively. Moreover, extensive experiments conducted on the Harvard and Foster datasets demonstrate the superior potential of the proposed model in hyperspectral super-resolution reconstruction.
Funding: supported by the Thailand Research Fund through the Royal Golden Jubilee Ph.D. Program (PHD/0225/2561) and the Faculty of Engineering, Kamphaeng Saen Campus, Kasetsart University, Thailand.
Abstract: The adulteration concentration of palm kernel oil (PKO) in virgin coconut oil (VCO) was quantified using near-infrared (NIR) hyperspectral imaging. Nowadays, some VCO is adulterated with lower-priced PKO to reduce production costs, which diminishes the quality of the VCO. This study used NIR hyperspectral imaging in the wavelength region 900-1,650 nm to create a quantitative model for the detection of PKO contaminants (0-100%) in VCO and to develop predictive mapping. The prediction equation for the adulteration of VCO with PKO was constructed using the partial least squares regression method. The best predictive model was pre-processed using the standard normal variate method; the coefficient of determination of prediction was 0.991, the root mean square error of prediction was 2.93%, and the residual prediction deviation was 10.37. The results showed that this model could be applied to quantify the adulteration concentration of PKO in VCO. The predicted adulteration concentration mapping of VCO with PKO was created from a calibration model that showed the color level according to the adulteration concentration in the range of 0-100%. NIR hyperspectral imaging can thus quantify the adulteration of VCO with a color-level map, providing a quick, accurate, and non-destructive detection method.
Funding: National Natural Science Foundation of China (No. 62201457); Natural Science Foundation of Shaanxi Province (Nos. 2022JQ-668, 2022JQ-588).
Abstract: Convolutional neural networks (CNNs) have an excellent ability to model locally contextual information. However, CNNs face challenges in describing long-range semantic features, which leads to relatively low classification accuracy on hyperspectral images. To address this problem, this article proposes an algorithm based on multiscale fusion and a transformer network for hyperspectral image classification. Firstly, low-level spatial-spectral features are extracted by a multi-scale residual structure. Secondly, an attention module is introduced to focus on the more important spatial-spectral information. Finally, high-level semantic features are represented and learned by a token learner and an improved transformer encoder. The proposed algorithm is compared with six classical hyperspectral classification algorithms on real hyperspectral images. The experimental results show that the proposed algorithm effectively improves the land-cover classification accuracy of hyperspectral images.
基金the Researchers Supporting Project number(RSPD2024R848),King Saud University,Riyadh,Saudi Arabia.
Abstract: Disjoint sampling is critical for rigorous and unbiased evaluation of state-of-the-art (SOTA) models, e.g., Attention Graph and Vision Transformer. When training, validation, and test sets overlap or share data, a bias is introduced that inflates performance metrics and prevents accurate assessment of a model's true ability to generalize to new examples. This paper presents an innovative disjoint sampling approach for training SOTA models for Hyperspectral Image Classification (HSIC). By separating training, validation, and test data without overlap, the proposed method facilitates a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate that the approach significantly improves a model's generalization compared to alternatives that include training and validation data in the test data (a trivial approach involves testing the model on the entire hyperspectral dataset to generate the ground-truth maps; this produces higher accuracy but ultimately results in low generalization performance). Disjoint sampling eliminates data leakage between sets and provides reliable metrics for benchmarking progress in HSIC, and it is critical for advancing SOTA models and their real-world application to large-scale land mapping with hyperspectral sensors. Overall, with the disjoint test set, the deep models achieve 96.36% accuracy on the Indian Pines data, 99.73% on the Pavia University data, 98.29% on the University of Houston data, 99.43% on the Botswana data, and 99.88% on the Salinas data.
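The disjoint sampling idea itself is simple to express: shuffle the pixel indices once, then cut the permutation into non-overlapping sets. This is a generic sketch of the principle (split ratios and pixel count are invented), not the paper's exact procedure.

```python
import numpy as np

def disjoint_split(n_pixels, train=0.6, val=0.2, seed=0):
    """Shuffle pixel indices once, then cut into non-overlapping subsets."""
    idx = np.random.default_rng(seed).permutation(n_pixels)
    n_tr = int(n_pixels * train)
    n_va = int(n_pixels * val)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

tr, va, te = disjoint_split(10_000)
# No pixel index appears in more than one set, so there is no data leakage
overlap = (set(tr) & set(va)) | (set(tr) & set(te)) | (set(va) & set(te))
```

The leakage-free property is exactly what makes the reported test accuracies a fair estimate of generalization.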
Funding: supported by the UC-National Lab In-Residence Graduate Fellowship (Grant L21GF3606); a DOD National Defense Science and Engineering Graduate (NDSEG) Research Fellowship; the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project numbers 20170668PRD1 and 20210213ER; and the NGA under Contract No. HM04762110003.
Abstract: Graph learning, when used as a semi-supervised learning (SSL) method, performs well for classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch-neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local-maximum constraint for the active learning acquisition function that determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739) while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to further increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improvements in accuracy, our approach can greatly reduce the number of labeled pixels needed to achieve a given level of accuracy relative to randomly selected labeled pixels.
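The "graph local maximum" batch rule described in this abstract can be sketched as: select the unlabeled nodes whose acquisition value exceeds that of all their graph neighbours. The tiny path graph and random scores below are invented for illustration; the acquisition function in the paper (e.g., Model Change) is replaced by arbitrary values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
adj = np.zeros((n, n), dtype=bool)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7)]:
    adj[i, j] = adj[j, i] = True            # simple path graph

acq = rng.random(n)                         # stand-in acquisition scores
batch = [i for i in range(n)
         if all(acq[i] > acq[j] for j in np.where(adj[i])[0])]
```

Because two adjacent nodes cannot both beat each other, the selected batch is automatically non-adjacent, which spreads the queried labels across the graph instead of clustering them.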
Funding: Ministry of Education, Culture, Sports, Science and Technology, Grant/Award Number: 20K11867.
Abstract: By automatically learning the priors embedded in images with powerful modelling capabilities, deep learning-based algorithms have recently made considerable progress in reconstructing high-resolution hyperspectral (HR-HS) images. With previously collected large amounts of external data, these methods are intuitively realised under the full supervision of ground-truth data. Thus, database construction for the research paradigm of merging a low-resolution hyperspectral (LR-HS) image with an HR multispectral (MS) or RGB image, commonly named HSI SR, requires collecting corresponding training triplets (HR-MS (RGB), LR-HS, and HR-HS images) simultaneously, and often faces difficulties in reality. Models learned from training datasets collected under controlled conditions may significantly degrade in super-resolution performance on real images captured under diverse environments. To handle the above-mentioned limitations, the authors propose to leverage deep internal and self-supervised learning to solve the HSI SR problem. The authors advocate that it is possible to train a specific CNN model at test time, called deep internal learning (DIL), by online preparing the training triplet samples from the observed LR-HS/HR-MS (or RGB) images and the down-sampled LR-HS version. However, the number of training triplets extracted solely from the transformed data of the observation itself is extremely small, particularly for HSI SR tasks with large spatial upscale factors, which would result in limited reconstruction performance. To solve this problem, the authors further exploit deep self-supervised learning (DSL) by considering the observations as unlabelled training samples. Specifically, degradation modules inside the network were elaborated to realise the spatial and spectral down-sampling procedures for transforming the generated HR-HS estimation to the high-resolution RGB/LR-HS approximation, and the reconstruction errors of the observations were then formulated for measuring the network modelling performance. By consolidating the DIL and DSL into a unified deep framework, the authors construct a more robust HSI SR method without any prior training, with great potential for flexible adaptation to different settings per observation. To verify the effectiveness of the proposed approach, extensive experiments have been conducted on two benchmark HS datasets, including the CAVE and Harvard datasets, and demonstrate the great performance gain of the proposed method over state-of-the-art methods.
Funding: supported by the Ningxia Hui Autonomous Region Key Research and Development Plan (2022BEG03052).
Abstract: Vegetation growth status largely represents ecosystem function and environmental quality. Hyperspectral remote sensing data can effectively eliminate the effects of surface spectral reflectance and atmospheric scattering and directly reflect vegetation parameter information. In this study, an abandoned mining area in the Helan Mountains, China was taken as the study area. Based on hyperspectral remote sensing images from the Zhuhai No. 1 hyperspectral satellite, we used the pixel dichotomy model, constructed using the normalized difference vegetation index (NDVI), to estimate the vegetation coverage of the study area, and evaluated vegetation growth status using five vegetation indices (NDVI, ratio vegetation index (RVI), photochemical vegetation index (PVI), red-green ratio index (RGI), and anthocyanin reflectance index 1 (ARI1)). According to the results, the reclaimed vegetation growth status in the study area can be divided into four levels (unhealthy, low healthy, healthy, and very healthy). The overall vegetation growth status was generally at the low healthy level, indicating that vegetation growth in the study area was poor due to the short restoration period and the harsh damaged environment, such as high, steep rock slopes. Furthermore, the unhealthy areas were mainly located in Dawukougou, where abandoned mines were concentrated, indicating that the original mining activities had a large effect on vegetation ecology. After ecological restoration of abandoned mines, the vegetation coverage in the study area increased to a certain extent, but not by a large amplitude. Vegetation coverage in the northern part of the study area was worse than in the southern part, because abandoned mines are mainly concentrated in the northern part of the Helan Mountains. The combination of hyperspectral remote sensing data and vegetation indices can comprehensively extract vegetation characteristics, accurately analyze plant growth status, and provide technical support for vegetation health evaluation.
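The NDVI-based pixel dichotomy model mentioned above treats each pixel as a mixture of bare soil and full vegetation, so fractional cover follows from linear unmixing of the NDVI value between the two endmembers. A minimal sketch, with assumed endmember NDVI values:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def vegetation_coverage(ndvi_pixel, ndvi_soil, ndvi_veg):
    """Pixel dichotomy model: fractional vegetation cover from NDVI endmembers."""
    fvc = (ndvi_pixel - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)

# Assumed endmembers: bare-soil NDVI = 0.05, full-vegetation NDVI = 0.85
fvc = vegetation_coverage(np.array([0.05, 0.45, 0.85]),
                          ndvi_soil=0.05, ndvi_veg=0.85)
# fvc == [0.0, 0.5, 1.0]: bare soil, half cover, full cover
```

In practice the endmember NDVI values are usually taken from the image histogram (e.g., low and high percentiles), which is why the estimated coverage is scene-specific.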
Funding: supported by the National Key R&D Program of China (Grant Nos. 2021YFB3901403 and 2023YFC3007203).
Abstract: The deterioration of unstable rock mass has raised interest in evaluating rock mass quality. However, the traditional evaluation method for the geological strength index (GSI) primarily emphasizes rock structure and the characteristics of discontinuities; it ignores the influence of mineral composition and shows a deficiency in assessing the integrity coefficient. In this context, hyperspectral imaging and digital panoramic borehole camera technologies are applied to analyze the mineral content and integrity of rock mass. Based on the carbonate mineral content and the fissure area ratio, the strength reduction factor and integrity coefficient are calculated to improve the GSI evaluation method. According to the results of mineral classification and fissure identification, the strength reduction factor and integrity coefficient increase with the depth of the rock mass. The rock mass GSI calculated by the improved method is mainly concentrated between 40 and 60, which is close to the results of the traditional method. The GSI error rates between the two methods are mostly less than 10%, indicating the rationality of the coupled hyperspectral-digital borehole image evaluation method. Moreover, the sensitivity of the fissure area ratio (Sr) to GSI is greater than that of the strength reduction factor (a), which means the proposed GSI is suitable for rocks with significant fissure development. The improved method reduces the influence of subjective factors and provides a reliable index for the deterioration evaluation of rock mass.
Funding: supported by the Key Research and Development Plan of the Ministry of Science and Technology (No. 2023YFF0906200); Shaanxi Key Research and Development Plan (No. 2018ZDXM-SF-093); Shaanxi Province Key Industrial Innovation Chain (Nos. S2022-YF-ZDCXL-ZDLGY-0093 and 2023-ZDLGY-45); Light of West China (No. XAB2022YN10); the China Postdoctoral Science Foundation (No. 2023M740760); Shaanxi Province Key Research and Development Plan (No. 2024SF-YBXM-678).
Abstract: Mural paintings hold significant historical information and possess substantial artistic and cultural value. However, murals are inevitably damaged by natural environmental factors such as wind and sunlight, as well as by human activities. For this reason, the study of damaged areas is crucial for mural restoration. These damaged regions differ significantly from undamaged areas and can be considered abnormal targets. Traditional manual visual processing lacks strong characterization capabilities and is prone to omissions and false detections. Hyperspectral imaging can reflect material properties more effectively than visual characterization methods. Thus, this study employs hyperspectral imaging to obtain mural information and proposes a mural anomaly detection algorithm based on a hyperspectral multi-scale residual attention network (HM-MRANet). The innovations of this paper include: (1) constructing hyperspectral datasets of mural paintings; (2) proposing a multi-scale residual spectral-spatial feature extraction module based on a 3D CNN (Convolutional Neural Network) to better capture multiscale information and improve performance on small-sample hyperspectral datasets; (3) proposing an Enhanced Residual Attention Module (ERAM) to address the feature redundancy problem, enhance the network's feature discrimination ability, and further improve abnormal area detection accuracy. The experimental results show that the AUC (Area Under Curve), specificity, and accuracy of the proposed algorithm reach 85.42%, 88.84%, and 87.65%, respectively, on this dataset, representing improvements of 3.07%, 1.11% and 2.68% over the SSRN algorithm and demonstrating the effectiveness of this method for mural anomaly detection.
Funding: the National Natural Science Foundation of China under contract Nos. 61890964 and 42206177; the Joint Funds of the National Natural Science Foundation of China under contract No. U1906217.
Abstract: Marine oil spill emulsions are difficult to recover, and their damage to the environment is not easy to eliminate. The use of remote sensing to accurately identify oil spill emulsions is highly important for the protection of marine environments. However, the spectrum of oil emulsions changes with water content. Hyperspectral remote sensing and deep learning can use spectral and spatial information to identify different types of oil emulsions. Nonetheless, hyperspectral data can also cause information redundancy, reducing classification accuracy and efficiency, and even overfitting in machine learning models. To address these problems, an oil emulsion deep-learning identification model with spatial-spectral feature fusion is established, and feature bands that can distinguish between crude oil, seawater, water-in-oil emulsion (WO), and oil-in-water emulsion (OW) are filtered based on a standard deviation threshold–mutual information method. Using oil spill airborne hyperspectral data, we conducted identification experiments on oil emulsions in different background waters and under different spatial and temporal conditions, analyzed the transferability of the model, and explored the effects of feature band selection and spectral resolution on the identification of oil emulsions. The results show the following. (1) The standard deviation–mutual information feature selection method is able to effectively extract feature bands that distinguish between WO, OW, oil slick, and seawater. The number of bands was reduced from 224 to 134 after feature selection on the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) data and from 126 to 100 on the S185 data. (2) With feature selection, the overall accuracy and Kappa of the identification results for the training area are 91.80% and 0.86, respectively, improved by 2.62% and 0.04, and the overall accuracy and Kappa for the migration area are 86.53% and 0.80, respectively, improved by 3.45% and 0.05. (3) The oil emulsion identification model has a certain degree of transferability and can effectively identify oil spill emulsions in AVIRIS data at different times and locations, with an overall accuracy of more than 80%, a Kappa coefficient of more than 0.7, and an F1 score of 0.75 or more for each category. (4) As the spectral resolution decreases, the model yields different degrees of misclassification for areas with a mixed distribution of oil slick and seawater or a mixed distribution of WO and OW. Based on the above experimental results, we demonstrate that the oil emulsion identification model with spatial-spectral feature fusion achieves a high accuracy in identifying oil emulsions using airborne hyperspectral data and can be applied to images under different spatial and temporal conditions. Furthermore, we elucidate the impact of factors such as spectral resolution and background water bodies on the identification process. These findings provide a new reference for future endeavors in automated marine oil spill detection.
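A standard-deviation-threshold plus mutual-information band filter of the kind this abstract describes can be sketched as follows. This is one plausible interpretation on synthetic data, not the authors' implementation: the thresholds, class labels, and injected band behaviour are all invented for illustration.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))        # 300 pixels x 40 synthetic bands
y = rng.integers(0, 4, 300)           # 4 classes: oil slick, seawater, WO, OW
X[:, 10] += y                         # band 10 made class-informative
X[:, 20] *= 0.01                      # band 20 made near-constant (low std)

std_ok = X.std(axis=0) > 0.1          # step 1: drop low-variance bands
mi = mutual_info_classif(X, y, random_state=0)
keep = np.where(std_ok & (mi > mi.mean()))[0]  # step 2: keep class-informative bands
```

The two criteria are complementary: the standard deviation threshold removes bands that carry no signal at all, while mutual information ranks the remaining bands by how much they say about the emulsion class.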
基金Supported by the National Natural Science Foundation of China(Nos.42206177,U1906217)the Shandong Provincial Natural Science Foundation(No.ZR2022QD075)the Fundamental Research Funds for the Central Universities(No.21CX06057A)。
Abstract: The accurate identification of marine oil spills and their emulsions is of great significance for emergency response to oil spill pollution. The selection of characteristic bands with strong separability helps to enable rapid computation on aircraft or in orbit, which improves the timeliness of oil spill emergency monitoring. At the same time, the combination of spectral and spatial features can improve the accuracy of oil spill monitoring. Two ground-based experiments were designed to collect measured airborne hyperspectral data of crude oil and its emulsions, for which the multiscale superpixel-level group clustering framework (MSGCF) was used to select spectral feature bands with strong separability. In addition, the double-branch dual-attention (DBDA) model was applied to identify crude oil and its emulsions. Compared with recognition based on the original hyperspectral images, using the feature bands determined by MSGCF improved the recognition accuracy and greatly shortened the running time. Moreover, the characteristic bands for quantifying the volume concentration of water-in-oil emulsions were determined, and a quantitative inversion model was constructed and applied to the AVIRIS image of the Deepwater Horizon oil spill event in 2010. This study verified the effectiveness of feature bands in identifying oil spill pollution types and quantifying concentration, laying a foundation for the rapid identification and quantification of marine oil spills and their emulsions on aircraft or in orbit.
Funding: Natural Science Foundation of Shandong Province, China (Grant No. ZR202111230202).
Abstract: Hyperspectral image classification stands as a pivotal task within the field of remote sensing, yet achieving high-precision classification remains a significant challenge. In response, a Spectral Convolutional Neural Network model based on the Adaptive Fick's Law Algorithm (AFLA-SCNN) is proposed. The Adaptive Fick's Law Algorithm (AFLA) is a novel metaheuristic algorithm introduced herein, encompassing three new strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. With the adaptive weight factor, the algorithm can adjust the weights according to the change in the number of iterations to improve performance. Gaussian mutation helps the algorithm avoid falling into local optima and improves its search ability. The probability update strategy improves the exploitability and adaptability of the algorithm. Within the AFLA-SCNN model, AFLA is employed to optimize two hyperparameters of the SCNN model, namely "numEpochs" and "miniBatchSize", to attain their optimal values. AFLA's performance is initially validated across 28 functions in 10D, 30D, and 50D for CEC2013 and 29 functions in 10D, 30D, and 50D for CEC2017. Experimental results indicate AFLA's marked performance superiority over nine other prominent optimization algorithms. Subsequently, the AFLA-SCNN model was compared with the Spectral Convolutional Neural Network model based on the Fick's Law Algorithm (FLA-SCNN), the Spectral Convolutional Neural Network model based on Harris Hawks Optimization (HHO-SCNN), the Spectral Convolutional Neural Network model based on Differential Evolution (DE-SCNN), the Spectral Convolutional Neural Network (SCNN) model, and the Support Vector Machines (SVM) model on the Indian Pines and Pavia University datasets. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of Accuracy, Precision, Recall, and F1-score on Indian Pines and Pavia University; its Accuracy reached 99.875% on Indian Pines and 98.022% on Pavia University. In conclusion, the proposed AFLA-SCNN model significantly enhances the precision of hyperspectral image classification.
Funding: supported in part by the National Natural Science Foundation of China (62276192).
Abstract: Hyperspectral image super-resolution, which refers to reconstructing a high-resolution hyperspectral image from a low-resolution observation, aims to improve the spatial resolution of the hyperspectral image, which benefits subsequent applications. The development of deep learning has promoted significant progress in hyperspectral image super-resolution, and the powerful expressive capability of deep neural networks makes the predicted results more reliable. Recently, several of the latest deep learning technologies have driven rapid growth in hyperspectral image super-resolution methods. However, a comprehensive review and analysis of the latest deep learning methods from the hyperspectral image super-resolution perspective is absent. To this end, in this survey, we first introduce the concept of hyperspectral image super-resolution and classify the methods by whether they use auxiliary information. Then, we review learning-based methods in three categories: single hyperspectral image super-resolution, panchromatic-based hyperspectral image super-resolution, and multispectral-based hyperspectral image super-resolution. Subsequently, we summarize the commonly used hyperspectral datasets, and evaluations of representative methods in the three categories are performed qualitatively and quantitatively. Moreover, we briefly introduce several typical applications of hyperspectral image super-resolution, including ground object classification, urban change detection, and ecosystem monitoring. Finally, we provide conclusions and discuss the challenges of existing learning-based methods, looking forward to potential future research directions.
Abstract: With a limited number of labeled samples, hyperspectral image (HSI) classification is a difficult problem in current research. The graph neural network (GNN) has emerged as an approach to semi-supervised classification, and the application of GNNs to hyperspectral images has attracted much attention. However, existing GNN-based methods mainly use a single graph neural network or graph filter to extract HSI features, which does not take full advantage of various graph neural networks (graph filters). Moreover, traditional GNNs suffer from oversmoothing. To alleviate these shortcomings, we introduce a deep hybrid multi-graph neural network (DHMG), where two different graph filters, i.e., the spectral filter and the autoregressive moving average (ARMA) filter, are utilized in two branches. The former can well extract the spectral features of the nodes, and the latter has a good suppression effect on graph noise. The network realizes information interaction between the two branches and takes good advantage of different graph filters. In addition, to address the problem of oversmoothing, a dense network is proposed, in which local graph features are preserved. The dense structure satisfies the needs of different classification targets presenting different features. Finally, we introduce a GraphSAGE-based network to refine the graph features produced by the deep hybrid network. Extensive experiments on three public HSI datasets strongly demonstrate that DHMG dramatically outperforms state-of-the-art models.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 42101403) and the National Key Research and Development Program of China (Grant No. 2017YFD0600404).
Abstract: Although airborne hyperspectral data with detailed spatial and spectral information has demonstrated significant potential for tree species classification, it has not been widely used over large areas. A comprehensive process based on multi-flightline airborne hyperspectral data is lacking for large forested areas affected by both bidirectional reflectance distribution function (BRDF) effects and cloud shadow contamination. In this study, hyperspectral data were collected over the Mengjiagang Forest Farm in Northeast China in the summer of 2017 using the Chinese Academy of Forestry's LiDAR, CCD, and hyperspectral systems (CAF-LiCHy). After BRDF correction and cloud shadow detection, a tree species classification workflow was developed for sunlit and cloud-shaded forest areas with input features of minimum noise fraction reduced bands, spectral vegetation indices, and texture information. Results indicate that BRDF-corrected sunlit hyperspectral data can provide stable and high classification accuracy given representative training data. Cloud-shaded pixels also have good spectral separability for species classification. The red-edge spectral information and ratio-based spectral indices with high importance scores are recommended as input features for species classification under varying light conditions. According to the classification accuracies assessed with field survey data at multiple spatial scales, species classification over an extensive forest area using airborne hyperspectral data under various illuminations can be successfully carried out with an effective radiometric consistency process and feature selection strategy.
Funding: Supported by the Light of West China program (No. XAB2022YN10), the Shaanxi Key Research and Development Plan (No. 2018ZDXM-SF-093), and the Shaanxi Province Key Industrial Innovation Chain (Nos. S2022-YF-ZDCXL-ZDLGY-0093 and 2023-ZDLGY-45).
Abstract: The Sanxingdui cultural relics are precious cultural heritage of humanity, with high historical, scientific, cultural, artistic, and research value. However, mainstream analytical methods involve contact and are destructive, which is unfavorable to the protection of cultural relics. To improve the accuracy of cultural relic extraction, positioning, and analysis, a segmentation algorithm for Sanxingdui cultural relics based on a spatial-spectral integrated network is proposed with the support of hyperspectral techniques. Firstly, a region stitching algorithm based on the relative positions of the hyperspectrally collected data is proposed to improve stitching efficiency. Secondly, given the strength of traditional HRNet (High-Resolution Net) models in high-resolution data processing, a spatial attention mechanism is put forward to obtain spatial-dimension information. Thirdly, in view of the strength of 3D networks in spectral information acquisition, a pyramid 3D residual network model is proposed to obtain internal spectral-dimension information. Fourthly, four fusion methods at the data and decision levels are presented to achieve cultural relic labeling. As the experimental results show, the proposed network, adopting an integrated data-level and decision-level method, achieves an optimal average identification accuracy of 0.84, realizes shallow coverage in cultural relic labeling, and effectively supports the mining and protection of cultural relics.
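The decision-level fusion mentioned above can be sketched minimally: class-probability maps from a hypothetical 2D (spatial) branch and 3D (spectral) branch are combined by weighted averaging before taking the per-pixel argmax. The branch outputs and the equal weighting are invented toy values, not the paper's actual fusion rule.

```python
import numpy as np

def decision_fusion(prob_2d, prob_3d, w=0.5):
    """Decision-level fusion: average the per-class probabilities of
    two branches, then pick the most probable class per pixel."""
    fused = w * prob_2d + (1 - w) * prob_3d
    return fused.argmax(axis=-1)

# Two pixels, three classes; each row sums to 1 (toy values).
prob_2d = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.8, 0.1]])
prob_3d = np.array([[0.6, 0.3, 0.1],
                    [0.2, 0.2, 0.6]])
labels = decision_fusion(prob_2d, prob_3d)  # → array([0, 1])
```

Data-level fusion, by contrast, would combine the branch feature maps before the final classifier rather than combining their predictions.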
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62076199, in part by the Open Research Fund of the Beijing Key Laboratory of Big Data Technology for Food Safety under Grant BTBD-2020KF08, Beijing Technology and Business University, and in part by the Key R&D Project of Shaanxi Province under Grants 2021GY-027 and 2022ZDLGY01-03.
Abstract: Recently, autoencoder (AE)-based methods have played a critical role in the hyperspectral anomaly detection domain. However, due to the strong generalization capacity of AEs, abnormal samples are usually reconstructed well along with the normal background samples. Thus, in order to separate anomalies from the background by calculating reconstruction errors, it is greatly beneficial to reduce the AE's capability to reconstruct abnormal samples while maintaining its background reconstruction performance. A memory-augmented autoencoder for hyperspectral anomaly detection (MAENet) is proposed to address this challenging problem. Specifically, the proposed MAENet mainly consists of an encoder, a memory module, and a decoder. First, the encoder transforms the original hyperspectral data into a low-dimensional latent representation. Then, the latent representation is used to retrieve the most relevant items in the memory matrix, and the retrieved items replace the latent representation from the encoder. Finally, the decoder reconstructs the input hyperspectral data from the retrieved memory items. With this strategy, the background can still be reconstructed well while the abnormal samples cannot. Experiments conducted on five real hyperspectral anomaly data sets demonstrate the superiority of the proposed method.
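The memory-retrieval step described above can be sketched as soft attention over a learned memory matrix: the latent code is replaced by a convex combination of memory rows, weighted by a softmax over similarities. The memory contents and the dot-product similarity below are illustrative assumptions, not MAENet's exact formulation.

```python
import numpy as np

def memory_retrieve(z, memory, temperature=1.0):
    """Replace latent code z with a convex combination of memory items.
    Weights are a softmax over dot-product similarities, so the output
    always lies in the convex hull of the 'normal' prototypes; latents
    of anomalous pixels therefore reconstruct poorly downstream."""
    sims = memory @ z / temperature              # (n_items,)
    w = np.exp(sims - sims.max())                # numerically stable softmax
    w /= w.sum()
    return w @ memory                            # (latent_dim,)

# Toy memory with two orthogonal 'normal' prototypes.
memory = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
z = np.array([10.0, 0.0])                        # latent closest to item 0
z_hat = memory_retrieve(z, memory)
```

Because the decoder only ever sees combinations of memory items, its reconstruction error becomes a usable anomaly score.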
Funding: Supported by the China National Key Research and Development Program (No. 2016YFD0700304), the Shandong Natural Science Foundation Youth Program (No. ZR2021QC216), and the Agricultural Scientific and Technological Innovation Project of the Shandong Academy of Agricultural Sciences (No. CXGC2023A34).
Abstract: The peach aphid is a common pest that is hard to detect. This study employs hyperspectral imaging technology to identify early damage in green cabbage caused by the peach aphid. Through principal component transformation and multiple linear regression analysis, the correlation between spectral characteristics and infestation stage is analyzed. Then, four characteristic wavelength selection methods are compared, and an optimal subset of characteristic wavelengths is determined as model input. One linear and two nonlinear modelling algorithms are compared. Finally, the support vector machine (SVM) model based on the characteristic wavelengths selected by multi-cluster feature selection (MCFS) achieves the highest identification accuracy, 98.97%. These results indicate that hyperspectral imaging technology has the ability to identify early peach aphid infestation stages on green cabbages.
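As a sketch of the principal-component transformation step mentioned above, the snippet below centres a synthetic pixel-by-band matrix and projects it onto its top singular directions. The random data are stand-ins; the study applies this kind of transform to hyperspectral images of cabbage leaves.

```python
import numpy as np

def principal_components(spectra, n_components=3):
    """Principal-component transform: centre each band, then project
    pixels onto the top right-singular vectors of the centred matrix."""
    centred = spectra - spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T

rng = np.random.default_rng(1)
spectra = rng.normal(size=(50, 120))   # 50 pixels x 120 bands (synthetic)
scores = principal_components(spectra)
```

The resulting low-dimensional scores (or a small set of selected wavelengths) can then feed a classifier such as the SVM used in the study.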
Funding: Supported by the National Natural Science Foundation of China under Grants 62161160336 and 42030111.
Abstract: Recently, deep learning has achieved considerable results in hyperspectral image (HSI) classification. However, most available deep networks require ample and authentic samples to train the models well, which is expensive and inefficient in practical tasks. Existing few-shot learning (FSL) methods generally ignore the potential relationships between non-local spatial samples that would better represent the underlying features of HSI. To solve these issues, a novel deep transformer and few-shot learning (DT-FSL) classification framework is proposed, attempting to realize fine-grained classification of HSI with only a few instances. Specifically, spatial attention and spectral query modules are introduced to overcome the constraint of the convolution kernel and to consider the information between long-distance (non-local) samples, reducing class uncertainty. Next, the network is trained with episodes and task-based learning strategies to learn a metric space, which can continuously enhance its modelling capability. Furthermore, the developed approach combines the advantages of domain adaptation to reduce the variation in inter-domain distribution and realize distribution alignment. On three publicly available HSI datasets, extensive experiments indicate that the proposed DT-FSL yields better results than state-of-the-art algorithms.
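The episodic, metric-space training mentioned above can be illustrated with a prototypical-network-style episode: class prototypes are the mean support embeddings, and queries are labelled by their nearest prototype. The 2-D toy embeddings are invented, and nearest-prototype matching is a common FSL baseline rather than DT-FSL's exact classification head.

```python
import numpy as np

def prototype_classify(support, support_labels, queries):
    """One few-shot episode: build a prototype (mean embedding) per
    class from the support set, then assign each query embedding to
    the class of its nearest prototype (squared Euclidean distance)."""
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in classes])
    d2 = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]

# Toy 2-way, 2-shot episode with 2-D embeddings.
support = np.array([[0.0, 0.0], [0.2, 0.1],    # class 0
                    [5.0, 5.0], [4.9, 5.1]])   # class 1
support_labels = np.array([0, 0, 1, 1])
queries = np.array([[0.1, 0.0], [5.2, 4.8]])
pred = prototype_classify(support, support_labels, queries)  # → [0, 1]
```

Training over many randomly sampled episodes of this form is what shapes the metric space the network learns.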