Hyperspectral imaging instruments can capture detailed spatial information and rich spectral signatures of observed scenes. The abundant spatial information and spectral signatures of hyperspectral images (HSIs) offer great potential for detecting and classifying fine crops. Accurate classification of crop types using hyperspectral remote sensing imagery (RSI) has become an indispensable application in the agricultural domain and is significant for crop yield prediction and growth monitoring. Among deep learning (DL) techniques, the convolutional neural network (CNN) is the leading method for classifying HSIs because of its strong local contextual modeling ability, which enables joint spectral and spatial feature extraction. This article designs a Hybrid Multi-Strategy Aquila Optimization with Deep Learning-Driven Crop Type Classification (HMAODL-CTC) algorithm for HSIs. The proposed HMAODL-CTC model mainly intends to categorize different types of crops in HSIs. To accomplish this, the model initially carries out image preprocessing to improve image quality. In addition, it develops a dilated convolutional neural network (CNN) for feature extraction, with the HMAO algorithm used for hyperparameter tuning of the dilated CNN. Finally, the model uses an extreme learning machine (ELM) for crop type classification. A comprehensive set of simulations was performed to illustrate the enhanced performance of the HMAODL-CTC algorithm, and extensive comparison studies reported its improved performance over other methods.
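The abstract pairs a dilated CNN feature extractor with an extreme learning machine (ELM) classifier. As a minimal illustrative sketch (not the authors' implementation), an ELM fixes random hidden weights and solves only the output weights in closed form by least squares; the toy data, activation, and layer size below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=64):
    """Fit a minimal extreme learning machine: random hidden layer
    (never trained), output weights solved by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # closed-form output layer
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem: label by the sign of the coordinate sum.
X = rng.normal(size=(200, 5))
Y = np.eye(2)[(X.sum(axis=1) > 0).astype(int)]    # one-hot labels
W, b, beta = elm_fit(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
acc = (pred == Y.argmax(axis=1)).mean()
```

Because only a linear system is solved, training is fast, which is one common motivation for using an ELM as the final classification stage.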
Most methods for classifying hyperspectral data consider only the local spatial relationship among samples, ignoring the important non-local topological relationship, even though the non-local topological relationship better represents the structure of hyperspectral data. This paper proposes a deep learning model called the topology and semantic information fusion classification network (TSFnet), which incorporates a topology structure and a semantic information transmission network to accurately classify traditional Chinese medicine in hyperspectral images. TSFnet uses a convolutional neural network (CNN) to extract features and a graph convolutional network (GCN) to capture potential topological relationships among different types of Chinese herbal medicines. The results show that TSFnet outperforms other state-of-the-art deep learning classification algorithms in two different scenarios of herbal medicine datasets. Additionally, the proposed TSFnet model is lightweight and can be easily deployed for mobile herbal medicine classification.
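As a hedged sketch of the GCN building block this abstract relies on (the exact TSFnet architecture is not specified here), one standard graph-convolution layer propagates features over the symmetrically normalized, self-looped adjacency matrix; the tiny graph, features, and identity weights below are illustrative:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: symmetric normalisation of the
    self-looped adjacency, then a linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Tiny 3-node path graph, 2-d node features, identity weight matrix.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[1., 0.],
              [0., 1.],
              [1., 0.]])
out = gcn_layer(A, H, np.eye(2))
```

Each output row mixes a node's features with its neighbors', which is how topological relationships among samples enter the representation.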
Spectral unmixing helps to identify the different components present in spectral mixtures, which occur in the uppermost layer of an area owing to the low spatial resolution of hyperspectral images. Most spectral unmixing methods are globally based and do not consider the spectral variability among endmembers caused by illumination, atmospheric, and environmental conditions. Endmember bundle extraction plays a major role in overcoming these limitations and leads to more accurate abundance fractions. Accordingly, a two-stage approach is proposed to extract endmembers through endmember bundles in hyperspectral images. In the first stage, a divide-and-conquer method is applied to subset images containing only the non-redundant bands to extract endmembers using the Vertex Component Analysis (VCA) and N-FINDR algorithms. In the second stage, a fuzzy rule-based inference system utilizing spectral matching parameters is proposed to categorize endmembers, and the endmember with the minimum error is chosen as the final endmember in each category. The proposed method is simple and automatically accounts for endmember variability in hyperspectral images. Its efficiency is evaluated using two real hyperspectral datasets, with the average spectral angle and abundance angle used as performance measures.
Hyperspectral imaging is gaining a significant role in agricultural remote sensing applications. Its data unit is the hyperspectral cube, which holds spatial information in two dimensions and the spectral band information of each pixel in the third dimension. The classification accuracy of hyperspectral images (HSI) increases significantly when both spatial and spectral features are employed. For this work, the data were acquired using an airborne hyperspectral imager that collected HSI in the visible and near-infrared (VNIR) range of 400 to 1000 nm within 180 spectral bands. The dataset covers nine different crops on agricultural land, with a spectral resolution of 3.3 nm for each pixel. The data were corrected for geometric distortions and stored with class labels and global localization annotations from the inertial navigation system. In this study, a unique pixel-based approach was designed to improve crop classification accuracy by using edge-preserving features (EPF) and principal component analysis (PCA) in conjunction. The preliminary processing generated a high-dimensional EPF stack by applying edge-preserving filters to the acquired HSI. In the second step, this high-dimensional stack was treated with PCA for dimensionality reduction without losing significant spectral information. The resultant feature space (PCA-EPF) demonstrated enhanced class separability for improved crop classification with reduced dimensionality and computational cost. A support vector machine classifier was employed for multiclass classification of the target crops using PCA-EPF. Classification performance was measured in terms of individual class accuracy, overall accuracy, average accuracy, and the Cohen kappa factor. The proposed scheme achieved greater than 90% on all performance evaluation metrics. PCA-EPF proved to be an effective attribute for crop classification using hyperspectral imaging in the VNIR range, and the proposed scheme is well suited for practical applications of crop and landfill estimation using agricultural remote sensing methods.
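One of the reported metrics, the Cohen kappa factor, can be computed from a confusion matrix as observed agreement corrected for chance agreement; a small self-contained sketch (the matrices are toy examples, not the study's results):

```python
def cohen_kappa(conf):
    """Cohen's kappa from a confusion matrix (rows: true class,
    cols: predicted class): (p_o - p_e) / (1 - p_e)."""
    k = len(conf)
    n = sum(sum(row) for row in conf)
    po = sum(conf[i][i] for i in range(k)) / n            # observed agreement
    pe = sum(sum(conf[i]) * sum(r[i] for r in conf)       # chance agreement
             for i in range(k)) / n ** 2
    return (po - pe) / (1 - pe)

perfect = [[10, 0], [0, 10]]   # perfect agreement
good = [[9, 1], [2, 8]]        # strongly diagonal
```

Perfect agreement gives kappa = 1, while purely chance-level agreement gives kappa = 0.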
Deep learning (DL) has shown superior performance in various computer vision tasks in recent years. As a simple and effective DL model, the autoencoder (AE) is popularly used to decompose hyperspectral images (HSIs) owing to its powerful feature extraction and data reconstruction abilities. However, most existing AE-based unmixing algorithms ignore the spatial information of HSIs. To solve this problem, a hypergraph-regularized deep autoencoder (HGAE) is proposed for unmixing. First, the traditional AE architecture is adapted into an unsupervised unmixing framework. Second, hypergraph learning is employed to reformulate the loss function, which expresses the high-order similarity among locally neighboring pixels and promotes the consistency of their abundances. Moreover, the L_(1/2) norm is used to enhance abundance sparsity. Finally, experiments on simulated data, real hyperspectral remote sensing images, and textile cloth images verify that the proposed method performs better than several state-of-the-art unmixing algorithms.
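The L_(1/2) norm mentioned above promotes sparsity because abundance vectors concentrated on few endmembers score lower than evenly spread ones; a tiny sketch of the penalty term (the example vectors are illustrative):

```python
def half_norm(a):
    """L_{1/2} sparsity term: sum of square roots of the magnitudes.
    Smaller values indicate sparser abundance vectors."""
    return sum(abs(x) ** 0.5 for x in a)

sparse = [1.0, 0.0, 0.0, 0.0]      # all weight on one endmember
spread = [0.25, 0.25, 0.25, 0.25]  # same total weight, spread evenly
```

Both vectors sum to 1, but the concentrated one is penalized far less, which is why adding this term to the loss pushes abundances toward sparsity.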
To compress hyperspectral images, a low-complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme with Gray code is proposed. Unlike most existing DSC schemes, which apply a transform in the spatial domain, the proposed algorithm applies the transform in the spectral domain. A set-partitioning-based approach reorganizes the DCT coefficients into a wavelet-like tree structure and extracts the sign, refinement, and significance bitplanes, and the extracted refinement bits are Gray encoded. Because of the dependency along the line dimension of hyperspectral images, a low-density parity-check (LDPC)-based Slepian-Wolf coder is adopted to implement the DSC strategy. Experimental results on an airborne visible/infrared imaging spectrometer (AVIRIS) dataset show that the proposed paradigm achieves up to 6 dB improvement over DSC-based coders that apply the transform in the spatial domain, with significantly reduced computational complexity and memory storage.
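Gray coding of the refinement bits is effective because binary-reflected Gray codes change exactly one bit between adjacent integers, so small numeric differences between correlated coefficients produce small bitplane differences for the Slepian-Wolf coder; a minimal sketch:

```python
def gray_encode(n):
    """Binary-reflected Gray code of a non-negative integer:
    adjacent integers differ in exactly one bit."""
    return n ^ (n >> 1)

codes = [gray_encode(i) for i in range(8)]
```

For 0..7 this yields 0, 1, 3, 2, 6, 7, 5, 4, and every consecutive pair differs by a single bit flip.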
Superpixel segmentation has been widely applied in many computer vision and image processing applications, and numerous superpixel segmentation algorithms have been proposed in recent years. However, most current algorithms are designed for natural images with little noise corruption. To apply superpixel algorithms to hyperspectral images, which are often seriously polluted by noise, we propose a noise-resistant superpixel segmentation (NRSS) algorithm in this paper. In the proposed NRSS, the spectral signatures are first transformed into the frequency domain to enhance noise robustness; then, two widely used spectral similarity measures, the spectral angle mapper (SAM) and spectral information divergence (SID), are combined to enhance the discriminability of the spectral similarity; finally, superpixels are generated with the proposed frequency-based spectral similarity. Both qualitative and quantitative experimental results demonstrate the effectiveness of the proposed algorithm on hyperspectral images with various noise levels. Moreover, NRSS is compared with the most widely used superpixel segmentation algorithm, simple linear iterative clustering (SLIC), and the comparison results prove its superiority.
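Of the two similarity measures combined in NRSS, spectral information divergence (SID) compares spectra as probability distributions via a symmetrized Kullback-Leibler divergence; a short self-contained sketch (the example spectrum is illustrative):

```python
import math

def sid(x, y, eps=1e-12):
    """Spectral information divergence: symmetric KL divergence
    between spectra normalised to probability distributions."""
    sx, sy = sum(x), sum(y)
    p = [v / sx for v in x]
    q = [v / sy for v in y]
    kl = lambda a, b: sum(ai * math.log((ai + eps) / (bi + eps))
                          for ai, bi in zip(a, b))
    return kl(p, q) + kl(q, p)

s = [0.2, 0.4, 0.6, 0.3]
```

SID is zero for identical spectra and strictly positive otherwise, which complements the purely geometric angle measured by SAM.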
A crucial task in hyperspectral image (HSI) classification is exploring effective methodologies to fully exploit the 3-D spatial and spectral data delivered by the data cube. For image classification, the 3-D data are considered in the stages of preprocessing, sample selection, classifier design, postprocessing, and accuracy estimation, and a perspective on future research directions for 3-D spatial and spectral approaches is given. In recent years, sparse representation has been acknowledged as a powerful classification tool for effectively handling challenging problems and has been widely exploited in several image processing tasks. Encouraged by these successful applications, sparse representation (SR) has also been introduced to classify HSIs and has demonstrated good performance. This paper offers an overview of the literature on HSI classification technology and its applications. The assessment is centered on a methodical review of SR- and support vector machine (SVM)-based HSI classification works and compares numerous approaches. We form an outline that splits the corresponding methods into spectral-feature networks and spectral-spatial feature networks to systematically analyze recent achievements in HSI classification. Furthermore, considering that available training samples in the remote sensing field are generally quite limited while training neural networks (NNs) requires a large number of samples, we include certain approaches to increase classification performance, which can provide strategies for future studies on this issue. Finally, several representative neural-learning-based classification methods are evaluated on real HSIs in our experiments.
A distinguishing characteristic of normal and cancer cells is the difference in their nuclear chromatin content and distribution, which can be revealed by the transmission spectra of nuclei stained with a pH-sensitive stain. Here, we used hematoxylin-eosin (HE) to stain hepatic carcinoma tissues and obtained spectral-spatial data from their nuclei using hyperspectral microscopy. The transmission spectra of the nuclei were then used to train a support vector machine (SVM) model for cell classification. In particular, we found that the chromatin distribution in cancer cells is more uniform, so the correlation coefficients between the spectra at different points in their nuclei are higher, and we exploited this feature to improve the SVM model. The sensitivity and specificity for the identification of cancer cells were thereby increased to 99% and 98%, respectively. We also designed an image-processing method for extracting information from cell nuclei to automate the identification process.
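The improved SVM model exploits correlation coefficients between spectra taken at different points of a nucleus; a minimal sketch of the Pearson coefficient on two toy spectra (not the study's data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two spectra."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

a = [0.1, 0.5, 0.4, 0.8]
b = [0.2, 0.6, 0.5, 0.9]   # same spectral shape, offset baseline
```

Two spectra with the same shape but shifted baselines correlate near 1, matching the observation that uniformly distributed chromatin yields highly similar spectra across the nucleus.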
Spectral unmixing is essential for the exploitation of remotely sensed hyperspectral image (HSI) data. It amounts to identifying a set of pure spectral signatures, called endmembers, and their corresponding fractional abundances for every pixel in the HSI. This paper aims to unmix hyperspectral data using a minimum-volume simplex analysis method. The optimization problem is solved by a sequence of small, quadratically constrained subproblems, and in the final step the hard constraint on the abundance fractions is replaced with a hinge-type loss function that accounts for outliers and noise. Existing algorithms focus on estimating the number of endmembers (EMs) in a scene, identifying the spectral signatures of the EMs, and estimating the fractional abundance of each EM in each pixel; nevertheless, only a few algorithms perform all these stages of the hyperspectral unmixing chain. Therefore, the Non-negative Minimum Volume Factorization (NMVF) algorithm is further extended by fusing it with robust collaborative non-negative matrix factorization, which aims to perform all three steps of the unmixing chain for hyperspectral images. The major contributions of this article are as follows: (A) minimum-volume simplex analysis of hyperspectral images with unsupervised linear unmixing is employed; (B) the simplex analysis method is initialized with an inflated version of the simplex delivered by vertex component analysis (VCA); (C) the inflation factor is chosen carefully, inactivating a large majority of the constraints relating to the source abundance fractions, which speeds up the algorithm; (D) the final step makes the simplex analysis method robust to outliers and noise by replacing the hard positivity constraint on the abundances with a hinge-type soft constraint, preserving good-quality local minima; and (E) a matrix factorization method capable of performing the three major phases of the hyperspectral unmixing chain is applied. The anticipated approach can find application in scenarios where the endmembers are known in advance; however, it assumes that the endmember count corresponds to an overestimated value. The proposed method differs from other conventional methods in that it begins with an overestimated number of endmembers and removes the redundant ones by means of collaborative regularization. As demonstrated by the experimental results, the proposed approach yields competitive performance comparable with widely used methods.
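The hinge-type soft constraint in step (D) can be sketched as a penalty that charges only negative abundance entries and leaves feasible ones untouched; the weight and example vectors below are assumptions, not the paper's settings:

```python
def hinge_penalty(abundances, weight=1.0):
    """Soft positivity constraint: instead of enforcing a >= 0 exactly,
    penalise each negative entry with weight * max(0, -a)."""
    return weight * sum(max(0.0, -a) for a in abundances)

clean = [0.7, 0.3, 0.0]      # feasible abundances: zero penalty
noisy = [0.8, 0.3, -0.1]     # slightly negative entry caused by noise
```

Unlike a hard constraint, the hinge lets mildly infeasible solutions survive with a small cost, which is what makes the method tolerant to outliers and noise.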
Nowadays, with the rapid development of quantitative remote sensing represented by high-resolution UAV hyperspectral remote sensing observation technology, higher requirements have been put forward for the rapid preprocessing and geometric correction accuracy of hyperspectral images. The optimal geometric correction model and parameter combination for UAV hyperspectral images need to be determined to reduce unnecessary time spent in preprocessing and to provide high-precision data support for the application of UAV hyperspectral images. In this study, the geometric correction accuracy was analyzed under various geometric correction models (affine transformation, local triangulation, polynomial, direct linear transformation, and rational function models) and resampling methods (nearest neighbor, bilinear interpolation, and cubic convolution). Furthermore, the distribution, number, and accuracy of ground control points (GCPs) were analyzed based on the control variable method. The results showed that the average geometric positioning error of UAV hyperspectral images (at 80 m altitude AGL) without geometric correction was as high as 3.4041 m (about 65 pixels). The optimal combination used a local triangulation model with the bilinear interpolation resampling method and 12 edge-middle distributed GCPs, reaching a correction accuracy of 0.0493 m (less than one pixel). This study provides a reference for the geometric correction of UAV hyperspectral images.
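The best-performing resampling method in this study, bilinear interpolation, takes a weighted average of the four pixels surrounding a fractional coordinate; a minimal sketch on a 2x2 toy grid (coordinates must lie strictly inside the last row and column):

```python
def bilinear(img, x, y):
    """Bilinear resampling of img (list of rows) at fractional (x, y):
    weighted average of the four surrounding pixels."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0][x0] +
            dx * (1 - dy) * img[y0][x0 + 1] +
            (1 - dx) * dy * img[y0 + 1][x0] +
            dx * dy * img[y0 + 1][x0 + 1])

grid = [[0.0, 10.0],
        [20.0, 30.0]]
center = bilinear(grid, 0.5, 0.5)   # equidistant from all four pixels
```

Unlike nearest-neighbor resampling, the result varies smoothly with position, which is why bilinear interpolation tends to preserve geometric accuracy better.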
Vegetation is crucial for wetland ecosystems, which are increasingly threatened by human activities and climate change. Combining satellite images and deep learning to classify marsh vegetation communities has faced great challenges because of coarse spatial resolution and limited spectral bands. This study proposes a method to classify marsh vegetation using multi-resolution multispectral and hyperspectral images, combining super-resolution techniques with a novel self-constructing graph attention neural network (SGA-Net) algorithm. The SGA-Net algorithm includes a decoding layer (SCE-Net) to precisely refine marsh vegetation classification in Honghe National Nature Reserve, Northeast China. The results indicated that hyperspectral reconstruction images based on the super-resolution convolutional neural network (SRCNN) obtained higher accuracy, with a peak signal-to-noise ratio (PSNR) of 28.87 and structural similarity (SSIM) of 0.76 in spatial quality, and a root mean squared error (RMSE) of 0.11 and R^(2) of 0.63 in spectral quality. The improvement in classification accuracy (MIoU) by the enhanced super-resolution generative adversarial network (ESRGAN) (6.19%) was greater than that of SRCNN (4.33%) and the super-resolution generative adversarial network (SRGAN) (3.64%). In most classification schemes, SGA-Net outperformed the DeepLabV3+ and SegFormer algorithms for marsh vegetation and achieved the highest F1-score (78.47%). This study demonstrated that the collaborative use of super-resolution reconstruction and deep learning is an effective approach for marsh vegetation mapping.
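The spatial-quality metric reported above, PSNR, is a standard log-scale function of the mean squared error against a reference image; a short sketch (the toy pixel values are illustrative, not the study's data):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    its reconstruction; higher means better spatial fidelity."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return 10 * math.log10(peak ** 2 / mse)

ref = [100.0, 120.0, 130.0, 140.0]
rec = [101.0, 119.0, 131.0, 139.0]   # off by one grey level everywhere
```

An average error of one grey level out of 255 already gives roughly 48 dB, which puts the reported 28.87 dB for full hyperspectral reconstruction in context.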
Hyperspectral remote sensing/imaging spectroscopy is a novel approach that acquires a spectrum for each of a large array of spatial positions, so that many spectral wavelengths are used to form coherent images. Hyperspectral remote sensing involves the acquisition of digital images in several narrow, contiguous spectral bands throughout the visible, thermal infrared (TIR), near-infrared (NIR), and mid-infrared (MIR) regions of the electromagnetic spectrum. For application to agricultural regions, remote sensing approaches are studied and executed for the benefit of continuous and quantitative monitoring. In particular, hyperspectral images (HSI) are considered precise for agriculture because they can offer chemical and physical data on vegetation. With this motivation, this article presents a novel Hurricane Optimization Algorithm with Deep Transfer Learning Driven Crop Classification (HOADTL-CC) model for hyperspectral remote sensing images. The presented HOADTL-CC model focuses on the identification and categorization of crops in hyperspectral remote sensing images. To accomplish this, it involves the design of HOA with a capsule network (CapsNet) model for generating a set of useful feature vectors. Besides, an Elman neural network (ENN) model is applied to assign proper class labels to the input HSI. Finally, the glowworm swarm optimization (GSO) algorithm is exploited to fine-tune the ENN parameters involved in this model. The experimental results of the HOADTL-CC method were tested with the help of a benchmark dataset and assessed under distinct aspects. Extensive comparative studies showed the enhanced performance of the HOADTL-CC model over recent approaches, with a maximum accuracy of 99.51%.
Convolutional neural networks (CNNs) have gained popularity for categorizing hyperspectral (HS) images due to their ability to capture representations of spatial-spectral features. However, their ability to model relationships between data samples is limited. Graph convolutional networks (GCNs) have been introduced as an alternative, as they are effective in representing and analyzing irregular data beyond grid-sampling constraints. While GCNs have traditionally been computationally intensive, minibatch GCNs (miniGCNs) enable minibatch training of large-scale GCNs. We have improved classification performance by using miniGCNs to infer out-of-sample data without retraining the network. In addition, fusing the capabilities of CNNs and GCNs through concatenative fusion has been shown to improve performance compared to using CNNs or GCNs individually. Finally, a support vector machine (SVM) is employed instead of softmax in the classification stage. These techniques were tested on two HS datasets and achieved an average accuracy of 92.80% on the Indian Pines dataset, demonstrating the effectiveness of miniGCNs and fusion strategies.
The adulteration concentration of palm kernel oil (PKO) in virgin coconut oil (VCO) was quantified using near-infrared (NIR) hyperspectral imaging. Nowadays, some VCO is adulterated with lower-priced PKO to reduce production costs, which diminishes the quality of the VCO. This study used NIR hyperspectral imaging in the wavelength region of 900-1,650 nm to create a quantitative model for the detection of PKO contamination (0-100%) in VCO and to develop predictive mapping. The prediction equation for the adulteration of VCO with PKO was constructed using the partial least squares regression method. The best predictive model was pre-processed using the standard normal variate method and achieved a coefficient of determination of prediction of 0.991, a root mean square error of prediction of 2.93%, and a residual prediction deviation of 10.37. The results showed that this model can be applied to quantify the adulteration concentration of PKO in VCO. A prediction map of the adulteration concentration of VCO with PKO was created from the calibration model, showing color levels according to the adulteration concentration in the range of 0-100%. NIR hyperspectral imaging can thus clearly quantify the adulteration of VCO, with a color-level map providing a quick, accurate, and non-destructive detection method.
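The standard normal variate (SNV) preprocessing that produced the best model centers and scales each spectrum independently, removing baseline and multiplicative scatter effects; a minimal sketch (the spectrum is a toy example):

```python
import math

def snv(spectrum):
    """Standard normal variate: transform one spectrum to zero mean
    and unit (sample) standard deviation."""
    n = len(spectrum)
    m = sum(spectrum) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in spectrum) / (n - 1))
    return [(v - m) / sd for v in spectrum]

s = snv([0.12, 0.30, 0.55, 0.41, 0.22])
mean = sum(s) / len(s)
```

After SNV, every spectrum is directly comparable regardless of its original intensity level, which typically stabilizes the subsequent partial least squares regression.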
The deterioration of unstable rock mass has raised interest in evaluating rock mass quality. However, the traditional evaluation method for the geological strength index (GSI) primarily emphasizes the rock structure and the characteristics of discontinuities; it ignores the influence of mineral composition and shows a deficiency in assessing the integrity coefficient. In this context, hyperspectral imaging and digital panoramic borehole camera technologies are applied to analyze the mineral content and integrity of rock mass. Based on the carbonate mineral content and the fissure area ratio, the strength reduction factor and integrity coefficient are calculated to improve the GSI evaluation method. According to the results of mineral classification and fissure identification, the strength reduction factor and integrity coefficient increase with the depth of the rock mass. The rock mass GSI calculated by the improved method is mainly concentrated between 40 and 60, which is close to the calculation results of the traditional method. The GSI error rates obtained by the two methods are mostly less than 10%, indicating the rationality of the coupled hyperspectral-digital borehole image evaluation method. Moreover, the sensitivity of the fissure area ratio (Sr) to GSI is greater than that of the strength reduction factor (a), which means the proposed GSI is suitable for rocks with significant fissure development. The improved method reduces the influence of subjective factors and provides a reliable index for the deterioration evaluation of rock mass.
The accurate identification of marine oil spills and their emulsions is of great significance for emergency response to oil spill pollution. The selection of characteristic bands with strong separability helps to enable rapid calculation of data on aircraft or in orbit, which improves the timeliness of oil spill emergency monitoring; at the same time, the combination of spectral and spatial features can improve the accuracy of oil spill monitoring. Two ground-based experiments were designed to collect measured airborne hyperspectral data of crude oil and its emulsions, for which the multiscale superpixel-level group clustering framework (MSGCF) was used to select spectral feature bands with strong separability. In addition, the double-branch dual-attention (DBDA) model was applied to identify crude oil and its emulsions. Compared with recognition results based on the original hyperspectral images, using the feature bands determined by MSGCF improved the recognition accuracy and greatly shortened the running time. Moreover, the characteristic bands for quantifying the volume concentration of water-in-oil emulsions were determined, and a quantitative inversion model was constructed and applied to the AVIRIS image of the Deepwater Horizon oil spill event in 2010. This study verified the effectiveness of feature bands in identifying oil spill pollution types and quantifying concentration, laying a foundation for rapid identification and quantification of marine oil spills and their emulsions on aircraft or in orbit.
Hyperspectral image classification is a pivotal task in remote sensing, yet achieving high-precision classification remains a significant challenge. In response, a spectral convolutional neural network model based on an Adaptive Fick's Law Algorithm (AFLA-SCNN) is proposed. The Adaptive Fick's Law Algorithm (AFLA) is a novel metaheuristic algorithm introduced herein, encompassing three new strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. With the adaptive weight factor, the algorithm can adjust the weights according to the change in the number of iterations to improve performance. Gaussian mutation helps the algorithm avoid falling into local optima and improves its search ability. The probability update strategy improves the exploitability and adaptability of the algorithm. Within the AFLA-SCNN model, AFLA is employed to optimize two hyperparameters of the SCNN model, "numEpochs" and "miniBatchSize", to attain their optimal values. AFLA's performance is first validated on 28 functions in 10D, 30D, and 50D for CEC2013 and 29 functions in 10D, 30D, and 50D for CEC2017; the experimental results indicate AFLA's marked superiority over nine other prominent optimization algorithms. Subsequently, the AFLA-SCNN model was compared with SCNN models based on the Fick's Law Algorithm (FLA-SCNN), Harris Hawks Optimization (HHO-SCNN), and Differential Evolution (DE-SCNN), as well as the plain Spectral Convolutional Neural Network (SCNN) and Support Vector Machine (SVM) models, using the Indian Pines and Pavia University datasets. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of accuracy, precision, recall, and F1-score on both datasets; its accuracy reached 99.875% on Indian Pines and 98.022% on Pavia University. In conclusion, the proposed AFLA-SCNN model significantly enhances the precision of hyperspectral image classification.
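Of AFLA's three strategies, Gaussian mutation is the most mechanical to sketch: perturb some coordinates of a candidate solution with zero-mean Gaussian noise and clip back into the search bounds. The mutation rate, sigma, and bounds below are assumptions for illustration, not the paper's settings:

```python
import random

random.seed(0)

def gaussian_mutation(position, sigma=0.1, rate=0.5, bounds=(0.0, 1.0)):
    """Perturb each coordinate with N(0, sigma^2) at a given rate and
    clip back into the bounds, helping the optimiser leave local optima."""
    lo, hi = bounds
    out = []
    for x in position:
        if random.random() < rate:
            x += random.gauss(0.0, sigma)   # Gaussian perturbation
        out.append(min(hi, max(lo, x)))     # clip into the search space
    return out

mutated = gaussian_mutation([0.2, 0.8, 0.5, 0.0])
```

Because the noise is zero-mean, the mutated candidate stays near the original on average while occasionally jumping far enough to escape a local optimum.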
Disjoint sampling is critical for the rigorous and unbiased evaluation of state-of-the-art (SOTA) models such as Attention Graph and Vision Transformer. When training, validation, and test sets overlap or share data, a bias is introduced that inflates performance metrics and prevents accurate assessment of a model's true ability to generalize to new examples. This paper presents an innovative disjoint sampling approach for training SOTA models for hyperspectral image classification (HSIC). By separating training, validation, and test data without overlap, the proposed method enables a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate that the approach significantly improves a model's generalization compared to alternatives that include training and validation data in the test data (a trivial approach that tests the model on the entire hyperspectral dataset to generate the ground-truth maps; this produces higher accuracy but ultimately results in low generalization performance). Disjoint sampling eliminates data leakage between sets and provides reliable metrics for benchmarking progress in HSIC, and it is critical for advancing SOTA models and their real-world application to large-scale land mapping with hyperspectral sensors. Overall, with the disjoint test set, the deep models achieve 96.36% accuracy on Indian Pines, 99.73% on Pavia University, 98.29% on University of Houston, 99.43% on Botswana, and 99.88% on Salinas.
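A disjoint split can be sketched as a seeded shuffle of the labeled sample indices followed by non-overlapping slicing; the 60/20/20 proportions below are an assumption for illustration, not necessarily the paper's:

```python
import random

def disjoint_split(indices, train_pct=60, val_pct=20, seed=0):
    """Split labeled sample indices into disjoint train/val/test sets,
    so no pixel appears in more than one set (no data leakage)."""
    idx = list(indices)
    random.Random(seed).shuffle(idx)          # reproducible shuffle
    n = len(idx)
    a = n * train_pct // 100                  # end of the training slice
    b = n * (train_pct + val_pct) // 100      # end of the validation slice
    return set(idx[:a]), set(idx[a:b]), set(idx[b:])

tr, va, te = disjoint_split(range(100))
```

Because the three slices never overlap, test accuracy measured on `te` genuinely reflects performance on pixels unseen during training or validation.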
Graph learning, when used as a semi-supervised learning (SSL) method, performs well for classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local maximum constraint for the active learning acquisition function that determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739) while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to further increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improvements in accuracy, our approach can greatly reduce the number of labeled pixels needed to achieve the same level of accuracy as randomly selected labeled pixels.
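The graph local-maximum constraint described above can be sketched in a few lines: a pixel joins the batch only if its acquisition value is at least as large as that of every graph neighbor. This is a minimal NumPy illustration with a toy path graph and made-up acquisition scores, not the paper's implementation.

```python
import numpy as np

def local_max_batch(acq, neighbors):
    """Select the batch of nodes whose acquisition value is >= all of
    their graph neighbors' values (graph local-maximum constraint)."""
    batch = []
    for i, val in enumerate(acq):
        if all(val >= acq[j] for j in neighbors[i]):
            batch.append(i)
    return batch

# toy path graph 0-1-2-3-4 with illustrative acquisition scores
acq = np.array([0.3, 0.9, 0.4, 0.2, 0.7])
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
batch = local_max_batch(acq, nbrs)
```

Because every selected pixel dominates its neighborhood, the batch is automatically spread out over the graph, which is what makes batch selection cheap compared with re-running the acquisition function after each single label.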
Funding: This work was supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R384), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Hyperspectral imaging instruments can capture detailed spatial information and rich spectral signatures of observed scenes. The abundant spatial information and spectral signatures of hyperspectral images (HSIs) present great potential for detecting and classifying fine crops. The accurate classification of crop types using hyperspectral remote sensing imaging (RSI) has become an indispensable application in the agricultural domain and is significant for the prediction and growth monitoring of crop yields. Among deep learning (DL) techniques, the Convolutional Neural Network (CNN) is among the most effective for classifying HSIs owing to its strong local contextual modeling ability, which enables spectral and spatial feature extraction. This article designs a Hybrid Multi-Strategy Aquila Optimization with Deep Learning-Driven Crop Type Classification (HMAODL-CTC) algorithm on HSI. The proposed HMAODL-CTC model mainly intends to categorize different types of crops on HSI. To accomplish this, the presented HMAODL-CTC model initially carries out image preprocessing to improve image quality. In addition, the presented HMAODL-CTC model develops a dilated CNN for feature extraction. For hyperparameter tuning of the dilated CNN model, the HMAO algorithm is utilized. Eventually, the presented HMAODL-CTC model uses an extreme learning machine (ELM) model for crop type classification. A comprehensive set of simulations was performed to illustrate the enhanced performance of the presented HMAODL-CTC algorithm. Extensive comparison studies reported the improved performance of the presented HMAODL-CTC algorithm over other compared methods.
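The ELM classifier used in the final stage admits a particularly compact implementation: the input-to-hidden weights are random and fixed, and only the output weights are solved in closed form by least squares. The sketch below illustrates that idea on toy data; the layer size, activation, and data shapes are assumptions, not the paper's settings.

```python
import numpy as np

def elm_train(X, Y, n_hidden=64, seed=0):
    """Basic extreme learning machine: random, fixed hidden layer;
    output weights solved in closed form via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases (never trained)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy demo: 30 feature vectors, 3 classes with one-hot targets
X = np.random.default_rng(1).normal(size=(30, 10))
Y = np.eye(3)[np.arange(30) % 3]
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```

The single least-squares solve is why ELMs train orders of magnitude faster than backpropagated networks, which makes them attractive as the final classification head after a heavier feature extractor.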
Funding: Supported by the National Natural Science Foundation of China (No. 62001023) and the Beijing Natural Science Foundation (No. JQ20021).
Abstract: Most methods for classifying hyperspectral data consider only the local spatial relationship among samples, ignoring the important non-local topological relationship. However, the non-local topological relationship is better at representing the structure of hyperspectral data. This paper proposes a deep learning model called the Topology and Semantic information Fusion classification network (TSFnet), which incorporates a topology structure and a semantic information transmission network to accurately classify traditional Chinese medicine in hyperspectral images. TSFnet uses a convolutional neural network (CNN) to extract features and a graph convolution network (GCN) to capture potential topological relationships among different types of Chinese herbal medicines. The results show that TSFnet outperforms other state-of-the-art deep learning classification algorithms in two different scenarios of herbal medicine datasets. Additionally, the proposed TSFnet model is lightweight and can be easily deployed for mobile herbal medicine classification.
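The GCN component can be illustrated with a single propagation layer in the common Kipf-Welling form: add self-loops, symmetrically normalize the adjacency matrix, then apply a linear map and nonlinearity. This is a generic NumPy sketch of that layer type, not TSFnet's exact architecture; the toy graph and weights are assumptions.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: D^{-1/2} (A + I) D^{-1/2} X W, then ReLU."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0.0)     # propagate, transform, ReLU

# toy graph: 4 nodes in a path, 5-dim input features, 2-dim output
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.ones((4, 5))
W = np.full((5, 2), 0.1)
H = gcn_layer(A, X, W)
```

Each output row mixes a node's own features with its neighbors', which is how a GCN captures the non-local topological relationships the abstract emphasizes.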
Abstract: Spectral unmixing helps to identify the different components present in the spectral mixtures that occur in the uppermost layer of an area owing to the low spatial resolution of hyperspectral images. Most spectral unmixing methods are globally based and do not consider the spectral variability among endmembers that occurs due to illumination, atmospheric, and environmental conditions. Here, endmember bundle extraction plays a major role in overcoming the above-mentioned limitations, leading to more accurate abundance fractions. Accordingly, a two-stage approach is proposed to extract endmembers through endmember bundles in hyperspectral images. The divide-and-conquer method is applied as the first step on subset images with only the non-redundant bands to extract endmembers using the Vertex Component Analysis (VCA) and N-FINDR algorithms. A fuzzy rule-based inference system utilizing spectral matching parameters is proposed in the second step to categorize endmembers. The endmember with the minimum error is chosen as the final endmember in each specific category. The proposed method is simple and automatically considers endmember variability in hyperspectral images. Its efficiency is evaluated using two real hyperspectral datasets. The average spectral angle and abundance angle are used to analyze the performance measures.
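One spectral matching parameter commonly used in such systems (and in the evaluation above) is the spectral angle, which compares spectral shape independently of brightness. A minimal NumPy version, with toy spectra as illustration:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; 0 means identical shape."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards rounding error

s1 = np.array([0.2, 0.4, 0.6])
s2 = 2.0 * s1                      # same shape, different brightness
s3 = np.array([0.6, 0.4, 0.2])     # different shape
```

Because the angle ignores overall magnitude, two spectra of the same material under different illumination score as nearly identical, which is exactly the endmember-variability behavior bundle extraction relies on.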
Abstract: Hyperspectral imaging is gaining a significant role in agricultural remote sensing applications. Its data unit is the hyperspectral cube, which holds spatial information in two dimensions and the spectral band information of each pixel in the third dimension. The classification accuracy of hyperspectral images (HSI) increases significantly when both spatial and spectral features are employed. For this work, the data was acquired using an airborne hyperspectral imager system that collected HSI in the visible and near-infrared (VNIR) range of 400 to 1000 nm wavelength within 180 spectral bands. The dataset covers nine different crops on agricultural land, with a spectral resolution of 3.3 nm for each pixel. The data was cleaned of geometric distortions and stored with the class labels and annotations of global localization using the inertial navigation system. In this study, a unique pixel-based approach was designed to improve the crop classification accuracy by using edge-preserving features (EPF) and principal component analysis (PCA) in conjunction. The preliminary processing generated a high-dimensional EPF stack by applying edge-preserving filters on the acquired HSI. In the second step, this high-dimensional stack was treated with PCA for dimensionality reduction without losing significant spectral information. The resultant feature space (PCA-EPF) demonstrated enhanced class separability for improved crop classification with reduced dimensionality and computational cost. The support vector machines classifier was employed for multiclass classification of target crops using PCA-EPF. The classification performance was measured in terms of individual class accuracy, overall accuracy, average accuracy, and the Cohen kappa factor. The proposed scheme achieved greater than 90% results for all the performance evaluation metrics. PCA-EPF proved to be an effective attribute for crop classification using hyperspectral imaging in the VNIR range. The proposed scheme is well-suited for practical applications of crop and landfill estimation using agricultural remote sensing methods.
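The PCA step in the pipeline above, projecting a high-dimensional EPF stack onto a few principal components, can be sketched via SVD. The sizes below are illustrative, not the paper's.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project samples onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # scores in the reduced space

# toy EPF-style stack: 100 pixels x 20 filtered-band features -> 5 components
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
Z = pca_reduce(X, 5)
```

Because SVD returns components in order of decreasing explained variance, the first retained column always carries at least as much variance as the last, which is the sense in which dimensionality is reduced "without losing significant spectral information."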
Funding: National Natural Science Foundation of China (No. 62001098); Fundamental Research Funds for the Central Universities of the Ministry of Education of China (No. 2232020D-33).
Abstract: Deep learning (DL) has shown superior performance in dealing with various computer vision tasks in recent years. As a simple and effective DL model, the autoencoder (AE) is popularly used to decompose hyperspectral images (HSIs) due to its powerful ability for feature extraction and data reconstruction. However, most existing AE-based unmixing algorithms ignore the spatial information of HSIs. To solve this problem, a hypergraph-regularized deep autoencoder (HGAE) is proposed for unmixing. Firstly, the traditional AE architecture is specifically improved into an unsupervised unmixing framework. Secondly, hypergraph learning is employed to reformulate the loss function, which facilitates the expression of high-order similarity among locally neighboring pixels and promotes the consistency of their abundances. Moreover, the L_(1/2) norm is further used to enhance abundance sparsity. Finally, experiments on simulated data, real hyperspectral remote sensing images, and textile cloth images verify that the proposed method performs better than several state-of-the-art unmixing algorithms.
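The role of the L_(1/2) term can be illustrated with a simplified objective: squared reconstruction error plus an L_(1/2) sparsity penalty on the abundances (the hypergraph regularizer is omitted here for brevity). This is a hedged sketch; the weighting parameter `lam` and the toy sizes are assumptions, not the paper's formulation.

```python
import numpy as np

def unmixing_loss(Y, E, A, lam=0.1, eps=1e-12):
    """Simplified unmixing objective: ||Y - E A||_F^2 plus an L_1/2
    penalty sum(sqrt(A)) that favors sparse abundance vectors."""
    recon = np.linalg.norm(Y - E @ A) ** 2
    sparsity = np.sum(np.sqrt(np.clip(A, 0.0, None) + eps))  # eps keeps sqrt finite at 0
    return float(recon + lam * sparsity)

# toy check: 2 bands, 2 endmembers, 1 pixel with exact reconstruction
E = np.array([[1.0, 0.0], [0.0, 1.0]])   # endmember signatures (columns)
A = np.array([[0.25], [0.75]])           # abundances, sum to one
Y = E @ A
loss_no_penalty = unmixing_loss(Y, E, A, lam=0.0)
```

Since sqrt grows steeply near zero, the L_(1/2) penalty pushes small abundances all the way to zero more aggressively than an L_1 penalty would, which is why it is used to enhance sparsity.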
Funding: Supported by the National Natural Science Foundation of China (60702012) and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
Abstract: To compress hyperspectral images, a low-complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme with Gray code is proposed. Unlike most existing DSC schemes, which apply a transform in the spatial domain, the proposed algorithm applies the transform in the spectral domain. A set-partitioning-based approach is applied to reorganize the DCT coefficients into a wavelet-like tree structure and extract the sign, refinement, and significance bitplanes. The extracted refinement bits are Gray encoded. Because of the dependency along the line dimension of hyperspectral images, a low-density parity-check (LDPC)-based Slepian-Wolf coder is adopted to implement the DSC strategy. Experimental results on the airborne visible/infrared imaging spectrometer (AVIRIS) dataset show that the proposed paradigm achieves up to 6 dB improvement over DSC-based coders that apply the transform in the spatial domain, with significantly reduced computational complexity and memory storage.
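The Gray coding of refinement bits relies on the standard binary-reflected Gray code, under which consecutive integer values differ in exactly one bit; this keeps bitplane statistics friendly to the Slepian-Wolf coder. A minimal encoder/decoder pair:

```python
def gray_encode(n):
    """Binary-reflected Gray code: adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code by cascading XORs of right shifts."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, 5 (binary 101) encodes to 7 (binary 111), and decoding recovers 5; the single-bit-change property holds for every adjacent pair of codewords.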
Funding: This work was supported in part by the National Natural Science Foundation of China under Grant No. 61801222 and No. 61501522, and in part by the Project of Shandong Province Higher Educational Science and Technology Program under Grant No. KJ2018BAN047.
Abstract: Superpixel segmentation has been widely applied in many computer vision and image processing applications. In recent years, a number of superpixel segmentation algorithms have been proposed. However, most current algorithms are designed for natural images with little noise corruption. In order to apply superpixel algorithms to hyperspectral images, which are often seriously polluted by noise, we propose a noise-resistant superpixel segmentation (NRSS) algorithm in this paper. In the proposed NRSS, the spectral signatures are first transformed into the frequency domain to enhance noise robustness; then, two widely used spectral similarity measures, the spectral angle mapper (SAM) and the spectral information divergence (SID), are combined to enhance the discriminability of the spectral similarity; finally, the superpixels are generated with the proposed frequency-based spectral similarity. Both qualitative and quantitative experimental results demonstrate the effectiveness of the proposed algorithm when dealing with hyperspectral images at various noise levels. Moreover, the proposed NRSS is compared with the most widely used superpixel segmentation algorithm, simple linear iterative clustering (SLIC), and the comparison results prove the superiority of the proposed algorithm.
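The SID measure combined with SAM in NRSS treats each normalized spectrum as a probability distribution and sums the two directed KL divergences. A minimal NumPy version of the standard definition (the `eps` guard is an implementation assumption for numerical safety):

```python
import numpy as np

def sid(a, b, eps=1e-12):
    """Spectral information divergence: symmetric KL divergence between
    spectra normalized to probability distributions."""
    p = a / (a.sum() + eps)
    q = b / (b.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps)))
                 + np.sum(q * np.log((q + eps) / (p + eps))))

a = np.array([0.2, 0.4, 0.6])
b = np.array([0.6, 0.4, 0.2])
```

Unlike SAM, which measures only the angle between spectra, SID is sensitive to how the energy is distributed across bands, so combining the two gives a more discriminative similarity.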
Abstract: A crucial task in hyperspectral image (HSI) classification is exploring effective methodologies to fully exploit the 3-D and spectral data delivered by the data cube. For image classification, the 3-D data is considered in the phases of preprocessing, sample selection, classifier design, post-processing, and accuracy estimation. Finally, a viewpoint on future research directions for advancing 3-D and spectral approaches is given. In recent years, sparse representation has been acknowledged as a powerful classification tool that effectively handles challenging problems and has been extensively exploited in several image processing tasks. Encouraged by these successful applications, sparse representation (SR) has likewise been introduced to categorize HSIs and has demonstrated good performance. This research paper offers an overview of the literature on HSI classification technology and its applications. The assessment is centered on a methodical review of SR- and support vector machine (SVM)-based HSI classification works and compares numerous approaches to this problem. We form an outline that splits the corresponding mechanisms into spectral feature networks and spectral-spatial feature networks to systematically analyze recent accomplishments in HSI classification. Furthermore, considering that the training samples available in the remote sensing field are generally quite limited while training neural networks (NNs) requires a large number of samples, we include certain approaches to increase classification performance, which can provide strategies for future studies on this issue. Finally, several representative neural-learning-based classification approaches are evaluated on real HSIs in our experiments.
Funding: This paper was supported by the National Key Research and Development Program of China (2017YFB1104500); the National Natural Science Foundation of China (61605062, 61735005, and 11704155); the Science and Technology Planning Project of Guangdong Province (2018B030323017); the Research Project of the Scientific Research Cultivation and Innovation Fund of Jinan University (11617329); and the Guangzhou Science and Technology Project (201903010042 and 201904010294).
Abstract: A distinguishing characteristic of normal and cancer cells is the difference in their nuclear chromatin content and distribution. This difference can be revealed by the transmission spectra of nuclei stained with a pH-sensitive stain. Here, we used hematoxylin-eosin (HE) to stain hepatic carcinoma tissues and obtained spectral-spatial data from their nuclei using hyperspectral microscopy. The transmission spectra of the nuclei were then used to train a support vector machine (SVM) model for cell classification. In particular, we found that the chromatin distribution in cancer cells is more uniform, because of which the correlation coefficients for the spectra at different points in their nuclei are higher. Consequently, we exploited this feature to improve the SVM model. The sensitivity and specificity for the identification of cancer cells could be increased to 99% and 98%, respectively. We also designed an image-processing method for the extraction of information from cell nuclei to automate the identification process.
Funding: This research was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.
Abstract: Spectral unmixing is essential for the exploitation of remotely sensed hyperspectral image (HSI) data. It amounts to identifying a collection of pure spectral signatures, called endmembers, and their corresponding fractional abundances for every pixel in the HSI. This paper aims to unmix hyperspectral data using minimum-volume simplex analysis. The optimization problem is solved via a sequence of small, quadratically constrained subproblems. The hard positivity constraint on the abundance fractions in the final step is then replaced with a hinge-type loss function that accounts for outliers and noise. Existing algorithms focus on estimating the number of endmembers (EMs) in a scene, determining the spectral signatures of the EMs, and assessing the fractional abundance of every EM in every pixel. Nevertheless, only a few algorithms perform all of these stages of the hyperspectral unmixing process. Therefore, the Non-negative Minimum Volume Factorization (NMVF) algorithm is further extended by fusing it with a robust collaborative non-negative matrix factorization that performs all three unmixing chain steps for hyperspectral images. The major contributions of this article are as follows: (A) minimum-volume simplex analysis with unsupervised linear unmixing is employed for hyperspectral images. (B) The simplex analysis is initialized with an inflated version of the simplex delivered by vertex component analysis (VCA). (C) The inflating factor is chosen carefully, inactivating the large majority of the constraints relating to the abundance fractions, which speeds up the algorithm. (D) The final step makes the simplex analysis robust to outliers and noise by replacing the hard abundance-positivity constraint with a hinge-type soft constraint, preserving good-quality local minima. (E) A matrix factorization method is applied that is capable of performing the three major phases of the hyperspectral unmixing sequence. The anticipated approach can find application in scenarios where the endmembers are known in advance; however, it assumes that the endmember count is set to an overestimated value. The proposed method differs from other conventional methods in that it begins with an overestimate of the number of endmembers and then removes the redundant endmembers by means of collaborative regularization. As demonstrated by the experimental results, the proposed approach yields competitive performance comparable with widely used methods.
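Step (D), replacing the hard abundance-positivity constraint with a hinge-type soft constraint, amounts to a penalty that charges only for negative abundance entries. A minimal sketch of that penalty (the weight `lam` is an assumed parameter, not from the paper):

```python
import numpy as np

def hinge_negativity_penalty(A, lam=1.0):
    """Hinge-type soft positivity constraint: charge lam * max(-a, 0)
    for each abundance entry a, so only negative entries are penalized."""
    return float(lam * np.sum(np.maximum(-A, 0.0)))

A_ok = np.array([0.2, 0.8])      # feasible abundances: zero penalty
A_bad = np.array([-0.5, 1.5])    # one negative entry: penalized by 0.5
```

Because the penalty is zero on the feasible set and grows only linearly outside it, slightly negative abundances caused by noise or outliers are tolerated instead of making the problem infeasible.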
Funding: Financially supported by the National Natural Science Foundation of China (Grant No. 32260388), the Major Scientific and Technological Projects of the XPCC (Grant No. 2017DB005), and Technology Development Guided by the Central Government (Grant No. 201610011).
Abstract: Nowadays, with the rapid development of quantitative remote sensing, represented by high-resolution UAV hyperspectral remote sensing observation technology, higher requirements have been put forward for the rapid preprocessing and geometric correction accuracy of hyperspectral images. The optimal geometric correction model and parameter combination for UAV hyperspectral images need to be determined to reduce unnecessary time spent in preprocessing and to provide high-precision data support for the application of UAV hyperspectral images. In this study, the geometric correction accuracy was analyzed under various geometric correction models (including the affine transformation model, local triangulation model, polynomial model, direct linear transformation model, and rational function model) and resampling methods (including the nearest neighbor, bilinear interpolation, and cubic convolution resampling methods). Furthermore, the distribution, number, and accuracy of control points were analyzed based on the control variable method, and precise ground control points (GCPs) were analyzed. The results showed that the average geometric positioning error of UAV hyperspectral images (at 80 m altitude AGL) without geometric correction was as high as 3.4041 m (about 65 pixels). The optimal geometric correction for the UAV hyperspectral image (at 80 m altitude AGL) used a local triangulation model, adopted a bilinear interpolation resampling method, and selected 12 edge-middle distributed GCPs. The correction accuracy could reach 0.0493 m (less than one pixel). This study provides a reference for the geometric correction of UAV hyperspectral images.
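The bilinear interpolation resampling that performed best above can be sketched as a weighted average of the four pixels surrounding a fractional coordinate; the tiny test image is an illustration, not the paper's data.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a 2-D image at fractional coordinates (x = column, y = row)
    by weighting the four surrounding pixels by their distances."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)          # clamp at the right/bottom edge
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
```

Sampling the exact center of this 2x2 image averages all four pixels, while integer coordinates reproduce the original values, which is why bilinear resampling smooths without shifting the grid.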
Funding: Supported by the National Natural Science Foundation of China [Grant Numbers 21976043, 42122009]; the Guangxi Science & Technology Program [Grant Number GuikeAD20159037]; the 'Ba Gui Scholars' program of the provincial government of Guangxi; the Guilin University of Technology Foundation [Grant Number GUTQDJJ2017096]; and the Innovation Project of Guangxi Graduate Education [Grant Number YCSW2022328].
Abstract: Vegetation is crucial for wetland ecosystems. Human activities and climate change are increasingly threatening wetland ecosystems. Combining satellite images and deep learning for classifying marsh vegetation communities has faced great challenges because of coarse spatial resolution and limited spectral bands. This study aimed to propose a method to classify marsh vegetation using multi-resolution multispectral and hyperspectral images, combining super-resolution techniques and a novel self-constructing graph attention neural network (SGA-Net) algorithm. The SGA-Net algorithm includes a decoding layer (SCE-Net) to precisely refine the marsh vegetation classification in Honghe National Nature Reserve, Northeast China. The results indicated that the hyperspectral reconstruction images based on the super-resolution convolutional neural network (SRCNN) obtained higher accuracy, with a peak signal-to-noise ratio (PSNR) of 28.87 and structural similarity (SSIM) of 0.76 in spatial quality, and a root mean squared error (RMSE) of 0.11 and R^(2) of 0.63 in spectral quality. The improvement in classification accuracy (MIoU) by the enhanced super-resolution generative adversarial network (ESRGAN) (6.19%) was greater than that of SRCNN (4.33%) and the super-resolution generative adversarial network (SRGAN) (3.64%). In most classification schemes, SGA-Net outperformed the DeepLabV3+ and SegFormer algorithms for marsh vegetation and achieved the highest F1-score (78.47%). This study demonstrated that the collaborative use of super-resolution reconstruction and deep learning is an effective approach for marsh vegetation mapping.
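The PSNR metric used above to judge the spatial quality of the reconstructed images is a simple function of the mean squared error relative to the data range:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                      # identical images
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

Each 10x reduction in MSE adds 10 dB, so a PSNR of 28.87 (as reported for SRCNN) corresponds to an MSE a bit above one thousandth of the squared data range.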
Funding: The Deanship of Scientific Research at King Khalid University funded this work through the Large Groups Project under Grant Number (25/43); Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R303), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: 22UQU4340237DSR28.
Abstract: Hyperspectral remote sensing/imaging spectroscopy is a novel approach to acquiring a spectrum from every location in a large array of spatial positions, so that several spectral wavelengths are utilized for making coherent images. Hyperspectral remote sensing involves the acquisition of digital images in several narrow, contiguous spectral bands throughout the visible, Thermal Infrared (TIR), Near Infrared (NIR), and Mid-Infrared (MIR) regions of the electromagnetic spectrum. For application to agricultural regions, remote sensing approaches are studied and executed for the benefit of continuous and quantitative monitoring. In particular, hyperspectral images (HSI) are considered precise for agriculture as they can offer chemical and physical data on vegetation. With this motivation, this article presents a novel Hurricane Optimization Algorithm with Deep Transfer Learning Driven Crop Classification (HOADTL-CC) model on hyperspectral remote sensing images. The presented HOADTL-CC model focuses on the identification and categorization of crops in hyperspectral remote sensing images. To accomplish this, the presented HOADTL-CC model involves the design of HOA with a capsule network (CapsNet) model for generating a set of useful feature vectors. Besides, an Elman neural network (ENN) model is applied to assign proper class labels to the input HSI. Finally, the glowworm swarm optimization (GSO) algorithm is exploited to fine-tune the ENN parameters involved in this article. The experimental results of the HOADTL-CC method are tested with the help of a benchmark dataset, and the results are assessed under distinct aspects. Extensive comparative studies show the enhanced performance of the HOADTL-CC model over recent approaches, with a maximum accuracy of 99.51%.
Funding: Supported by the research start-up fund for high-level talents of Fuzhou University of International Studies and Trade [Grant No. FWKQJ202006] and the 2022 Guiding Project of the Fujian Science and Technology Department [Grant No. 2022H0026].
Abstract: Convolutional neural networks (CNNs) have gained popularity for categorizing hyperspectral (HS) images due to their ability to capture representations of spatial-spectral features. However, their ability to model relationships within the data is limited. Graph convolutional networks (GCNs) have been introduced as an alternative, as they are effective at representing and analyzing irregular data beyond grid sampling constraints. While GCNs have traditionally been computationally intensive, minibatch GCNs (miniGCNs) enable minibatch training of large-scale GCNs. We have improved the classification performance by using miniGCNs to infer out-of-sample data without retraining the network. In addition, fusing the capabilities of CNNs and GCNs through concatenative fusion has been shown to improve performance compared to using CNNs or GCNs individually. Finally, a support vector machine (SVM) is employed instead of softmax in the classification stage. These techniques were tested on two HS datasets and achieved an average accuracy of 92.80 on the Indian Pines dataset, demonstrating the effectiveness of miniGCNs and the fusion strategies.
Funding: Supported by the Thailand Research Fund through the Royal Golden Jubilee Ph.D. Program (PHD/0225/2561) and the Faculty of Engineering, Kamphaeng Saen Campus, Kasetsart University, Thailand.
Abstract: The adulteration concentration of palm kernel oil (PKO) in virgin coconut oil (VCO) was quantified using near-infrared (NIR) hyperspectral imaging. Nowadays, some VCO is adulterated with lower-priced PKO to reduce production costs, which diminishes the quality of the VCO. This study used NIR hyperspectral imaging in the wavelength region 900-1,650 nm to create a quantitative model for the detection of PKO contaminants (0-100%) in VCO and to develop predictive mapping. The prediction equation for the adulteration of VCO with PKO was constructed using the partial least squares regression method. The best predictive model was pre-processed using the standard normal variate method; the coefficient of determination of prediction was 0.991, the root mean square error of prediction was 2.93%, and the residual prediction deviation was 10.37. The results showed that this model could be applied for quantifying the adulteration concentration of PKO in VCO. The predicted adulteration concentration mapping of VCO with PKO was created from a calibration model that showed the color level according to the adulteration concentration in the range of 0-100%. NIR hyperspectral imaging can clearly be used to quantify the adulteration of VCO with a color-level map that provides a quick, accurate, and non-destructive detection method.
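The standard normal variate pre-processing that gave the best model simply centers and scales each spectrum by its own mean and standard deviation, which suppresses multiplicative scatter effects before regression. A minimal NumPy version, with toy spectra as illustration:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: normalize each spectrum (row) by its
    own mean and standard deviation."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# two spectra that differ only by a multiplicative scatter factor
S = np.array([[1.0, 2.0, 3.0],
              [10.0, 20.0, 30.0]])
Z = snv(S)
```

After SNV the two rows become identical, showing that the transform removes the overall intensity differences that would otherwise confound the PLS calibration.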
Funding: Supported by the National Key R&D Program of China (Grant Nos. 2021YFB3901403 and 2023YFC3007203).
Abstract: The deterioration of unstable rock mass has raised interest in evaluating rock mass quality. However, the traditional evaluation method for the geological strength index (GSI) primarily emphasizes the rock structure and the characteristics of discontinuities. It ignores the influence of mineral composition and shows a deficiency in assessing the integrity coefficient. In this context, hyperspectral imaging and digital panoramic borehole camera technologies are applied to analyze the mineral content and integrity of the rock mass. Based on the carbonate mineral content and the fissure area ratio, the strength reduction factor and integrity coefficient are calculated to improve the GSI evaluation method. According to the results of mineral classification and fissure identification, the strength reduction factor and integrity coefficient increase with the depth of the rock mass. The rock mass GSI calculated by the improved method is mainly concentrated between 40 and 60, which is close to the calculation results of the traditional method. The GSI error rates obtained by the two methods are mostly less than 10%, indicating the rationality of the coupled hyperspectral-digital borehole image evaluation method. Moreover, the sensitivity of the fissure area ratio (Sr) to GSI is greater than that of the strength reduction factor (a), which means the proposed GSI is suitable for rocks with significant fissure development. The improved method reduces the influence of subjective factors and provides a reliable index for the deterioration evaluation of rock mass.
Funding: Supported by the National Natural Science Foundation of China (Nos. 42206177, U1906217), the Shandong Provincial Natural Science Foundation (No. ZR2022QD075), and the Fundamental Research Funds for the Central Universities (No. 21CX06057A).
Abstract: The accurate identification of marine oil spills and their emulsions is of great significance for emergency response to oil spill pollution. The selection of characteristic bands with strong separability helps to realize rapid calculation of data on aircraft or in orbit, which improves the timeliness of oil spill emergency monitoring. At the same time, the combination of spectral and spatial features can improve the accuracy of oil spill monitoring. Two ground-based experiments were designed to collect measured airborne hyperspectral data of crude oil and its emulsions, for which the multiscale superpixel-level group clustering framework (MSGCF) was used to select spectral feature bands with strong separability. In addition, the double-branch dual-attention (DBDA) model was applied to identify crude oil and its emulsions. Compared with the recognition results based on the original hyperspectral images, using the feature bands determined by MSGCF improved the recognition accuracy and greatly shortened the running time. Moreover, the characteristic bands for quantifying the volume concentration of water-in-oil emulsions were determined, and a quantitative inversion model was constructed and applied to the AVIRIS image of the Deepwater Horizon oil spill event in 2010. This study verified the effectiveness of feature bands in identifying oil spill pollution types and quantifying concentration, laying the foundation for rapid identification and quantification of marine oil spills and their emulsions on aircraft or in orbit.
Funding: Natural Science Foundation of Shandong Province, China (Grant No. ZR202111230202).
Abstract: Hyperspectral image classification stands as a pivotal task within the field of remote sensing, yet achieving high-precision classification remains a significant challenge. In response to this challenge, a Spectral Convolutional Neural Network model based on the Adaptive Fick's Law Algorithm (AFLA-SCNN) is proposed. The Adaptive Fick's Law Algorithm (AFLA) is a novel metaheuristic algorithm introduced herein, encompassing three new strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. With the adaptive weight factor, the algorithm can adjust its weights according to the change in the number of iterations, improving performance. Gaussian mutation helps the algorithm avoid falling into local optima and improves its search ability. The probability update strategy improves the exploitability and adaptability of the algorithm. Within the AFLA-SCNN model, AFLA is employed to optimize two hyperparameters of the SCNN model, namely "numEpochs" and "miniBatchSize", to attain their optimal values. AFLA's performance is first validated on 28 functions in 10D, 30D, and 50D for CEC2013 and 29 functions in 10D, 30D, and 50D for CEC2017. Experimental results indicate AFLA's marked superiority over nine other prominent optimization algorithms. Subsequently, the AFLA-SCNN model was compared with the Spectral Convolutional Neural Network model based on Fick's Law Algorithm (FLA-SCNN), the Spectral Convolutional Neural Network model based on Harris Hawks Optimization (HHO-SCNN), the Spectral Convolutional Neural Network model based on Differential Evolution (DE-SCNN), the Spectral Convolutional Neural Network (SCNN) model, and the Support Vector Machines (SVM) model on the Indian Pines and Pavia University datasets. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of Accuracy, Precision, Recall, and F1-score on both datasets: its Accuracy reached 99.875% on Indian Pines and 98.022% on Pavia University. In conclusion, the proposed AFLA-SCNN model significantly enhances the precision of hyperspectral image classification.
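The three AFLA strategies described above can be illustrated with a minimal Python sketch. The weight schedule, noise scale, and helper names below are illustrative assumptions, not the paper's exact update rules:

```python
import numpy as np

def adaptive_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Decay the weight linearly as iterations progress (assumed schedule),
    shifting the search from exploration toward exploitation."""
    return w_start - (w_start - w_end) * t / t_max

def gaussian_mutation(position, sigma=0.1, rng=None):
    """Perturb a candidate solution with Gaussian noise so the search can
    escape local optima (the paper's mutation step, sketched)."""
    rng = rng or np.random.default_rng()
    return position + rng.normal(0.0, sigma, size=position.shape)

# Example: a candidate encoding two SCNN hyperparameters
# (numEpochs, miniBatchSize), scaled to [0, 1].
rng = np.random.default_rng(0)
cand = np.array([0.5, 0.5])
w = adaptive_weight(t=50, t_max=100)        # mid-run weight: 0.65
mutated = gaussian_mutation(cand, sigma=0.05, rng=rng)
```

In a full optimizer, `w` would scale the candidate's movement step and the mutated candidate would be kept only if it improves the validation fitness of the SCNN.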
Funding: Researchers Supporting Project number (RSPD2024R848), King Saud University, Riyadh, Saudi Arabia.
Abstract: Disjoint sampling is critical for rigorous and unbiased evaluation of state-of-the-art (SOTA) models, e.g., Attention Graph and Vision Transformer. When the training, validation, and test sets overlap or share data, a bias is introduced that inflates performance metrics and prevents accurate assessment of a model's true ability to generalize to new examples. This paper presents an innovative disjoint sampling approach for training SOTA models for Hyperspectral Image Classification (HSIC). By separating training, validation, and test data without overlap, the proposed method facilitates a fairer evaluation of how well a model can classify pixels it was not exposed to during training or validation. Experiments demonstrate that the approach significantly improves a model's generalization compared to alternatives that include training and validation data in the test data (a trivial approach involves testing the model on the entire hyperspectral dataset to generate the ground-truth maps; this produces higher accuracy but ultimately results in low generalization performance). Disjoint sampling eliminates data leakage between sets, provides reliable metrics for benchmarking progress in HSIC, and is critical for advancing SOTA models and their real-world application to large-scale land mapping with hyperspectral sensors. Overall, with the disjoint test set, the deep models achieve 96.36% accuracy on Indian Pines data, 99.73% on Pavia University data, 98.29% on University of Houston data, 99.43% on Botswana data, and 99.88% on Salinas data.
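A per-class disjoint split of labeled pixels, in the spirit of the sampling scheme described above, might be sketched as follows. The function name and split fractions are illustrative assumptions; the paper's exact protocol is not reproduced:

```python
import numpy as np

def disjoint_split(labels, train_frac=0.1, val_frac=0.1, seed=0):
    """Split labeled pixel indices into non-overlapping train/val/test sets,
    stratified per class so every class appears in each set. No index is
    shared between sets, so no data leaks from training into evaluation."""
    rng = np.random.default_rng(seed)
    train, val, test = [], [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        n_tr = max(1, int(len(idx) * train_frac))
        n_va = max(1, int(len(idx) * val_frac))
        train.extend(idx[:n_tr])
        val.extend(idx[n_tr:n_tr + n_va])
        test.extend(idx[n_tr + n_va:])
    return np.array(train), np.array(val), np.array(test)

# Toy ground-truth vector with 3 classes of 100 pixels each.
labels = np.repeat([0, 1, 2], 100)
tr, va, te = disjoint_split(labels, train_frac=0.1, val_frac=0.1)
```

Each pixel index appears in exactly one of the three sets, which is the property the abstract argues is required for trustworthy benchmarking.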
Funding: supported by the UC-National Lab In-Residence Graduate Fellowship (Grant L21GF3606), a DOD National Defense Science and Engineering Graduate (NDSEG) Research Fellowship, the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project numbers 20170668PRD1 and 20210213ER, and the NGA under Contract No. HM04762110003.
Abstract: Graph learning, when used as a semi-supervised learning (SSL) method, performs well on classification tasks with a low label rate. We provide a graph-based batch active learning pipeline for pixel/patch-neighborhood multi- or hyperspectral image segmentation. Our batch active learning approach selects a collection of unlabeled pixels that satisfy a graph local-maximum constraint on the active learning acquisition function, which determines the relative importance of each pixel to the classification. This work builds on recent advances in the design of novel active learning acquisition functions (e.g., the Model Change approach in arXiv:2110.07739) while adding important further developments, including patch-neighborhood image analysis and batch active learning methods, to increase the accuracy and greatly increase the computational efficiency of these methods. In addition to improvements in accuracy, our approach can greatly reduce the number of labeled pixels needed to achieve the same level of accuracy obtained with randomly selected labeled pixels.
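The graph local-maximum selection rule can be sketched as follows. This is a simplified illustration on an adjacency list; the pipeline's actual acquisition function and graph construction are not reproduced:

```python
def local_max_batch(scores, neighbors, batch_size):
    """Return up to `batch_size` node indices whose acquisition score is at
    least as large as every graph neighbor's score (the local-max constraint),
    ranked by score. `neighbors[i]` lists the nodes adjacent to node i.
    Local maxima are mutually non-adjacent, so one batch query cannot pick
    two redundant neighboring pixels."""
    candidates = [i for i in range(len(scores))
                  if all(scores[i] >= scores[j] for j in neighbors[i])]
    candidates.sort(key=lambda i: scores[i], reverse=True)
    return candidates[:batch_size]

# Toy example: a 5-node path graph with one acquisition score per node.
scores = [0.1, 0.9, 0.2, 0.8, 0.3]
neighbors = [[1], [0, 2], [1, 3], [2, 4], [3]]
batch = local_max_batch(scores, neighbors, batch_size=2)  # nodes 1 and 3
```

The selected pixels would then be sent to the oracle for labeling and the graph classifier retrained, repeating until the labeling budget is exhausted.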