The concise and informative representation of hyperspectral imagery is achieved via diffusion geometric coordinates derived from a nonlinear dimension-reduction technique, diffusion maps. The huge volume of high-dimensional spectral measurements is organized by an affinity graph in which each node connects only to its local neighbors and each edge encodes local similarity information. By normalizing the affinity graph appropriately, the diffusion operator of the underlying hyperspectral imagery is well defined, which means a Markov random walk can be simulated on the imagery. The diffusion geometric coordinates, derived from the eigenfunctions and associated eigenvalues of the diffusion operator, therefore capture the intrinsic geometric information of the hyperspectral imagery well, giving better representations than traditional linear methods such as those based on principal component analysis. For large-scale full-scene hyperspectral imagery, the backbone approach keeps the computational complexity and memory requirements acceptable. Experiments also show that choosing a suitable symmetrization normalization technique when forming the diffusion operator is important for hyperspectral imagery representation.
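The graph construction and normalization described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the Gaussian kernel, the k-nearest-neighbour sparsification, and the median bandwidth heuristic are assumptions, and the backbone acceleration for full scenes is not reproduced.

```python
import numpy as np

def diffusion_coordinates(X, k=8, n_coords=2, eps=None):
    """Diffusion-map embedding of spectra X (n_pixels x n_bands).

    Builds a k-nearest-neighbour Gaussian affinity graph, normalises it
    into a Markov transition matrix, and returns the leading non-trivial
    eigenvectors scaled by their eigenvalues."""
    # Pairwise squared Euclidean distances between spectra.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    if eps is None:
        eps = np.median(d2)                      # heuristic kernel scale
    W = np.exp(-d2 / eps)
    # Keep only each node's k nearest neighbours (local graph).
    far = np.argsort(d2, axis=1)[:, k + 1:]
    for i, cols in enumerate(far):
        W[i, cols] = 0.0
    W = np.maximum(W, W.T)                       # symmetrise
    # Eigen-decompose via the symmetric conjugate of the random-walk
    # operator P = D^{-1} W for numerical stability.
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    psi = vecs / np.sqrt(d)[:, None]             # right eigenvectors of P
    # Skip the trivial constant eigenvector (eigenvalue 1).
    return psi[:, 1:n_coords + 1] * vals[1:n_coords + 1]
```

The symmetric conjugate `S = D^{-1/2} W D^{-1/2}` shares eigenvalues with the row-normalised operator, which is one of the symmetrization choices the abstract refers to.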
The high dimensionality of hyperspectral imagery imposes a burden on further processing. A new Fast Independent Component Analysis (FastICA) approach to dimensionality reduction for hyperspectral imagery is presented. Virtual dimensionality is introduced to determine the number of dimensions to preserve. Since there is no prioritization among the independent components generated by FastICA, the mixing matrix of FastICA is initialized with endmembers extracted by an unsupervised maximum-distance method. Minimum Noise Fraction (MNF) is used to preprocess the original data, which reduces the computational complexity of FastICA significantly. Finally, FastICA is performed on the principal components selected by MNF to generate the expected independent components in accordance with the order of the endmembers. Experimental results demonstrate that the proposed method outperforms second-order-statistics-based transforms such as principal component analysis.
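A hedged sketch of this pipeline in numpy: PCA whitening stands in for the MNF preprocessing step, and a plain symmetric FastICA iteration with the tanh nonlinearity replaces the endmember-initialized variant the abstract describes (that initialization, and the virtual-dimensionality estimate, are not reproduced here).

```python
import numpy as np

def fastica_reduce(X, n_components, seed=0, iters=200):
    """Reduce spectra X (n_pixels x n_bands) by whitening + FastICA."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    # PCA whitening (proxy for MNF): keep n_components directions.
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    K = Vt[:n_components] / s[:n_components, None] * np.sqrt(len(X))
    Z = Xc @ K.T                                  # whitened data
    # Symmetric FastICA with g(u) = tanh(u).
    W = rng.normal(size=(n_components, n_components))
    for _ in range(iters):
        WZ = Z @ W.T
        g, g_prime = np.tanh(WZ), 1.0 - np.tanh(WZ) ** 2
        W_new = g.T @ Z / len(Z) - np.diag(g_prime.mean(axis=0)) @ W
        # Symmetric decorrelation: W <- (W W^T)^{-1/2} W.
        vals, vecs = np.linalg.eigh(W_new @ W_new.T)
        W_new = vecs @ np.diag(vals ** -0.5) @ vecs.T @ W_new
        done = np.max(np.abs(np.abs(np.diag(W_new @ W.T)) - 1)) < 1e-6
        W = W_new
        if done:
            break
    return Z @ W.T                                # independent components
```

Because the whitened data has unit covariance and the unmixing matrix is orthogonalised each step, the returned components are mutually decorrelated by construction.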
The estimation of oil spill coverage is an important part of monitoring oil spills at sea. The spatial resolution of images collected by airborne hyperspectral remote sensing limits both the detection of oil spills and the accuracy of estimates of their size. We consider at-sea oil spills with zonal distribution and improve the traditional independent component analysis algorithm. To each independent component we add two constraints: non-negativity and constant sum. We use priority weighting by higher-order statistics, and then the spectral angle match method, to overcome the order nondeterminacy. With these steps, endmembers can be extracted and abundances quantified simultaneously. To examine the coverage of a real oil spill and validate our estimate, a simulation experiment and a real experiment were designed using the algorithm described above. For the simulation data, the abundance estimation error is 2.52% and the minimum root mean square error of the reconstructed image is 0.0306. We estimated the oil spill rate and area from eight hyperspectral remote sensing images collected by an airborne survey of Shandong Changdao in 2011. The total oil spill area was 0.224 km^2 and the oil spill rate was 22.89%. The method demonstrated in this paper can be used for the automatic monitoring of oil spill coverage rates and allows accurate estimation of the oil spill area.
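The spectral angle match step used here to resolve order nondeterminacy is simple enough to show directly. This is only the matching step; `match_endmembers` is a hypothetical helper name, and the constrained ICA itself is not reproduced.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; 0 means identical shape."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def match_endmembers(extracted, library):
    """Assign each extracted endmember to the closest library spectrum by
    spectral angle, fixing the order of the independent components."""
    return [int(np.argmin([spectral_angle(e, ref) for ref in library]))
            for e in extracted]
```

The angle is invariant to per-spectrum scaling, which is why it is preferred over Euclidean distance when component amplitudes are ambiguous.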
With the development of sensors, the application of multi-source remote sensing data has attracted wide attention. Since hyperspectral images (HSIs) contain rich spectral information while light detection and ranging (LiDAR) data contain elevation information, their joint use for ground-object classification can yield positive results, especially with deep networks. Multi-scale deep networks expand the receptive fields of convolution without the computational and training problems caused by simply adding more network layers. In this work, a multi-scale feature fusion network is proposed for the joint classification of HSI and LiDAR data. First, we design a multi-scale spatial feature extraction module with cross-channel connections, by which spatial information from the HSI data and elevation information from the LiDAR data are extracted and fused. In addition, a multi-scale spectral feature extraction module extracts the multi-scale spectral features of the HSI data. Finally, joint multi-scale features are obtained by weighting and concatenation operations and fed into the classifier. To verify the effectiveness of the proposed network, experiments are carried out on the MUUFL Gulfport and Trento datasets. The experimental results demonstrate that the classification performance of the proposed method is superior to that of other state-of-the-art methods.
Masting is a well-marked variation in the yields of oak forests. In Japan, the phenomenon is also relevant to wildlife management and oak regeneration practices. This study demonstrates the capability of remote sensing techniques for mapping spatial variation in acorn production. Hyperspectral images in 72 wavelengths (407-898 nm) were acquired over the study area ten times during the early growing seasons of Quercus serrata over three years (2003-2005), using the Airborne Imaging Spectrometer Application (AISA) Eagle system. From the canopy spectral reflectance values of 22 sample trees extracted from the images, yield estimation models were developed via multiple linear regression (MLR). Using the object-oriented classification approach in eCognition, canopies of individual oak trees (Q. serrata) were identified in the corresponding hyperspectral imagery; combined with the fitted estimation models, acorn yield over the entire forest was estimated and visualized as maps. Three estimation models, obtained for June 27, 2003, July 13, 2004 and June 21, 2005, performed well in acorn yield estimation for both the training and validation datasets, all with R^2 > 0.4, p < 0.05 and RRMSE (relative root mean square error) < 1. The study shows the potential of airborne hyperspectral imagery not only for estimating acorn yields during early growing seasons, but also for distinguishing Q. serrata from other image objects, from which the spatial distribution patterns of acorn production over large areas can be mapped. The yield map provides valuable information on within-stand abundance and on the size and spatial synchrony of acorn production.
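The MLR fitting step can be sketched generically. This is an ordinary least-squares sketch only; the paper's band selection, per-date models, and validation protocol are not reproduced.

```python
import numpy as np

def fit_mlr(reflectance, yields):
    """Fit yield = b0 + b1*band1 + ... by ordinary least squares.

    reflectance: (n_trees x n_bands) canopy reflectance values.
    yields:      (n_trees,) observed acorn yields.
    Returns the coefficient vector (intercept first) and training R^2."""
    A = np.column_stack([np.ones(len(reflectance)), reflectance])
    coef, *_ = np.linalg.lstsq(A, yields, rcond=None)
    pred = A @ coef
    ss_res = np.sum((yields - pred) ** 2)
    ss_tot = np.sum((yields - yields.mean()) ** 2)
    return coef, 1.0 - ss_res / ss_tot
```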
Deep learning (DL) has shown superior performance on various computer vision tasks in recent years. As a simple and effective DL model, the autoencoder (AE) is widely used to decompose hyperspectral images (HSIs) owing to its powerful feature extraction and data reconstruction abilities. However, most existing AE-based unmixing algorithms ignore the spatial information of HSIs. To solve this problem, a hypergraph-regularized deep autoencoder (HGAE) is proposed for unmixing. First, the traditional AE architecture is adapted into an unsupervised unmixing framework. Second, hypergraph learning is employed to reformulate the loss function, which expresses the high-order similarity among locally neighboring pixels and promotes the consistency of their abundances. Moreover, the L_(1/2) norm is used to enhance abundance sparsity. Finally, experiments on simulated data, real hyperspectral remote sensing images, and textile cloth images verify that the proposed method outperforms several state-of-the-art unmixing algorithms.
Acquired hyperspectral images (HSIs) are inherently affected by noise with band-varying level, which cannot be removed easily by current approaches. In this study, a new denoising method is proposed for removing such noise by smoothing spectral signals in a transformed multi-scale domain. Specifically, the proposed method has three steps: 1) apply a discrete wavelet transform (DWT) to each band; 2) perform cubic spline smoothing on each noisy coefficient vector along the spectral axis; 3) reconstruct each band by an inverse DWT. To adapt to the band-varying noise statistics of HSIs, the noise covariance is estimated to control the smoothing degree at different spectral positions. Generalized cross validation (GCV) is employed to choose the smoothing parameter during the optimization. Experimental results on simulated and real HSIs demonstrate that the proposed method adapts well to band-varying noise statistics and preserves both spectral and spatial features.
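The three-step outline can be sketched as follows, under loud assumptions: a one-level 2-D Haar transform stands in for the DWT, and a fixed smoothing parameter `s` replaces the per-band noise-covariance/GCV tuning the abstract describes.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def haar2(x):
    """One-level 2-D Haar analysis of an even-sized band: LL, LH, HL, HH."""
    lo, hi = (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2
    return ((lo[:, 0::2] + lo[:, 1::2]) / 2, (lo[:, 0::2] - lo[:, 1::2]) / 2,
            (hi[:, 0::2] + hi[:, 1::2]) / 2, (hi[:, 0::2] - hi[:, 1::2]) / 2)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    lo = np.empty((ll.shape[0], 2 * ll.shape[1]))
    hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = ll + lh, ll - lh
    hi[:, 0::2], hi[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * lo.shape[0], lo.shape[1]))
    x[0::2], x[1::2] = lo + hi, lo - hi
    return x

def denoise_cube(cube, s=1.0):
    """Smooth an HSI cube (rows x cols x bands; even spatial dims)."""
    bands = cube.shape[2]
    # Step 1: wavelet-transform every band, flatten its coefficients.
    coeffs = np.stack([np.concatenate([c.ravel() for c in haar2(cube[:, :, k])])
                       for k in range(bands)], axis=1)
    # Step 2: cubic smoothing spline along the spectral axis of each
    # coefficient vector (fixed s here; the paper tunes it via GCV).
    x = np.arange(bands, dtype=float)
    sm = np.array([UnivariateSpline(x, row, k=3, s=s)(x) for row in coeffs])
    # Step 3: inverse-transform each band.
    r, c = cube.shape[0] // 2, cube.shape[1] // 2
    n = r * c
    return np.stack([ihaar2(*(v[i * n:(i + 1) * n].reshape(r, c)
                              for i in range(4))) for v in sm.T], axis=2)
```

`scipy.interpolate.UnivariateSpline`'s `s` bounds the residual sum of squares, so larger `s` means stronger smoothing along the spectral axis.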
Nitrogen (N) is a pivotal factor influencing the growth, development, and yield of maize. Monitoring the N status of maize rapidly, non-destructively, and in real time is valuable for fertilization management in agriculture, and unmanned aerial vehicle (UAV) remote sensing makes this feasible. In this study, hyperspectral images were acquired by UAV, and leaf nitrogen content (LNC) and leaf nitrogen accumulation (LNA) were measured to assess the N nutrition status of maize. Twenty-four vegetation indices (VIs) were constructed from the hyperspectral images, and four prediction models were used to estimate the LNC and LNA of maize: a single linear regression model, a multivariable linear regression (MLR) model, a random forest regression (RFR) model, and a support vector regression (SVR) model. The model with the highest prediction accuracy was then applied to invert the LNC and LNA of maize in breeding fields. Among the single linear regression models built on the 24 VIs, normalized difference chlorophyll (NDchl) gave the highest prediction accuracy for LNC (R^2, RMSE, and RE of 0.72, 0.21, and 12.19%, respectively) and LNA (R^2, RMSE, and RE of 0.77, 0.26, and 14.34%, respectively). The 24 VIs were then divided into 13 important and 11 unimportant VIs. Three prediction models for LNC and LNA were constructed using the 13 important VIs; the RFR and SVR models significantly improved prediction accuracy over the MLR model, and the RFR model achieved the highest accuracy on the validation dataset for LNC (R^2, RMSE, and RE of 0.78, 0.16, and 8.83%, respectively) and LNA (R^2, RMSE, and RE of 0.85, 0.19, and 9.88%, respectively). This study provides a theoretical basis for N diagnosis and precise management of crop production based on hyperspectral remote sensing in precision agriculture.
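A normalized-difference index and the three reported accuracy measures are straightforward to compute. NDchl belongs to the normalized-difference family, but the specific wavelengths it uses are not reproduced here; the helper names are illustrative.

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic normalized-difference index: (a - b) / (a + b)."""
    a, b = np.asarray(band_a, float), np.asarray(band_b, float)
    return (a - b) / (a + b)

def regression_metrics(y_true, y_pred):
    """R^2, RMSE, and mean relative error RE (%) as reported in the study."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    r2 = 1.0 - ss_res / np.sum((y_true - y_true.mean()) ** 2)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    re = 100.0 * np.mean(np.abs(y_true - y_pred) / np.abs(y_true))
    return r2, rmse, re
```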
In anomaly detection, anomalies present in the background affect detection performance. Accordingly, a background refinement method based on local density is proposed to remove anomalies from the background. In this work, the local density of a pixel is measured from its spectral neighbors within a radius obtained by calculating the mean median of the distance matrix. A two-step segmentation strategy is then designed. The first step divides the original background into two subsets: a large subset composed of background pixels and a small subset containing both background pixels and anomalies. The second step applies the Otsu method to the small subset to obtain a discrimination threshold, and pixels whose local densities fall below the threshold are removed. Finally, to validate the effectiveness of the proposed method, it is combined with the Reed-Xiaoli detector and the collaborative-representation-based detector to detect anomalies. Experiments on two real hyperspectral datasets show that the proposed method achieves better detection performance.
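The Otsu-on-a-subset step can be sketched on a vector of local densities. This is a hedged illustration: the first split is approximated by a simple quantile cut (an assumption; the paper's first segmentation is different), and only the thresholding logic is shown.

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Otsu's method on a 1-D sample: returns the histogram edge that
    maximises the between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                        # class-0 probability per split
    m = np.cumsum(p * centers)               # class-0 cumulative mean mass
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (m[-1] * w0 - m) ** 2 / (w0 * (1.0 - w0))
    k = int(np.argmax(np.nan_to_num(sigma_b)))
    return edges[k + 1]                      # right edge of the argmax bin

def refine_background(densities, split=0.5):
    """Two-step refinement sketch: isolate the low-density subset, run Otsu
    on that subset only, and drop pixels below the threshold (True = keep)."""
    densities = np.asarray(densities, float)
    small = densities[densities < np.quantile(densities, split)]
    return densities >= otsu_threshold(small)
```

Running Otsu only on the small, anomaly-contaminated subset is the point of the two-step design: on the full background the anomaly mode is too small for the threshold to separate it.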
Owing to advances in remote sensing technologies, the volume of hyperspectral imagery (HSI) being generated has increased significantly, and accurate classification of HSI has become a critical process in hyperspectral data analysis. The massive availability of spectral and spatial detail in HSI offers a great opportunity to efficiently characterize and recognize ground materials. Deep learning (DL) models, particularly convolutional neural networks (CNNs), have become useful for HSI classification owing to their effective feature representation and high performance. In this view, this paper introduces a new DL-based Xception model for HSI analysis and classification, called the Xcep-HSIC model. Initially, the presented model utilizes feature relation map learning (FRML) to identify relationships among the hyperspectral features and explore additional features for improved classifier results. Next, the DL-based Xception model is applied as a feature extractor to derive a useful set of features from the FRML map. In addition, a kernel extreme learning machine (KELM) optimized by quantum-behaved particle swarm optimization (QPSO) is employed as the classifier to identify the different class labels. An extensive set of simulations is performed on two benchmark HSI datasets, Indian Pines and Pavia University. The results confirm the effective performance of the Xcep-HSIC technique over existing methods, attaining maximum accuracies of 94.32% and 92.67% on the Indian Pines and Pavia University datasets, respectively.
The Low-Rank and Sparse Representation (LRSR) method has gained popularity in hyperspectral image (HSI) processing. However, existing LRSR models have rarely been exploited for spectral-spatial classification of HSI. In this paper, we propose a novel Low-Rank and Sparse Representation with Adaptive Neighborhood Regularization (LRSR-ANR) method for HSI classification. In the proposed method, we first represent the hyperspectral data via LRSR, which combines sparsity and low-rankness to maintain global and local data structures simultaneously. The LRSR is optimized using a mixed Gauss-Seidel and Jacobian Alternating Direction Method of Multipliers (M-ADMM), which converges faster than ADMM. Then, to incorporate spatial information, an ANR scheme is designed that combines Euclidean and cosine distance metrics to reduce the mixed pixels within a neighborhood. Lastly, the predicted labels are determined by jointly considering the homogeneous pixels under the classification rule of minimum reconstruction error. Experimental results on three popular hyperspectral images demonstrate that the proposed method outperforms related methods in terms of classification accuracy and generalization performance.
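The final decision rule, classification by minimum reconstruction error, can be shown in isolation. This sketch uses a plain least-squares projection per class; the LRSR representation, M-ADMM solver, and neighborhood regularization are not reproduced.

```python
import numpy as np

def min_residual_classify(x, dictionaries):
    """Classify pixel x by minimum reconstruction error: represent x with
    each class's dictionary (least squares) and pick the class whose
    reconstruction residual is smallest."""
    errs = []
    for D in dictionaries:                   # D: bands x atoms, one per class
        alpha, *_ = np.linalg.lstsq(D, x, rcond=None)
        errs.append(np.linalg.norm(x - D @ alpha))
    return int(np.argmin(errs))
```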
Given the sparsity of hyperspectral images (HSIs), dictionary learning frameworks have been widely used for unsupervised spectral unmixing. However, existing dictionary-learning-based unmixing methods lack robustness in noisy contexts. To improve performance, this study puts forward a new unsupervised spectral unmixing solution. Because the solution operates under the condition that both the endmembers and the abundances meet non-negativity constraints, a model is built to solve the unsupervised spectral unmixing problem on the basis of dictionary learning. To raise the screening accuracy of endmembers, a new form of the target function is introduced into the dictionary learning procedure, which improves robustness to noisy HSI statistics. Then, by introducing total variation (TV) terms into the proposed spectral unmixing based on robust non-negative dictionary learning (RNDLSU), contextual information in the HSI spatial domain is used as prior knowledge when computing the abundances during sparse unmixing. The experimental results show that the method performs favorably under varying noise conditions, especially at low signal-to-noise ratios.
Hyperspectral images (HSIs) have hundreds of bands, which impose a heavy burden on data storage and transmission bandwidth. Quite a few compression techniques have been explored for HSI in the past decades. One high-performing technique is the combination of principal component analysis (PCA) and JPEG-2000 (J2K). However, since several new compression codecs have been developed in the 15 years since J2K, it is worthwhile to revisit this research area and investigate whether there are better techniques for HSI compression. In this paper, we present some new results in HSI compression, aiming at perceptually lossless compression, meaning that the decompressed HSI data cube scores near 40 dB in peak signal-to-noise ratio (PSNR) or human-visual-system (HVS) based metrics. The key idea is to compare several combinations of PCA and video/image codecs. Three representative HSI data cubes were used in our studies. Four video/image codecs, J2K, X264, X265, and Daala, were investigated, and four performance metrics were used in the comparisons. Moreover, alternative techniques such as video-only, split-band, and PCA-only approaches were also compared. The combination of PCA and X264 yielded the best trade-off between compression performance and computational complexity; in some cases, PCA + X264 achieved a PSNR more than 3 dB higher than PCA + J2K.
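The spectral PCA stage of the PCA + codec pipeline can be sketched on its own: project the cube onto its leading principal components (in the full pipeline these component images would then be handed to X264 or J2K, which is not reproduced here) and measure reconstruction quality in PSNR.

```python
import numpy as np

def pca_compress(cube, n_pc):
    """Spectral PCA stage for a rows x cols x bands cube.

    Keeps the first n_pc principal components of the spectral covariance
    and reconstructs; returns the reconstruction and its PSNR in dB."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:n_pc]                            # principal spectral directions
    scores = (X - mu) @ P.T                  # the component "images"
    recon = scores @ P + mu
    mse = np.mean((X - recon) ** 2)
    peak = X.max() - X.min()
    psnr = np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    return recon.reshape(r, c, b), psnr
```

Because HSI bands are highly correlated, a handful of components typically carries almost all the energy, which is what makes the PCA front end effective before a 2-D codec.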
Funding: The National Key Technologies R&D Program during the 11th Five-Year Plan Period (No. 2006BAB15B01)
Funding: Supported by the National Natural Science Foundation of China (No. 60572135)
Funding: Supported by the National Scientific Research Fund of China (No. 31201133)
Funding: Supported by the National Key Research and Development Project (No. 2020YFC1512000), the General Projects of Key R&D Programs in Shaanxi Province (No. 2020GY-060), and the Xi'an Science & Technology Project (No. 2020KJRC 0126)
Funding: Supported by the Japan Society for the Promotion of Science (JSPS) through its grant-in-aid for scientific research projects (No. 14360148)
Funding: National Natural Science Foundation of China (No. 62001098); Fundamental Research Funds for the Central Universities of Ministry of Education of China (No. 2232020D-33)
Funding: Supported by the National Natural Science Foundation of China (No. 60972126, 60921061) and the State Key Program of the National Natural Science Foundation of China (No. 61032007)
Funding: Financially supported by the Hainan Province Science and Technology Special Fund (Grant Nos. ZDYF2021GXJS038 and ZDYF2024XDNY196), the Hainan Provincial Natural Science Foundation of China (Grant No. 320RC486), and the National Natural Science Foundation of China (Grant No. 42167011)
Funding: Projects (61405041, 61571145) supported by the National Natural Science Foundation of China; Project (ZD201216) supported by the Key Program of the Heilongjiang Natural Science Foundation, China; Project (RC2013XK009003) supported by the Program for Excellent Academic Leaders of Harbin, China; Project (HEUCF1508) supported by the Fundamental Research Funds for the Central Universities, China.
Abstract: For anomaly detection, anomalies present in the background will degrade detection performance. Accordingly, a background refinement method based on local density is proposed to remove anomalies from the background. In this work, the local density of a pixel is measured by its spectral neighbors within a certain radius, which is obtained by calculating the mean of the medians of the distance matrix. Further, a two-step segmentation strategy is designed. The first segmentation step divides the original background into two subsets: a large subset composed of background pixels and a small subset containing both background pixels and anomalies. The second segmentation step applies the Otsu method to the small subset to obtain a discrimination threshold. Pixels whose local densities fall below the threshold are then removed. Finally, to validate the effectiveness of the proposed method, it is combined with the Reed-Xiaoli detector and the collaborative-representation-based detector to detect anomalies. Experiments conducted on two real hyperspectral datasets show that the proposed method achieves better detection performance.
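The density-based refinement can be sketched as below. This is a simplified illustration, assuming synthetic data and collapsing the paper's two-step segmentation into a single Otsu pass over the local densities; the radius rule (mean of per-pixel median distances) follows the description above.

```python
import numpy as np

rng = np.random.default_rng(1)

def otsu_threshold(values, nbins=64):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)              # cumulative class-0 probability
    mu = np.cumsum(p * centers)    # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return centers[np.argmax(sigma_b)]

# Synthetic "background": a dense spectral cluster plus a few outlying anomalies.
background = rng.normal(0.0, 0.05, size=(300, 10))
anomalies = rng.normal(1.0, 0.05, size=(10, 10))
pixels = np.vstack([background, anomalies])

# Pairwise spectral distances; radius = mean of the per-pixel median distances.
d = np.linalg.norm(pixels[:, None, :] - pixels[None, :, :], axis=-1)
radius = np.median(d, axis=1).mean()

# Local density: number of neighbors within the radius.
density = (d < radius).sum(axis=1)

# Threshold the densities and drop low-density pixels (the likely anomalies).
t = otsu_threshold(density.astype(float))
refined = pixels[density >= t]
print(f"kept {len(refined)} of {len(pixels)} pixels")
```

The refined pixel set would then serve as the background for a detector such as Reed-Xiaoli.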
Abstract: Owing to advancements in remote sensing technologies, the generation of hyperspectral imagery (HSI) has increased significantly, and accurate classification of HSI has become a critical process in hyperspectral data analysis. The massive availability of spectral and spatial details in HSI offers a great opportunity to efficiently illustrate and recognize ground materials. Presently, deep learning (DL) models, particularly convolutional neural networks (CNNs), have become useful for HSI classification owing to their effective feature representation and high performance. In this view, this paper introduces a new DL-based Xception model for HSI analysis and classification, called the Xcep-HSIC model. Initially, the presented model utilizes feature relation map learning (FRML) to identify the relationships among the hyperspectral features and explore many features for improved classifier results. Next, the DL-based Xception model is applied as a feature extractor to derive a useful set of features from the FRML map. In addition, a kernel extreme learning machine (KELM) optimized by quantum-behaved particle swarm optimization (QPSO) is employed as the classification model to identify the different class labels. An extensive set of simulations takes place on two benchmark HSI datasets, namely the Indian Pines and Pavia University datasets. The obtained results confirm the effective performance of the Xcep-HSIC technique over existing methods, attaining maximum accuracies of 94.32% and 92.67% on the Indian Pines and Pavia University datasets, respectively.
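The KELM classification stage admits a compact closed-form sketch: output weights are a kernel ridge solution, beta = (I/C + K)^(-1) T, with one-hot targets T. The example below uses an RBF kernel on synthetic two-class "features" (standing in for Xception outputs) and fixed hyperparameters; in the paper C and gamma would instead be tuned by QPSO, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine: closed-form ridge solution in kernel space."""
    def __init__(self, C=10.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        T = np.eye(y.max() + 1)[y]  # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        # beta = (I/C + K)^{-1} T
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, Xq):
        return (rbf_kernel(Xq, self.X, self.gamma) @ self.beta).argmax(1)

# Two well-separated synthetic clusters standing in for extracted deep features.
X = np.vstack([rng.normal(0, 0.3, (50, 4)), rng.normal(2, 0.3, (50, 4))])
y = np.repeat([0, 1], 50)
clf = KELM(C=10.0, gamma=0.5).fit(X, y)
print("train accuracy:", (clf.predict(X) == y).mean())
```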
Funding: National Natural Science Foundation of China (No. 41971279); Fundamental Research Funds for the Central Universities (No. B200202012).
Abstract: The Low-Rank and Sparse Representation (LRSR) method has gained popularity in Hyperspectral Image (HSI) processing. However, existing LRSR models rarely exploit the spectral-spatial classification of HSI. In this paper, we propose a novel Low-Rank and Sparse Representation with Adaptive Neighborhood Regularization (LRSR-ANR) method for HSI classification. In the proposed method, we first represent the hyperspectral data via LRSR, since it combines sparsity and low-rankness to maintain global and local data structures simultaneously. The LRSR is optimized using a mixed Gauss-Seidel and Jacobian Alternating Direction Method of Multipliers (M-ADMM), which converges faster than ADMM. Then, to incorporate spatial information, an ANR scheme is designed that combines Euclidean and cosine distance metrics to reduce the mixed pixels within a neighborhood. Lastly, the predicted labels are determined by jointly considering the homogeneous pixels under a minimum-reconstruction-error classification rule. Experimental results on three popular hyperspectral images demonstrate that the proposed method outperforms other related methods in terms of classification accuracy and generalization performance.
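The neighborhood-pruning idea behind the ANR scheme can be sketched as a blended distance. This is only an illustrative reading of "combining Euclidean and cosine distance metrics": the mixing weight `alpha`, the per-window rescaling, and the synthetic spectra are all assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

def combined_distance(center, neighbors, alpha=0.5):
    """Blend of Euclidean and cosine distances (alpha is a hypothetical mixing weight).

    Both terms are rescaled to [0, 1] before blending so neither dominates.
    """
    eu = np.linalg.norm(neighbors - center, axis=1)
    cos = 1.0 - (neighbors @ center) / (
        np.linalg.norm(neighbors, axis=1) * np.linalg.norm(center) + 1e-12
    )
    eu = eu / (eu.max() + 1e-12)
    cos = cos / (cos.max() + 1e-12)
    return alpha * eu + (1.0 - alpha) * cos

# A center spectrum and a spatial window of candidate neighbor spectra.
center = rng.uniform(0.2, 0.8, 30)
window = center + rng.normal(0, 0.05, (8, 30))  # homogeneous neighbors
window[:3] = rng.uniform(0.2, 0.8, (3, 30))     # mixed pixels to be pruned

d = combined_distance(center, window)
keep = np.argsort(d)[:5]  # retain the most similar pixels within the neighborhood
print("kept neighbor indices:", sorted(keep.tolist()))
```

The retained homogeneous pixels would then enter the minimum-reconstruction-error classification rule jointly with the center pixel.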
Funding: supported by the National Natural Science Foundation of China (61801513).
Abstract: Considering the sparsity of hyperspectral images (HSIs), dictionary learning frameworks have been widely used in the field of unsupervised spectral unmixing. However, existing dictionary-learning-based unmixing methods are found to lack robustness in noisy contexts. To improve performance, this study puts forward a new unsupervised spectral unmixing solution. Because the solution only functions under the condition that both the endmembers and the abundances meet non-negativity constraints, a model is built to solve the unsupervised spectral unmixing problem on the basis of the dictionary learning method. To raise the screening accuracy of the final endmembers, a new form of the target function is introduced into the dictionary learning procedure, which improves robustness to noisy HSI statistics. Then, by introducing total variation (TV) terms into the proposed spectral unmixing based on robust non-negative dictionary learning (RNDLSU), the contextual information of the HSI space is used as prior knowledge to compute the abundances when performing sparse unmixing. According to the experimental results, this method performs favorably under varying noise conditions, especially under low signal-to-noise conditions.
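The non-negative dictionary/abundance factorization at the core of such unmixing models can be sketched with plain NMF multiplicative updates. This is a baseline stand-in only: the robust target function and the TV regularization described above are omitted, and the scene is synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

def nmf_unmix(Y, n_end, n_iter=500, eps=1e-9):
    """Plain NMF multiplicative updates: Y ~= E @ A with E, A >= 0.

    E holds endmember spectra (dictionary atoms); A holds the abundances.
    """
    n_bands, n_pix = Y.shape
    E = rng.uniform(0.1, 1.0, (n_bands, n_end))
    A = rng.uniform(0.1, 1.0, (n_end, n_pix))
    for _ in range(n_iter):
        A *= (E.T @ Y) / (E.T @ E @ A + eps)   # abundance update
        E *= (Y @ A.T) / (E @ A @ A.T + eps)   # endmember update
    return E, A

# Synthetic scene: 3 endmembers, 50 bands, 400 pixels with simplex abundances.
E_true = rng.uniform(0.0, 1.0, (50, 3))
A_true = rng.dirichlet(np.ones(3), size=400).T   # columns sum to 1
Y = E_true @ A_true + rng.normal(0, 0.01, (50, 400))

E, A = nmf_unmix(np.clip(Y, 0, None), n_end=3)
err = np.linalg.norm(Y - E @ A) / np.linalg.norm(Y)
print(f"relative reconstruction error: {err:.3f}")
```

The robust and TV-regularized variants would add extra terms to the objective and correspondingly modify these two update rules.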
Abstract: Hyperspectral images (HSI) have hundreds of bands, which impose a heavy burden on data storage and transmission bandwidth. Quite a few compression techniques have been explored for HSI in the past decades. One high-performing technique is the combination of principal component analysis (PCA) and JPEG-2000 (J2K). However, since several new compression codecs have been developed after J2K in the past 15 years, it is worthwhile to revisit this research area and investigate whether there are better techniques for HSI compression. In this paper, we present some new results in HSI compression, aiming at perceptually lossless compression of HSI. Perceptually lossless means that the decompressed HSI data cube achieves a performance metric near 40 dB in terms of peak signal-to-noise ratio (PSNR) or human visual system (HVS) based metrics. The key idea is to compare several combinations of PCA and video/image codecs. Three representative HSI data cubes were used in our studies. Four video/image codecs, including J2K, X264, X265, and Daala, were investigated, and four performance metrics were used in our comparative studies. Moreover, alternative techniques such as video-only, split-band, and PCA-only approaches were also compared. It was observed that the combination of PCA and X264 yielded the best performance in terms of compression performance and computational complexity. In some cases, the PCA + X264 combination achieved gains of more than 3 dB over the PCA + J2K combination.