Funding: National Natural Science Foundation of China (No. 62001098); Fundamental Research Funds for the Central Universities of Ministry of Education of China (No. 2232020D-33).
Abstract: Deep learning (DL) has shown superior performance on various computer vision tasks in recent years. As a simple and effective DL model, the autoencoder (AE) is widely used to decompose hyperspectral images (HSIs) owing to its powerful feature extraction and data reconstruction abilities. However, most existing AE-based unmixing algorithms ignore the spatial information of HSIs. To address this problem, a hypergraph regularized deep autoencoder (HGAE) is proposed for unmixing. First, the traditional AE architecture is adapted into an unsupervised unmixing framework. Second, hypergraph learning is employed to reformulate the loss function, which captures the high-order similarity among locally neighboring pixels and promotes the consistency of their abundances. Moreover, the L_(1/2) norm is further used to enhance abundance sparsity. Finally, experiments on simulated data, real hyperspectral remote sensing images, and textile cloth images verify that the proposed method outperforms several state-of-the-art unmixing algorithms.
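As a rough illustration of this kind of architecture, the PyTorch sketch below (not the authors' released code) builds an unmixing autoencoder whose bias-free decoder weights act as the endmember matrix, with a loss that combines reconstruction error, a hypergraph smoothness term tr(AᵀLA) on the abundances, and an L_(1/2) sparsity penalty. The layer sizes, the trade-off weights lam and mu, and the precomputed hypergraph Laplacian L are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class UnmixingAE(nn.Module):
    """Minimal unmixing autoencoder: the encoder maps a pixel's spectrum to
    abundances (softmax enforces non-negativity and sum-to-one), and a single
    bias-free linear decoder plays the role of the endmember matrix."""
    def __init__(self, n_bands, n_endmembers):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 64), nn.ReLU(),
            nn.Linear(64, n_endmembers),
            nn.Softmax(dim=1),
        )
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, x):
        a = self.encoder(x)            # abundances, (n_pixels, n_endmembers)
        return self.decoder(a), a      # reconstructed spectra, abundances

def unmixing_loss(x, x_hat, a, L, lam=0.1, mu=0.01):
    """Reconstruction + hypergraph smoothness + L_(1/2) sparsity.
    L is an assumed precomputed hypergraph Laplacian over the batch pixels;
    lam and mu are illustrative trade-off weights."""
    rec = ((x - x_hat) ** 2).mean()
    smooth = torch.trace(a.T @ L @ a) / a.shape[0]   # tr(A^T L A)
    sparse = (a.clamp(min=1e-8) ** 0.5).mean()       # mean of a^(1/2)
    return rec + lam * smooth + mu * sparse
```

Minimizing tr(AᵀLA) pulls the abundance vectors of pixels that share a hyperedge toward one another, which is how the hypergraph term enforces the local consistency described above.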
Abstract: A pan-sharpening technique artificially produces a high-resolution color image by fusing a high-resolution panchromatic image with a low-resolution multispectral image, thereby improving the appearance of the color image. In this paper, the effectiveness of three pan-sharpening methods based on the HSI transform approach is investigated: the hexcone model, the double-hexcone model, and Haydn's approach. Furthermore, the effect of smoothing the low-resolution multispectral image, with either a Gaussian filter or a bilateral filter, is also investigated. The experimental results show that Haydn's model is superior to the others, and that smoothing the low-resolution multispectral image is effective.
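All three methods belong to the intensity-substitution family: the multispectral image is mapped into an HSI-type space, its intensity component is replaced by the panchromatic band, and the result is mapped back. The NumPy sketch below shows the simplest additive variant of that idea; the I = (R + G + B) / 3 definition and the pre-resampled inputs are assumptions, and the hexcone, double-hexcone, and Haydn transforms differ precisely in how intensity, hue, and saturation are defined.

```python
import numpy as np

def ihs_pan_sharpen(ms_rgb, pan):
    """Additive intensity-substitution sketch.
    ms_rgb: (H, W, 3) multispectral image already resampled to the pan
            grid, values in [0, 1].
    pan:    (H, W) panchromatic band, values in [0, 1]."""
    intensity = ms_rgb.mean(axis=2, keepdims=True)   # I = (R + G + B) / 3
    detail = pan[..., None] - intensity              # high-frequency detail
    # Adding the detail to every band is equivalent to replacing I by pan
    # in this HSI decomposition while keeping hue and saturation fixed.
    return np.clip(ms_rgb + detail, 0.0, 1.0)
```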
Funding: National Natural Science Foundation of China (No. 41971279); Fundamental Research Funds for the Central Universities (No. B200202012).
Abstract: The Low-Rank and Sparse Representation (LRSR) method has gained popularity in Hyperspectral Image (HSI) processing. However, existing LRSR models rarely exploit the spectral-spatial structure of HSI for classification. In this paper, we propose a novel Low-Rank and Sparse Representation with Adaptive Neighborhood Regularization (LRSR-ANR) method for HSI classification. In the proposed method, we first represent the hyperspectral data via LRSR, since combining sparsity and low-rankness maintains global and local data structures simultaneously. The LRSR is optimized with a mixed Gauss-Seidel and Jacobian Alternating Direction Method of Multipliers (M-ADMM), which converges faster than standard ADMM. Then, to incorporate spatial information, an ANR scheme is designed that combines Euclidean and cosine distance metrics to reduce the influence of mixed pixels within a neighborhood. Lastly, the predicted labels are determined by jointly considering the homogeneous pixels under the minimum-reconstruction-error classification rule. Experimental results on three popular hyperspectral images demonstrate that the proposed method outperforms related methods in terms of classification accuracy and generalization performance.
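As a sketch of how such an adaptive-neighborhood rule can work, the function below ranks the pixels in a spatial window by a fused Euclidean-plus-cosine spectral distance to the center pixel and keeps only the most similar fraction. The window size, the equal fusion weights, and the retained fraction are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def adaptive_neighborhood(cube, row, col, window=5, keep=0.5):
    """Rank the spectra in a (window x window) patch by a fused
    Euclidean + cosine distance to the center pixel and return the
    most similar fraction, discarding likely mixed pixels.
    cube: (H, W, B) hyperspectral array."""
    r = window // 2
    center = cube[row, col]
    patch = cube[max(0, row - r):row + r + 1, max(0, col - r):col + r + 1]
    neighbors = patch.reshape(-1, cube.shape[2]).astype(np.float64)
    # Euclidean distance, normalized to [0, 1] within the patch.
    d_euc = np.linalg.norm(neighbors - center, axis=1)
    d_euc = d_euc / (d_euc.max() + 1e-12)
    # Cosine dissimilarity, also normalized within the patch.
    cos = neighbors @ center / (
        np.linalg.norm(neighbors, axis=1) * np.linalg.norm(center) + 1e-12)
    d_cos = 1.0 - cos
    d_cos = d_cos / (d_cos.max() + 1e-12)
    fused = 0.5 * d_euc + 0.5 * d_cos        # equal weights: an assumption
    idx = np.argsort(fused)[: max(1, int(keep * len(neighbors)))]
    return neighbors[idx]                    # retained homogeneous spectra
```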
Abstract: Hyperspectral images (HSI) have hundreds of bands, which impose a heavy burden on data storage and transmission bandwidth. Quite a few compression techniques have been explored for HSI in the past decades. One high-performing technique is the combination of principal component analysis (PCA) and JPEG-2000 (J2K). However, since several new compression codecs have been developed in the 15 years after J2K, it is worthwhile to revisit this area and investigate whether there are better techniques for HSI compression. In this paper, we present new results in perceptually lossless compression of HSI, meaning that the decompressed HSI data cube scores near 40 dB in peak signal-to-noise ratio (PSNR) or in human visual system (HVS) based metrics. The key idea is to compare several combinations of PCA and video/image codecs. Three representative HSI data cubes were used in our studies; four video/image codecs (J2K, X264, X265, and Daala) were investigated, and four performance metrics were used in the comparisons. Moreover, alternative techniques such as video-based, split-band, and PCA-only approaches were also compared. The combination of PCA and X264 yielded the best trade-off between compression performance and computational complexity; in some cases, PCA + X264 gained more than 3 dB over PCA + J2K.
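The PCA front end of such a pipeline is straightforward to sketch: reduce the spectral dimension to a handful of principal-component images, hand those to a 2-D or video codec, and invert the transform after decoding. The NumPy sketch below shows that spectral step plus the PSNR metric; the number of retained components is an illustrative choice, and the codec stage itself (X264, J2K, etc.) is outside the sketch.

```python
import numpy as np

def pca_reduce(cube, n_components=10):
    """Spectral PCA front end: flatten the (H, W, B) cube to (pixels, bands),
    keep the top principal components, and return the component images plus
    everything needed to invert the transform after decoding."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)      # B x B band covariance (small)
    vals, vecs = np.linalg.eigh(cov)
    P = vecs[:, np.argsort(vals)[::-1][:n_components]]   # top eigenvectors
    scores = Xc @ P                          # component images for the codec
    return scores.reshape(h, w, n_components), P, mean

def pca_restore(scores, P, mean):
    """Approximate reconstruction from the decoded component images."""
    h, w, k = scores.shape
    return (scores.reshape(-1, k) @ P.T + mean).reshape(h, w, -1)

def psnr(ref, rec, peak=None):
    """Peak signal-to-noise ratio in dB; ~40 dB is the perceptual target."""
    peak = ref.max() if peak is None else peak
    mse = np.mean((ref.astype(np.float64) - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```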
Abstract: To support efficient grading of hail disasters, a method for measuring the characteristic parameters of hailstones is proposed. The method gathers statistics on the HSI color range of hail images, segments the hailstones using this prior color range, removes noise by morphological filtering, and applies watershed-based segmentation to separate touching hail particles. Pixel counting, Freeman chain codes, and an improved minimum-enclosing-rectangle method are then used to measure the area, perimeter, and diameter of the hail particles, and the results are compared experimentally with related algorithms. The experiments show high accuracy and good correlation: the root mean square errors (RMSE) of hail perimeter, area, and diameter are 0.2081 cm, 0.2124 cm², and 0.9314 cm, with coefficients of determination (R²) of 0.8814, 0.8736, and 0.9314, respectively, and the average measurement error is between 2% and 6%. The results provide an accurate data reference for researchers studying hail disasters.
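A rough sketch of that segmentation pipeline, using scikit-image rather than whatever toolchain the paper used: threshold in an HSI-like space (hailstones are bright and weakly saturated), clean the mask morphologically, then split touching particles with a distance-transform watershed. The threshold values and the min_distance parameter are guesses for illustration, not the paper's calibrated color ranges.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def segment_hail(rgb, sat_max=0.25, int_min=0.55):
    """Segment bright, weakly saturated hail pixels, then separate
    touching particles. rgb: (H, W, 3) uint8 image.
    Returns an integer label image (0 = background)."""
    rgb = rgb.astype(np.float64) / 255.0
    intensity = rgb.mean(axis=2)                        # HSI intensity I
    saturation = 1.0 - rgb.min(axis=2) / np.clip(intensity, 1e-6, None)
    mask = (intensity > int_min) & (saturation < sat_max)
    # Morphological opening filters out small noise specks.
    mask = ndi.binary_opening(mask, structure=np.ones((3, 3)))
    # Watershed on the distance transform splits touching particles.
    dist = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(dist, labels=mask, min_distance=5)
    markers = np.zeros(dist.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, markers, mask=mask)

# Area by pixel counting follows directly from the labels; perimeter
# (chain code) and diameter (minimum enclosing rectangle) are computed
# per labeled region in the same spirit.
```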