A pan-sharpening technique artificially produces a high-resolution color image by fusing a high-resolution panchromatic image with a low-resolution multispectral image, thereby improving the appearance of the color image. In this paper, the effectiveness of three pan-sharpening methods based on the HSI transform is investigated: the hexcone model, the double-hexcone model, and Haydn's approach. Furthermore, the effect of smoothing the low-resolution multispectral image is also investigated, using a Gaussian filter and a bilateral filter. The experimental results show that Haydn's model is superior to the others, and that smoothing the low-resolution multispectral image is effective.
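The core idea shared by HSI-based pan-sharpening methods is intensity substitution: the intensity component of the upsampled multispectral image is replaced by the panchromatic band while hue and saturation are kept. As a minimal sketch (not the hexcone, double-hexcone, or Haydn model from the paper), the substitution can be done by rescaling each band by the ratio of the new to the old intensity, where intensity is taken as the simple band mean:

```python
import numpy as np

def intensity_substitution_pansharpen(ms_rgb, pan):
    """Sketch of intensity-substitution pan-sharpening.

    ms_rgb: (H, W, 3) multispectral image, already upsampled to the
            panchromatic resolution, values in [0, 1].
    pan:    (H, W) panchromatic image, values in [0, 1].
    """
    intensity = ms_rgb.mean(axis=2)               # I = (R + G + B) / 3
    scale = pan / np.maximum(intensity, 1e-6)     # ratio of pan to current intensity
    # Rescaling all bands by the same factor preserves hue and saturation
    # while forcing the intensity of the result to match the pan band.
    return np.clip(ms_rgb * scale[..., None], 0.0, 1.0)
```

The smoothing step studied in the paper (Gaussian or bilateral filtering of the multispectral image) would be applied to `ms_rgb` before this substitution.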
To make hail-disaster severity assessment more efficient, a method for measuring the characteristic parameters of hailstones is proposed. The method gathers statistics on the HSI color range of hail images, segments the hailstones using this prior color range, removes noise by morphological filtering, and separates touching hailstone particles with watershed-based boundary segmentation. The area, perimeter, and diameter of each hailstone particle are then measured by pixel counting, Freeman chain codes, and an improved minimum-bounding-rectangle method, respectively, and compared experimentally with related algorithms. The results show that the method is accurate and well correlated with ground truth: the root mean square errors (RMSE) of hailstone perimeter, area, and diameter are 0.2081 cm, 0.2124 cm², and 0.9314 cm, the coefficients of determination (R²) are 0.8814, 0.8736, and 0.9314, and the average measurement error is between 2% and 6%. The results provide an accurate data reference for researchers studying hail disasters.
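Of the three measurements above, the pixel-counting step is the simplest to illustrate. A minimal sketch (assuming a known camera scale in cm per pixel; the watershed segmentation and Freeman chain-code perimeter are omitted) computes the area of one segmented particle and its equivalent-circle diameter:

```python
import numpy as np

def hail_metrics(mask, cm_per_px):
    """Pixel-count area and equivalent-circle diameter of one hail particle.

    mask:      boolean (H, W) array, True inside the segmented particle.
    cm_per_px: physical size of one pixel edge, in centimeters.
    """
    area_px = int(mask.sum())                       # pixel-count area
    area_cm2 = area_px * cm_per_px ** 2             # convert to cm^2
    diameter_cm = 2.0 * np.sqrt(area_cm2 / np.pi)   # diameter of a circle of equal area
    return area_cm2, diameter_cm
```

The paper's improved minimum-bounding-rectangle method would replace the equivalent-circle diameter used here for illustration.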
Deep learning (DL) has shown superior performance on various computer vision tasks in recent years. As a simple and effective DL model, the autoencoder (AE) is widely used to decompose hyperspectral images (HSIs) owing to its powerful feature extraction and data reconstruction abilities. However, most existing AE-based unmixing algorithms ignore the spatial information of HSIs. To solve this problem, a hypergraph regularized deep autoencoder (HGAE) is proposed for unmixing. Firstly, the traditional AE architecture is adapted into an unsupervised unmixing framework. Secondly, hypergraph learning is employed to reformulate the loss function, which expresses the high-order similarity among locally neighboring pixels and promotes the consistency of their abundances. Moreover, an L_(1/2) norm is used to enhance abundance sparsity. Finally, experiments on simulated data, real hyperspectral remote sensing images, and textile cloth images verify that the proposed method outperforms several state-of-the-art unmixing algorithms.
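In AE-based unmixing, the decoder is a linear layer whose weights play the role of endmember spectra, so the reconstruction follows the linear mixing model X ≈ AE. A minimal sketch of the resulting loss (reconstruction error plus the L_(1/2) abundance-sparsity term; the hypergraph regularization term is omitted here, and `lam` is an illustrative weight, not a value from the paper):

```python
import numpy as np

def unmixing_loss(X, A, E, lam=0.1):
    """Sketch of an AE unmixing loss with L_{1/2} abundance sparsity.

    X: (N, B) observed pixel spectra (N pixels, B bands).
    A: (N, P) abundances produced by the encoder (P endmembers).
    E: (P, B) endmember spectra, i.e. the linear decoder weights.
    """
    recon = A @ E                              # linear mixing model: X ~ A E
    mse = np.mean((X - recon) ** 2)            # reconstruction error
    sparsity = np.mean(np.sqrt(np.abs(A)))     # L_{1/2} sparsity on abundances
    return mse + lam * sparsity
```

The hypergraph term of the HGAE would add a penalty encouraging the abundance vectors of pixels sharing a hyperedge (locally neighboring pixels) to agree.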
Funding: National Natural Science Foundation of China (No. 62001098); Fundamental Research Funds for the Central Universities of Ministry of Education of China (No. 2232020D-33).