Abstract: The objective of this research is to develop a fast procedure for segmenting typical videophone images. In this paper, a new approach to color image segmentation based on the HSI (Hue, Saturation, Intensity) color model is reported. In contrast to conventional approaches, it uses the three components of the HSI color model in succession. This strategy makes the segmentation procedure much faster and more effective. Experimental results with typical "head-and-shoulders" real images taken from videophone sequences show that the new approach can fulfill the application requirements.
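The abstract does not detail the procedure, but a minimal Python sketch of the general idea, converting RGB to HSI and then thresholding the three components one after another, might look as follows. The threshold values, parameter names, and function names are illustrative assumptions, not the authors' method.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1], shape HxWx3) to H, S, I channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(rgb, axis=-1) / np.maximum(i, 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2 * np.pi - theta, theta) / (2 * np.pi)  # normalise hue to [0, 1]
    return h, s, i

def segment_in_succession(rgb, hue_range=(0.0, 0.15), s_min=0.2, i_range=(0.2, 0.9)):
    """Illustrative sequential segmentation: hue first, then saturation, then intensity.
    Threshold values are placeholders, not those used in the paper."""
    h, s, i = rgb_to_hsi(rgb)
    mask = (h >= hue_range[0]) & (h <= hue_range[1])   # step 1: restrict by hue
    mask &= s >= s_min                                  # step 2: refine by saturation
    mask &= (i >= i_range[0]) & (i <= i_range[1])       # step 3: refine by intensity
    return mask
```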
Funding: National Natural Science Foundation of China (No. 61963023); MOE (Ministry of Education of China) Project of Humanities and Social Sciences (No. 19YJC760012).
Abstract: Traditional single-image dehazing algorithms depend on prior knowledge about the hazy image and are prone to colour distortion. This paper proposes a new single-image dehazing method based on a deep multi-scale convolutional neural network in HSI colour space, which directly learns the mapping between a hazy image and the corresponding clear image in hue, saturation, and intensity through the designed network structure to achieve haze removal. Firstly, the hazy image is transformed from RGB colour space to HSI colour space. Secondly, an end-to-end multi-scale fully convolutional neural network model is designed; the multi-scale extraction is realized by three different dehazing sub-networks, for hue H, saturation S, and intensity I, and the mapping between hazy and clear images is learned from data. Finally, the model is trained and tested on a hazy image data set. The experimental results show that the method achieves a good dehazing effect on both synthetic and real hazy images, and is superior to the comparison algorithms in subjective and objective evaluations.
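A minimal PyTorch sketch of the three-branch idea is given below, assuming one small fully convolutional sub-network per HSI channel; the layer widths and the use of different kernel sizes per branch are stand-ins for the paper's unspecified multi-scale design, not its actual configuration.

```python
import torch
import torch.nn as nn

class ChannelDehazeNet(nn.Module):
    """One fully convolutional sub-network operating on a single HSI channel.
    Kernel size and channel widths are illustrative assumptions."""
    def __init__(self, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size, padding=pad), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, kernel_size, padding=pad), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size, padding=pad),
        )

    def forward(self, x):
        return self.body(x)

class HSIDehazeNet(nn.Module):
    """Three sub-networks, one per HSI channel; multi-scale behaviour is
    sketched here by giving each branch a different receptive field."""
    def __init__(self):
        super().__init__()
        self.hue_net = ChannelDehazeNet(kernel_size=3)
        self.sat_net = ChannelDehazeNet(kernel_size=5)
        self.int_net = ChannelDehazeNet(kernel_size=7)

    def forward(self, hsi):  # hsi: (N, 3, H, W), channels ordered H, S, I
        h = self.hue_net(hsi[:, 0:1])
        s = self.sat_net(hsi[:, 1:2])
        i = self.int_net(hsi[:, 2:3])
        return torch.cat([h, s, i], dim=1)
```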
Abstract: A pan-sharpening technique artificially produces a high-resolution colour image by fusing a high-resolution panchromatic image with a low-resolution multispectral image, thereby improving the appearance of the colour image. In this paper, the effectiveness of three pan-sharpening methods based on the HSI transform approach is investigated: the hexcone model, the double-hexcone model, and Haydn's approach. Furthermore, the effect of smoothing the low-resolution multispectral image beforehand is investigated, using a Gaussian filter and a bilateral filter. The experimental results show that Haydn's model is superior to the others, and the effectiveness of smoothing the low-resolution multispectral image is also demonstrated.
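For orientation, a generic IHS-style pan-sharpening sketch in Python is shown below; it is not any of the specific hexcone, double-hexcone, or Haydn variants compared in the paper, and the optional Gaussian pre-smoothing step is only one of the two filters studied.

```python
import numpy as np
from scipy import ndimage

def ihs_pansharpen(ms_up, pan, smooth_sigma=None):
    """Generic IHS-substitution pan-sharpening sketch.
    ms_up        : multispectral image upsampled to the pan grid, HxWx3 in [0, 1]
    pan          : panchromatic image, HxW in [0, 1]
    smooth_sigma : optional Gaussian smoothing of the multispectral input."""
    if smooth_sigma is not None:
        ms_up = ndimage.gaussian_filter(ms_up, sigma=(smooth_sigma, smooth_sigma, 0))
    intensity = ms_up.mean(axis=-1)
    # Match the panchromatic band's mean/std to the intensity component
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-8) * intensity.std() + intensity.mean()
    # Substitute intensity: add the spatial detail difference back to every band
    detail = (pan_matched - intensity)[..., None]
    return np.clip(ms_up + detail, 0.0, 1.0)
```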
Funding: National Natural Science Foundation of China (No. 41971279); Fundamental Research Funds for the Central Universities (No. B200202012).
Abstract: The Low-Rank and Sparse Representation (LRSR) method has gained popularity in Hyperspectral Image (HSI) processing. However, existing LRSR models rarely exploit spectral-spatial information for HSI classification. In this paper, we propose a novel Low-Rank and Sparse Representation with Adaptive Neighborhood Regularization (LRSR-ANR) method for HSI classification. In the proposed method, we first represent the hyperspectral data via LRSR, since it combines sparsity and low-rankness to maintain global and local data structures simultaneously. The LRSR is optimized using a mixed Gauss-Seidel and Jacobian Alternating Direction Method of Multipliers (M-ADMM), which converges faster than standard ADMM. Then, to incorporate spatial information, an ANR scheme is designed that combines Euclidean and cosine distance metrics to reduce the influence of mixed pixels within a neighborhood. Lastly, the predicted labels are determined by jointly considering the homogeneous pixels under a minimum-reconstruction-error classification rule. Experimental results on three popular hyperspectral images demonstrate that the proposed method outperforms related methods in terms of classification accuracy and generalization performance.
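A small Python sketch of the two ingredients named in the abstract, adaptive neighbor selection with a mixed Euclidean/cosine distance and a minimum-reconstruction-error decision rule, is given below. The weighting factor alpha, the neighbor count k, and the assumption that per-class representation coefficients are already available from the LRSR step are all illustrative, not the paper's formulation.

```python
import numpy as np

def adaptive_neighbors(center, window_pixels, k=8, alpha=0.5):
    """Pick the k most similar pixels in a spatial window using a mixed
    Euclidean + cosine distance (the alpha weighting is an assumption).
    center        : (B,) spectrum of the centre pixel
    window_pixels : (M, B) spectra of candidate neighbours"""
    diff = window_pixels - center
    d_euc = np.linalg.norm(diff, axis=1)
    d_cos = 1.0 - (window_pixels @ center) / (
        np.linalg.norm(window_pixels, axis=1) * np.linalg.norm(center) + 1e-8)
    score = alpha * d_euc / (d_euc.max() + 1e-8) + (1 - alpha) * d_cos
    return np.argsort(score)[:k]

def min_residual_classify(pixels, class_dicts, codes):
    """Illustrative minimum-reconstruction-error rule for a group of
    homogeneous pixels (centre pixel plus its selected neighbours).
    pixels      : (B, M) spectra of the M pixels considered jointly
    class_dicts : {label: (B, Nc) dictionary of class-c training spectra}
    codes       : {label: (Nc, M) representation coefficients per class}"""
    residuals = {
        label: np.linalg.norm(pixels - D @ codes[label], ord="fro")
        for label, D in class_dicts.items()
    }
    return min(residuals, key=residuals.get)
```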
Funding: National Natural Science Foundation of China (No. 62001098); Fundamental Research Funds for the Central Universities of the Ministry of Education of China (No. 2232020D-33).
Abstract: Deep learning (DL) has shown superior performance in various computer vision tasks in recent years. As a simple and effective DL model, the autoencoder (AE) is popularly used to decompose hyperspectral images (HSIs) owing to its powerful ability for feature extraction and data reconstruction. However, most existing AE-based unmixing algorithms ignore the spatial information of HSIs. To solve this problem, a hypergraph-regularized deep autoencoder (HGAE) is proposed for unmixing. Firstly, the traditional AE architecture is adapted into an unsupervised unmixing framework. Secondly, hypergraph learning is employed to reformulate the loss function, which facilitates the expression of high-order similarity among locally neighboring pixels and promotes the consistency of their abundances. Moreover, the L_(1/2) norm is further used to enhance abundance sparsity. Finally, experiments on simulated data, real hyperspectral remote sensing images, and textile cloth images verify that the proposed method performs better than several state-of-the-art unmixing algorithms.
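A minimal PyTorch sketch of AE-based unmixing with an L_(1/2) sparsity term and a graph-style smoothness penalty follows. The hidden width, the softmax abundance constraint, and the use of a generic Laplacian matrix as a stand-in for the paper's hypergraph Laplacian are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class UnmixingAE(nn.Module):
    """Minimal autoencoder unmixing sketch: the encoder predicts abundances,
    and the single linear decoder layer holds the endmember signatures."""
    def __init__(self, n_bands, n_endmembers, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, n_endmembers),
            nn.Softmax(dim=-1),  # abundances: non-negative, sum to one
        )
        # Decoder weight columns play the role of endmember spectra
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, x):
        a = self.encoder(x)
        return self.decoder(a), a

def unmixing_loss(x, x_hat, abundances, laplacian, lam_sparse=1e-3, lam_graph=1e-2):
    """Reconstruction + L_(1/2) sparsity + a graph smoothness term;
    `laplacian` stands in for the hypergraph Laplacian of the pixel batch."""
    recon = ((x - x_hat) ** 2).mean()
    sparse = abundances.clamp(min=1e-8).sqrt().sum(dim=-1).mean()  # L_(1/2) penalty
    graph = torch.trace(abundances.t() @ laplacian @ abundances) / x.shape[0]
    return recon + lam_sparse * sparse + lam_graph * graph
```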
Abstract: Hyperspectral images (HSI) have hundreds of bands, which impose a heavy burden on data storage and transmission bandwidth. Quite a few compression techniques have been explored for HSI in the past decades. One high-performing technique is the combination of principal component analysis (PCA) and JPEG-2000 (J2K). However, since several new compression codecs have been developed after J2K in the past 15 years, it is worthwhile to revisit this research area and investigate whether there are better techniques for HSI compression. In this paper, we present some new results in HSI compression. We aim at perceptually lossless compression of HSI, meaning that the decompressed HSI data cube scores near 40 dB in terms of peak signal-to-noise ratio (PSNR) or human visual system (HVS) based metrics. The key idea is to compare several combinations of PCA and video/image codecs. Three representative HSI data cubes were used in our studies. Four video/image codecs, including J2K, X264, X265, and Daala, were investigated, and four performance metrics were used in our comparative studies. Moreover, some alternative techniques, such as video-based, split-band, and PCA-only approaches, were also compared. It was observed that the combination of PCA and X264 yielded the best performance in terms of compression quality and computational complexity. In some cases, the PCA + X264 combination achieved a PSNR more than 3 dB higher than the PCA + J2K combination.
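A short Python sketch of the spectral-decorrelation side of this pipeline is shown below: PCA over the band dimension, inversion after a codec round-trip, and the PSNR metric used for the ~40 dB target. The actual codec step (feeding the retained component images to J2K, x264, etc.) is an external tool invocation and is deliberately not shown; the function names here are illustrative.

```python
import numpy as np

def pca_decorrelate(cube, n_components):
    """Spectral PCA on an HSI cube of shape (H, W, B): returns the retained
    principal-component images plus what is needed to invert the transform.
    The component images would then be quantised and passed to a codec."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)          # band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    V = eigvecs[:, order]                         # (B, k) projection matrix
    pcs = (Xc @ V).reshape(h, w, n_components)    # principal-component images
    return pcs, V, mean

def pca_reconstruct(pcs, V, mean, shape):
    """Invert the PCA projection (after the codec round-trip) back to a cube."""
    h, w, b = shape
    X_hat = pcs.reshape(-1, V.shape[1]) @ V.T + mean
    return X_hat.reshape(h, w, b)

def psnr(original, reconstructed, peak=None):
    """Peak signal-to-noise ratio in dB, the metric behind the ~40 dB target."""
    peak = original.max() if peak is None else peak
    mse = np.mean((original.astype(np.float64) - reconstructed) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```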