Funding: Supported by the National Natural Science Foundation of China (No. 69875009).
Abstract: In this paper, the second-generation wavelet transform is applied to lossless image coding, exploiting its property of being a reversible integer wavelet transform. Compared with the first-generation wavelet transform, the second-generation transform reconstructs the image without loss, while providing a higher compression ratio than Huffman coding. The experimental results show that the second-generation wavelet transform achieves excellent performance in medical image compression coding.
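The reversible integer property of the second-generation (lifting-scheme) transform can be illustrated with a minimal sketch; the integer Haar lifting below is an illustrative example under assumed conventions, not the specific transform used in the paper:

```python
def forward_haar_lifting(x):
    """Split a 1-D integer signal into approximation (s) and detail (d) bands."""
    even, odd = x[0::2], x[1::2]
    d = [o - e for e, o in zip(even, odd)]          # predict step: detail
    s = [e + (di >> 1) for e, di in zip(even, d)]   # update step (integer floor)
    return s, d

def inverse_haar_lifting(s, d):
    """Undo the lifting steps in reverse order -- the transform is exactly lossless."""
    even = [si - (di >> 1) for si, di in zip(s, d)]
    odd = [di + e for di, e in zip(d, even)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

x = [12, 14, 200, 202, 7, 9, 50, 54]
s, d = forward_haar_lifting(x)
assert inverse_haar_lifting(s, d) == x  # perfect integer reconstruction
```

Because the update step applies the same integer floor in both directions, the round trip is exact, which is what makes lossless coding possible.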
Funding: This project was supported by the National Natural Science Foundation (No. 69972027).
Abstract: With the advances in display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of producing 3-D perception is to use stereo pairs: two images of the same scene acquired from different perspectives. Since there is inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. Existing techniques generally use block-based disparity compensation. To achieve a higher compression ratio, this paper combines wavelet-based mixed-resolution coding with SPT-based disparity compensation to compress the stereo image data. Mixed-resolution coding is a perceptually justified technique in which one eye is presented with a low-resolution image and the other with a high-resolution image. Psychophysical experiments show that a stereo pair with one high-resolution image and one low-resolution image provides almost the same stereo depth as a pair of two high-resolution images. By combining the mixed-resolution coding and SPT-based disparity-compensation techniques, the reference (left) high-resolution image is compressed by a hierarchical wavelet transform followed by vector quantization and a Huffman encoder. After a two-level wavelet decomposition, a subspace projection technique using fixed-block-size disparity-compensation estimation is applied to the low-resolution left and right images. At the decoder, the low-resolution right subimage is estimated using the disparity from the low-resolution left subimage. A full-size reconstruction is obtained by upsampling by a factor of 4 and reconstructing with the synthesis low-pass filter. Finally, experimental results are presented, showing that the scheme achieves a PSNR gain of about 0.92 dB over current block-based disparity-compensation coding techniques.
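Block-based disparity compensation, which the scheme builds on, can be sketched as a horizontal sum-of-absolute-differences (SAD) search; the block size and search range here are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def block_disparity(left, right, block=4, max_disp=8):
    """Per-block horizontal disparity from right to left image, by SAD minimisation."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = right[y:y + block, x:x + block].astype(int)
            best, best_d = None, 0
            for d in range(0, min(max_disp, x) + 1):   # candidate shifts
                cand = left[y:y + block, x - d:x - d + block].astype(int)
                sad = np.abs(patch - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp
```

At the decoder, each block of the low-resolution right image would be predicted from the left image shifted by its block's disparity.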
Abstract: The aggregation of data in recent years has been expanding at an exponential rate. Various data-generating sources are responsible for this tremendous growth, including social media, video camera footage, wireless and wired sensor network measurements, stock market and other financial transaction data, and supermarket transaction data. Such data may be high-dimensional and big in Volume, Value, Velocity, Variety, and Veracity. Hence, one of the crucial challenges is the storage, processing, and extraction of relevant information from the data. In the special case of image data, image compression techniques may be employed to reduce the dimension and volume of the data so that it is convenient to process and analyze. In this work, we examine a proof-of-concept multiresolution analysis that uses wavelet transforms, a popular mathematical and analytical framework in signal processing and representation, and we study its application to compressing image data in wireless sensor networks. The proposed approach consists of applying a wavelet transform, threshold detection, quantization, data encoding, and finally the inverse transform. The work specifically focuses on multiresolution analysis with wavelet transforms, comparing three wavelets at five decomposition levels. Simulation results are provided to demonstrate the effectiveness of the methodology.
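The transform-then-threshold pipeline can be sketched with a single-level 2-D Haar transform; the wavelet and the threshold are illustrative assumptions, not the three wavelets or five levels compared in the work:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar transform (even-sized float image assumed)."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # horizontal average
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # horizontal detail
    ll = (a[0::2] + a[1::2]) / 2.0            # approximation subband
    lh = (a[0::2] - a[1::2]) / 2.0
    hl = (d[0::2] + d[1::2]) / 2.0
    hh = (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

def compress_ratio(img, thresh):
    """Fraction of detail coefficients zeroed by hard thresholding."""
    _, lh, hl, hh = haar2d(img)
    details = np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()])
    return float((np.abs(details) < thresh).mean())
```

Smooth sensor images concentrate energy in the approximation subband, so most detail coefficients fall below the threshold and compress well.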
Abstract: In this paper, we propose a three-dimensional Set Partitioned Embedded ZeroBlock Coding (3D SPEZBC) lossy-to-lossless compression algorithm for hyperspectral images, which improves on the three-dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm. The algorithm adopts the 3D integer wavelet packet transform proposed by Xiong et al. for decorrelation, set-based partitioning zeroblock coding for bit-plane coding, and context-based adaptive arithmetic coding for further entropy coding. Theoretical analysis and experimental results demonstrate that 3D SPEZBC not only provides the same excellent compression performance as 3D EZBC, but also reduces the memory requirement compared with 3D EZBC. To achieve good coding performance, diverse wavelet filters and unitary scaling factors are compared and evaluated, and the best choices are given. In comparison with several state-of-the-art wavelet coding algorithms, the proposed algorithm provides better compression performance and unsupervised classification accuracy.
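Bit-plane decomposition, the representation on which zeroblock coders operate, can be sketched as follows; the set partitioning and context-based arithmetic coding stages of 3D SPEZBC are omitted, and the plane count is an assumption:

```python
def bitplanes(coeffs, num_planes=8):
    """Return signs plus bit-planes from most- to least-significant."""
    signs = [1 if c < 0 else 0 for c in coeffs]
    mags = [abs(c) for c in coeffs]
    planes = []
    for p in range(num_planes - 1, -1, -1):
        planes.append([(m >> p) & 1 for m in mags])   # extract plane p
    return signs, planes

def from_bitplanes(signs, planes):
    """Invert the decomposition; decoding all planes is lossless, stopping early is lossy."""
    mags = [0] * len(signs)
    for plane in planes:                               # MSB first
        mags = [(m << 1) | b for m, b in zip(mags, plane)]
    return [-m if s else m for m, s in zip(mags, signs)]
```

Transmitting planes most-significant first is exactly what makes the bit stream embedded and lossy-to-lossless.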
Abstract: In this paper, a novel coding method based on fuzzy vector quantization is presented for images corrupted by Gaussian white noise. By suppressing the high-frequency subbands of the wavelet-transformed image, the noise is significantly removed, and the image is coded with fuzzy vector quantization. The experimental results show that the method can not only achieve a high compression ratio but also remove noise dramatically.
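The vector-quantization stage can be sketched on 2x2 blocks; note that hard nearest-codeword assignment is used here as a stand-in for the paper's fuzzy membership weighting, and the codebook is a fixed illustrative assumption rather than a trained one:

```python
import numpy as np

def quantize_blocks(img, codebook, block=2):
    """Replace each block with its nearest codeword (squared-error distance)."""
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            v = img[y:y + block, x:x + block].reshape(-1)
            dists = ((codebook - v) ** 2).sum(axis=1)   # distance to each codeword
            out[y:y + block, x:x + block] = codebook[dists.argmin()].reshape(block, block)
    return out
```

Only the codeword index per block needs to be transmitted, which is where the compression comes from.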
Abstract: This paper investigates approaches to supporting effective and efficient image retrieval based on principal component analysis. First, the image content, texture, and color are extracted: Gabor wavelet transforms are used to extract the texture features of the image, and the average color is used to extract the color features. The principal components of the image feature vectors are then constructed. Content-based image retrieval is performed by comparing the feature vector of the query image with the projections of the database feature vectors onto the principal component space of the query image. This technique reduces the dimensionality of the feature vectors, which in turn reduces the search time.
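The principal-component projection step can be sketched with an SVD; the feature dimensions and data below are illustrative, not the Gabor/color features used in the paper:

```python
import numpy as np

def principal_components(features, k):
    """Rows are feature vectors; return the top-k principal directions."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]                                  # shape (k, dim)

def project(features, components):
    """Project (centered) feature vectors into the k-dimensional PC space."""
    return (features - features.mean(axis=0)) @ components.T
```

Searching in the projected space compares k-dimensional vectors instead of full feature vectors, which is the claimed source of the speed-up.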
Abstract: This paper presents a fractal image-compression algorithm based on the wavelet transform for hyperspectral images. Hyperspectral remote sensing yields a large number of spectral bands, and because large data volumes and limited bandwidth complicate the storage and transmission of data measured at the terabyte level, it is important to compress image data acquired by hyperspectral sensors such as MODIS, PHI, and OMIS. Conventional lossless compression algorithms cannot reach adequate compression ratios, while lossy compression methods can reach high compression ratios but lack good image fidelity, especially for hyperspectral image data. Among the third generation of image compression algorithms, fractal image compression based on the wavelet transform is superior to traditional compression methods because it offers high compression ratios and good image fidelity and requires less computing time. To keep the spectral dimension invariant, the authors compared the results of two compression algorithms based on the BSQ and BIP storage-file structures, and improved the HV and quadtree partitioning and domain-range matching algorithms to accelerate encoding/decoding. The authors' Hyperspectral Image Processing and Analysis System (HIPAS) software used the VC++ 6.0 integrated development environment (IDE), with which good experimental results were obtained. Possible modifications of the algorithm and limitations of the method are also discussed.
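The quadtree partitioning step of fractal coding can be sketched as a variance-driven recursive split; the variance threshold and minimum block size are illustrative assumptions, and the domain-range matching stage is omitted:

```python
import numpy as np

def quadtree(img, y=0, x=0, size=None, thresh=100.0, min_size=2):
    """Split a square image region into homogeneous leaf blocks (y, x, size)."""
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= thresh:
        return [(y, x, size)]                      # homogeneous: keep as one leaf
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):                       # recurse into the four quadrants
            leaves += quadtree(img, y + dy, x + dx, half, thresh, min_size)
    return leaves
```

Smooth regions stay as large blocks while busy regions are subdivided, which is what keeps the range-block count (and hence encoding time) down.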
Funding: Supported by the Natural Science Foundation of Jiangsu Province: Youth Fund (Grant No. BK20170727), the Fundamental Research Funds for the Central Universities (Grant No. KYGX201703), and the Natural Science Foundation of Jiangsu Province: Youth Fund (Grant No. BK20150686).
Abstract: To achieve high-quality image compression of a floral canopy, a region-of-interest (ROI) mask in the wavelet domain was generated through automatic identification of the canopy ROI, and the bit-planes of the ROI were lifted so that the ROI receives coding priority in ROI set partitioning in hierarchical trees (ROI-SPIHT) coding. Embedded zerotree wavelet (EZW) coding was applied to the background (BG) region of the image, so that relatively more low-frequency wavelet coefficients were obtained with a relatively small amount of coding. Through the weighting factor r of the ROI coding amount, the proportion between the ROI and BG coding amounts was dynamically adjusted to generate embedded, truncatable bit streams. Regardless of the truncation point, the image information and ROI mask information required by the decoder are guaranteed, achieving high-quality compression and reconstruction of the image ROI. The results indicated that, at the same bit rate, the larger the value of r, the larger the peak signal-to-noise ratio (PSNR) of the reconstructed ROI and the smaller the PSNR of the reconstructed BG. In the range of 0.07-1.09 bpp, the PSNR of the reconstructed ROI was on average 42.65% higher than that of the reconstructed BG, 43.95% higher than that of the composite image of the ROI and BG (ALL), and 16.84% higher than that of the standard SPIHT reconstruction. Additionally, the mean square error and similarity quality-evaluation indices of the reconstructed ROI were both better than those of the BG, ALL, and standard SPIHT reconstructions. The texture distortion of the ALL image was smaller than that of the SPIHT reconstruction, indicating that the image compression algorithm based on mask hybrid coding for the ROI (ROI-MHC) is capable of improving the reconstruction quality of an ROI image. When the weighting factor r is fixed, the quality of the ROI reconstruction gradually decreases as the proportion of the ROI (a) increases. Therefore, when applying the ROI-MHC image compression algorithm, high-quality reconstruction of the ROI can be achieved by dynamically configuring r according to a. At the same bit rate, the quality of ROI-MHC image compression is higher than that of current compression algorithms of the same class, and the method offers promising application opportunities.
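The bit-plane lifting of ROI coefficients can be sketched as a mask-controlled shift, mirroring the general ROI-scaling idea rather than the paper's exact ROI-MHC coder; the lift amount is an illustrative assumption:

```python
import numpy as np

def lift_roi(coeffs, mask, lift=4):
    """Shift ROI coefficients up by `lift` bit-planes so an embedded coder emits them first."""
    return np.where(mask, coeffs << lift, coeffs)

def unlift_roi(coeffs, mask, lift=4):
    """Decoder side: shift the ROI coefficients back down."""
    return np.where(mask, coeffs >> lift, coeffs)
```

Because lifted ROI coefficients occupy higher bit-planes, any truncation of the embedded stream sacrifices BG quality before ROI quality.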
Funding: Supported by the National Natural Science Foundation of China (Grant No. 60475036).
Abstract: To improve on the low efficiency of classical lossless compression, a high-efficiency method for lossless image compression is presented. Its theory and algorithmic implementation are introduced, and the basic approach to lossless medical image compression is briefly described. After analyzing and implementing differential pulse code modulation (DPCM) for lossless compression, a new method combining an integer wavelet transform with DPCM to compress medical images is discussed. The analysis and simulation results show that this new method is simple and useful; moreover, it achieves a high compression ratio in lossless medical image compression.
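The DPCM stage can be sketched in one dimension: each sample is predicted from its left neighbour and only the residual is stored; with integer arithmetic the round trip is exact, which is the property lossless coding relies on:

```python
def dpcm_encode(samples):
    """Store the prediction residual (current minus previous sample)."""
    prev, residuals = 0, []
    for s in samples:
        residuals.append(s - prev)   # prediction error, typically small
        prev = s
    return residuals

def dpcm_decode(residuals):
    """Rebuild the samples by accumulating the residuals."""
    prev, samples = 0, []
    for r in residuals:
        prev += r
        samples.append(prev)
    return samples

row = [100, 102, 101, 150, 149]
assert dpcm_decode(dpcm_encode(row)) == row   # exact round trip
```

The residuals cluster near zero for smooth image rows, so a subsequent entropy coder compresses them far better than the raw samples.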
Abstract: By organically combining zerotree coding, bit-plane coding, and arithmetic coding, a wavelet image compression algorithm based on zerotrees and bit planes, ZBP (Zerotree and Bit Plane), is proposed. ZBP not only fully exploits the correlation among zerotree symbols, but also uncovers, at the bit-data level, the correlation among wavelet coefficient values, thereby improving the performance of the arithmetic coding. Experimental results show that the compression performance of ZBP is superior to that of existing wavelet image compression algorithms.
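The zerotree test that ZBP builds on can be sketched as a recursive significance check; the dict-based tree below is an illustrative stand-in for the parent-child relation between wavelet subbands:

```python
def is_zerotree(node, coeffs, children, T):
    """A node is a zerotree root at threshold T if it and all descendants are insignificant."""
    if abs(coeffs[node]) >= T:
        return False                                  # significant: not a zerotree
    return all(is_zerotree(c, coeffs, children, T)
               for c in children.get(node, []))       # leaves vacuously satisfy this
```

Coding a whole zerotree with a single symbol is what lets the coder skip large insignificant regions cheaply, and the bit-level correlations ZBP adds refine the arithmetic coder's model on top of this.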