In this paper, the second-generation wavelet transform is applied to lossless image coding, exploiting its characteristic of being a reversible integer wavelet transform. The second-generation wavelet transform provides a higher compression ratio than Huffman coding while, in contrast to the first-generation wavelet transform, reconstructing the image without loss. The experimental results show that the second-generation wavelet transform obtains excellent performance in medical image compression coding.
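The reversibility that makes the second-generation (lifting) wavelet transform suitable for lossless coding can be sketched in a few lines. Below is a minimal, illustrative integer Haar lifting step (the S-transform), not necessarily the filter used in the paper: every operation maps integers to integers and is exactly invertible.

```python
import numpy as np

def forward_lifting(x):
    """One integer Haar (S-transform) lifting step on an even-length signal."""
    s = x[0::2].astype(np.int64)   # even samples
    d = x[1::2].astype(np.int64)   # odd samples
    d = d - s                      # predict: detail = odd - even
    s = s + (d >> 1)               # update: approximation = even + floor(detail/2)
    return s, d

def inverse_lifting(s, d):
    """Undo the lifting step exactly, in reverse order."""
    s = s - (d >> 1)
    d = d + s
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2] = s
    x[1::2] = d
    return x
```

Because each lifting step only adds or subtracts a rounded function of the other channel, the inverse subtracts the same quantity back, so integer pixels are recovered bit-exactly.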
The amount of image data generated in multimedia applications is ever increasing, and image compression plays a vital role in such applications. The ultimate aim of image compression is to reduce storage space without degrading image quality; compression is required whenever the volume of data to be stored or transmitted is huge. A New Edge-Directed Interpolation (NEDI)-based lifting Discrete Wavelet Transform (DWT) scheme with a modified Set Partitioning In Hierarchical Trees (MSPIHT) algorithm is proposed in this paper. The NEDI algorithm gives good visual quality, particularly at edges, and the main objective of this paper is to preserve edges while performing image compression, which is a challenging task. The NEDI with lifting DWT achieves a 99.18% energy level in the low-frequency range, which is 1.07% higher than 5/3 wavelet decomposition and 0.94% higher than the traditional DWT. The NEDI with lifting DWT is combined with the MSPIHT algorithm, which gives a higher Peak Signal-to-Noise Ratio (PSNR) value and a minimum Mean Square Error (MSE), and hence better image quality. The experimental results show that the proposed method gives a better PSNR value (39.40 dB at a rate of 0.9 bpp without arithmetic coding) and a minimum MSE value of 7.4.
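The quoted PSNR and MSE figures are linked by the standard definition of PSNR for 8-bit images (peak value 255); an MSE of 7.4 corresponds to roughly 39.4 dB, consistent with the reported numbers. A minimal sketch:

```python
import numpy as np

def mse_psnr(original, reconstructed, peak=255.0):
    """Mean squared error and peak signal-to-noise ratio between two images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = float(np.mean(diff ** 2))
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    return mse, psnr
```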
To address shortcomings of the SPIHT algorithm, an improved image compression algorithm is proposed. To overcome the drawbacks in decoded image quality and coding time, the LS9/7 lifting wavelet transform is adopted. According to the characteristics of the human visual system (HVS), the scanning mode and the method of determining the threshold of the algorithm are changed to improve the quality of the reconstructed image. To avoid the repeated scanning of the SPIHT algorithm, a maximum-list strategy is used, which greatly reduces the computation and saves operating time. The experimental results prove that the improved algorithm is better than the original in both decoding time and reconstructed image quality, especially at low bit rates.
In this paper, a new image fusion method combining single-layer wavelet transform and compressive sensing is proposed, in which only the high-pass wavelet coefficients of the image are measured while the low-pass wavelet coefficients are preserved. The low-pass wavelet coefficients and the measurements of the high-pass wavelet coefficients are then fused with different schemes. For reconstruction, the high-pass wavelet coefficients are recovered from the fused measurements by total variation (TV) minimization. Finally, the fused image is reconstructed by the inverse wavelet transform. The experiments show the proposed method provides promising fusion performance with low computational complexity.
Conventional quantization index modulation (QIM) watermarking uses a fixed quantization step size for the host signal. This scheme is not robust against geometric distortions and may lead to poor fidelity in some areas of the content. Thus, we propose a quantization-based image watermarking scheme in the dual-tree complex wavelet domain, taking advantage of the properties of the dual-tree complex wavelets (perfect reconstruction, approximate shift invariance, and directional selectivity). For watermark detection, the probability of false alarm and the probability of false negative are derived and verified by simulation. Experimental results demonstrate that the proposed method is robust against JPEG compression, additive white Gaussian noise (AWGN), and some kinds of geometric attacks such as scaling and rotation.
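The fixed-step QIM embedding described above can be sketched for a single scalar coefficient: each bit selects one of two quantizer lattices offset by half the step size delta, and the detector picks the nearer lattice. This is an illustrative baseline, not the paper's dual-tree complex wavelet-domain scheme.

```python
import numpy as np

def qim_embed(x, bit, delta):
    """Quantize x onto the lattice associated with the bit (offset by delta/2 for 1)."""
    offset = delta / 2.0 if bit else 0.0
    return np.round((x - offset) / delta) * delta + offset

def qim_detect(y, delta):
    """Decode by choosing whichever of the two lattices lies closer to y."""
    d0 = abs(y - np.round(y / delta) * delta)
    d1 = abs(y - (np.round((y - delta / 2.0) / delta) * delta + delta / 2.0))
    return int(d1 < d0)
```

A larger delta makes detection more robust to perturbation of y but distorts the host coefficient more, which is the fidelity trade-off the abstract criticizes.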
A simple and adaptive lossless compression algorithm is proposed for remote sensing image compression, comprising an integer wavelet transform and the Rice entropy coder. Based on an analysis of the probability distribution of integer wavelet transform coefficients and the characteristics of the Rice entropy coder, a divide-and-rule method is used for the high-frequency sub-bands and the low-frequency one: high-frequency sub-bands are coded by the Rice entropy coder, and low-frequency coefficients are predicted before coding. The role of the predictor is to map the low-frequency coefficients into symbols suitable for entropy coding. Experimental results show that the average Compression Ratio (CR) of our approach is about two, which is close to that of JPEG 2000. The algorithm is simple and easy to implement in hardware. Moreover, it has the merits of adaptability and independent data packets, so it is suitable for space lossless compression applications.
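The Rice entropy coder mentioned above can be sketched in a few lines: a nonnegative value is split into a quotient, sent in unary, and a k-bit binary remainder. This is an illustrative bit-string version; a real coder would pack bits and first map signed wavelet coefficients to nonnegative integers.

```python
def rice_encode(n, k):
    """Rice code for a nonnegative integer: unary quotient, '0' stop bit, k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    remainder = format(r, "b").zfill(k) if k else ""
    return "1" * q + "0" + remainder

def rice_decode(bits, k):
    """Inverse of rice_encode; returns (value, number of bits consumed)."""
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r, q + 1 + k
```

The parameter k is chosen from the local statistics of the sub-band; that adaptivity is what makes the Rice coder cheap to implement in hardware.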
The paper describes efficient lossy and lossless three-dimensional (3D) compression of hyperspectral images. The method adopts a 3D spatial-spectral hybrid transform and a proposed transform-based coder. The hybrid transform consists of the Karhunen-Loève Transform (KLT), which decorrelates the spectral data of a hyperspectral image, and the integer Discrete Wavelet Transform (DWT), which is applied to the spatial data and produces decorrelated wavelet coefficients. Our simpler transform-based coder is inspired by Shapiro's EZW algorithm, but encodes residual values and implements only the dominant pass, incorporating six symbols. The proposed method is examined on AVIRIS images and evaluated using the compression ratio for both lossless and lossy compression, and the signal-to-noise ratio (SNR) for lossy compression. Experimental results show that the proposed image compression is not only more efficient but also achieves a better compression ratio.
A good compression rate can be achieved by the traditional vector quantization (VQ) method, and the quality of the recovered image is acceptable. However, the decompressed image quality cannot be improved efficiently, so how to balance the compression rate and the recovered image quality is an important issue. In this paper, an image is transformed by the discrete wavelet transform (DWT) to generate a DWT-transformed image, which is then further compressed by the VQ method. Besides, we compute the difference matrix between the DWT-transformed image and the decompressed DWT-transformed image, which serves as the adjustable basis of the decompressed image quality. By controlling the deviation of the difference matrix, the VQ method can be made nearly lossless. Experimental results show that when the number of bits compressed by our method equals the number of bits compressed by the VQ method, the quality of our recovered image is better. Moreover, the proposed method has more compression capability compared with the VQ scheme.
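The difference-matrix idea above — bounding the error of the VQ-decompressed image by transmitting a controlled residual — can be sketched as a generic near-lossless correction with a uniformly quantized residual. This is an illustrative scheme under assumed rules, not the paper's exact deviation control.

```python
import numpy as np

def residual_correct(original, decompressed, tol):
    """Quantize the difference matrix so every pixel error is bounded by tol.

    Returns the coarse residual (to be entropy-coded) and the corrected image.
    """
    step = 2 * tol + 1                            # odd step bounds error by tol
    diff = original.astype(np.int64) - decompressed.astype(np.int64)
    q = np.round(diff / step).astype(np.int64)    # coarse residual matrix
    corrected = decompressed.astype(np.int64) + q * step
    return q, corrected
```

Smaller tol drives the scheme toward lossless at the cost of a larger residual alphabet, which is exactly the rate/quality balance the abstract discusses.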
Data compression is one of the core fields of study for image and video processing applications. The raw data to be transmitted consumes large bandwidth and requires huge storage space; as a result, it is desirable to represent the information in the data with considerably fewer bits by means of data compression techniques, while the data must be reconstituted very similarly to its initial form. In this paper, a hybrid compression based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) is used to enhance the quality of the reconstructed image. These techniques are followed by entropy encoding, such as Huffman coding, to give additional compression. Huffman coding is an optimal prefix code whose implementation is simpler, faster, and easier than other codes; it needs less execution time and achieves the shortest average code length. The measurements for analysis are based upon the Compression Ratio, Mean Square Error (MSE), and Peak Signal-to-Noise Ratio (PSNR). We applied the hybrid algorithm on DWT-DCT blocks of size 2×2, 4×4, 8×8, 16×16, and 32×32. Finally, we show that with the hybrid (DWT-DCT) compression technique, the PSNR of the image reconstructed by the proposed algorithm (DWT-DCT, 8×8 blocks) is considerably higher than that of DCT alone.
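The Huffman stage described above builds an optimal prefix code from symbol frequencies; a compact sketch using a binary heap (illustrative: a real codec would also transmit the code table and pack the bit strings):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Return an optimal prefix code {symbol: bit string} for the given data."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate case: a single symbol
        return {next(iter(freq)): "0"}
    # Heap entries: [weight, tie-breaker, {symbol: partial code}]
    heap = [[n, i, {sym: ""}] for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)             # merge the two lightest subtrees
        hi = heapq.heappop(heap)
        for sym in lo[2]:
            lo[2][sym] = "0" + lo[2][sym]
        for sym in hi[2]:
            hi[2][sym] = "1" + hi[2][sym]
        heapq.heappush(heap, [lo[0] + hi[0], tie, {**lo[2], **hi[2]}])
        tie += 1
    return heap[0][2]
```

Frequent symbols end up near the root and get short codes, which is why Huffman coding minimizes the average code length among prefix codes.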
We studied the variation of image entropy before and after wavelet decomposition, the optimal number of wavelet decomposition layers, and the effect of wavelet bases and image frequency components on entropy. Numerous experiments were done on typical images to calculate (using Matlab) the entropy before and after the wavelet transform. It was verified that, to obtain minimal entropy, a three-layer decomposition should be adopted rather than higher orders. The result achieved by using biorthogonal wavelet decomposition is better than that of orthogonal wavelet decomposition. The results are not directly proportional to the vanishing moment, however.
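The first-order image entropy studied above is computed from the gray-level histogram; a minimal sketch for 8-bit images (Python here rather than the Matlab used in the paper):

```python
import numpy as np

def image_entropy(img):
    """First-order Shannon entropy (bits/pixel) of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()          # gray-level probabilities
    p = p[p > 0]                   # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

Lower entropy of the wavelet coefficients (compared with the raw pixels) is what translates into shorter entropy codes and hence better compression.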
In this paper, we propose a three-dimensional Set Partitioned Embedded ZeroBlock Coding (3D SPEZBC) lossy-to-lossless compression algorithm for hyperspectral images, which improves on the three-dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm. The algorithm adopts the 3D integer wavelet packet transform proposed by Xiong et al. for decorrelation, set-based partitioning zeroblock coding for bitplane coding, and context-based adaptive arithmetic coding for further entropy coding. The theoretical analysis and experimental results demonstrate that 3D SPEZBC not only provides the same excellent compression performance as 3D EZBC, but also reduces the memory requirement compared with 3D EZBC. To achieve good coding performance, diverse wavelet filters and unitary scaling factors are compared and evaluated, and the best choices are given. In comparison with several state-of-the-art wavelet coding algorithms, the proposed algorithm provides better compression performance and unsupervised classification accuracy.
Recently, several digital watermarking techniques have been proposed for hiding data in the frequency domain of moving image files to protect their copyrights. However, in order to detect the watermarking sufficiently after heavy compression, it is necessary to insert the watermarking with strong intensity into a moving image, and this results in visible deterioration of the moving image. We previously proposed an authentication method using a discrete wavelet transform for a digital static image file. In contrast to digital watermarking, no additional information is inserted into the original static image in the previously proposed method, and the image is authenticated by features extracted by the wavelet transform and characteristic coding. In the present study, we developed an authentication method for a moving image by using the previously proposed method for a static image and a newly proposed method for selecting several frames in the moving image. No additional information is inserted into the original moving image by the newly proposed method or into the original static image by the previously proposed method. The experimental results show that the proposed method has a high tolerance to both compression and malicious attacks.
With the advances of display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of stimulating 3-D perception is to use stereo pairs: a pair of images of the same scene acquired from different perspectives. Since there is an inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. Existing techniques generally use block-based disparity compensation. In order to get a higher compression ratio, this paper employs the wavelet-based mixed-resolution coding technique together with SPT-based disparity compensation to compress the stereo image data. Mixed-resolution coding is a perceptually justified technique that presents one eye with a low-resolution image and the other with a high-resolution image. Psychophysical experiments show that stereo image pairs with one high-resolution image and one low-resolution image provide almost the same stereo depth as a stereo pair with two high-resolution images. By combining the mixed-resolution coding and SPT-based disparity-compensation techniques, the reference (left) high-resolution image can be compressed by a hierarchical wavelet transform followed by vector quantization and a Huffman encoder. After two levels of wavelet decomposition, the subspace projection technique using fixed-block-size disparity compensation estimation is applied to the low-resolution right image and low-resolution left image. At the decoder, the low-resolution right subimage is estimated using the disparity from the low-resolution left subimage. A full-size reconstruction is obtained by upsampling by a factor of 4 and reconstructing with the synthesis low-pass filter. Finally, experimental results are presented, which show that our scheme achieves a PSNR gain of about 0.92 dB compared with current block-based disparity compensation coding techniques.
The watermarking technique has been proposed as a method of hiding secret information in an image to protect the copyright of multimedia data, but most previous work focuses on algorithms for embedding one-dimensional watermarks or two-dimensional binary digital watermarks. In this paper, a wavelet-based method for embedding a gray-level digital watermark into an image is proposed. Using a still-image decomposition technique, the gray-level digital watermark is decomposed into a series of bitplanes. Using the discrete wavelet transform (DWT), the host image is decomposed into multiresolution representations with a hierarchical structure. The different bitplanes of the gray-level watermark are embedded into the corresponding resolutions of the decomposed host image. The experimental results show that the proposed techniques can successfully survive image processing operations and lossy compression techniques such as Joint Photographic Experts Group (JPEG) compression.
A floating-point wavelet-based and an integer wavelet-based image interpolation in lifting structures, combined with polynomial curve fitting, are proposed in this paper for image resolution enhancement. The proposed prediction methods estimate the high-frequency wavelet coefficients of the original image from the available low-frequency wavelet coefficients, so that the original image can be reconstructed. To further improve the reconstruction performance, we use polynomial curve fitting to build relationships between actual high-frequency wavelet coefficients and estimated high-frequency wavelet coefficients. Results of the proposed prediction algorithm for different wavelet transforms are compared to show that it outperforms other methods.
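The polynomial curve fitting step above — building a mapping from estimated to actual high-frequency coefficients — can be sketched with `numpy.polyfit`. The data, the assumed quadratic relationship, and the fitting degree below are illustrative stand-ins, not values from the paper.

```python
import numpy as np

# Synthetic stand-ins: estimated coefficients and the "actual" ones to match.
estimated = np.linspace(-4.0, 4.0, 41)
actual = 0.9 * estimated + 0.05 * estimated ** 2   # assumed true relationship

# Fit a low-degree polynomial mapping estimated -> actual, then apply it.
coeffs = np.polyfit(estimated, actual, deg=2)
corrected = np.polyval(coeffs, estimated)
```

In practice the fit would be learned on training images and then applied to the predicted coefficients at reconstruction time.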
This paper starts with a fractal image-compression algorithm based on wavelet transformation for hyperspectral images, which offer many more spectral bands through hyperspectral remote sensing. Because large amounts of data and limited bandwidth complicate the storage and transmission of data measured at the terabyte level, it is important to compress image data acquired by hyperspectral sensors such as MODIS, PHI, and OMIS; conventional lossless compression algorithms cannot reach adequate compression ratios, while other lossy compression methods can reach high compression ratios but lack good image fidelity, especially for hyperspectral image data. Among the third generation of image compression algorithms, fractal image compression based on wavelet transformation is superior to traditional compression methods, because it has high compression ratios and good image fidelity, and requires less computing time. To keep the spectral dimension invariable, the authors compared the results of two compression algorithms based on the storage-file structures of BSQ and of BIP, and improved the HV and quadtree partitioning and domain-range matching algorithms in order to accelerate their encode/decode efficiency. The authors' Hyperspectral Image Process and Analysis System (HIPAS) software used a VC++ 6.0 integrated development environment (IDE), with which good experimental results were obtained. Possible modifications of the algorithm and limitations of the method are also discussed.
A new fractal image compression algorithm based on high-frequency energy (HFE) partitioning and matched domain block searching is presented to code synthetic aperture radar (SAR) imagery. In the hybrid coding algorithm, the original SAR image is decomposed into low-frequency components and high-frequency components by the wavelet transform (WT). The coder then uses the HFE of each block to partition the image and to search the matched domain block for each range block when coding the low-frequency components. For the high-frequency components, a modified embedded zero-tree wavelet coding algorithm is applied. Experimental results show that the proposed coder obtains about 0.3 dB gain compared with the traditional fractal coder based on quadtree partitioning. Moreover, the subjective visual quality of the reconstructed SAR image from the proposed coder outperforms that of traditional fractal coders at the same compression ratio (CR).
In this paper, a novel coding method based on fuzzy vector quantization is presented for images corrupted by Gaussian white noise. By restraining the high-frequency subbands of the wavelet image, the noise is significantly removed, and the image is coded with fuzzy vector quantization. The experimental results show that the method can not only achieve a high compression ratio but also remove noise dramatically.
Motivated by the wavelet transform, this paper presents a pyramid linear prediction coding (PLPC) algorithm for digital images. The algorithm outputs the rough contour of an image and a prediction error sequence. In contrast to the conventional linear prediction method, PLPC exhibits very little sensitivity to channel errors and provides a more efficient compression performance. The results of simulations with Lena 512×512 at bit rates ranging from 0.17 to 3.2 (lossless) bits/pixel are given to show that the PLPC method is very well suited to human visual perception.
To improve on the low efficiency of classical lossless compression, a highly efficient method of lossless image compression is presented; its theory and algorithm implementation are introduced. The basic approach to medical image lossless compression is then briefly described. After analyzing and implementing differential pulse code modulation (DPCM) in lossless compression, a new method combining an integer wavelet transform with DPCM to compress medical images is discussed. The analysis and simulation results show that this new method is simple and useful. Moreover, it has a high compression ratio in medical image lossless compression.
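The DPCM stage above predicts each sample from its neighbor and codes only the residual; a minimal 1-D sketch with a left-neighbor predictor (the paper combines DPCM with an integer wavelet transform, which is omitted here):

```python
import numpy as np

def dpcm_encode(samples):
    """Residuals of a left-neighbor predictor; the first residual is the first sample."""
    s = samples.astype(np.int64)
    return np.diff(s, prepend=0)

def dpcm_decode(residuals):
    """Invert DPCM by accumulating the residuals."""
    return np.cumsum(residuals)
```

Because neighboring pixels are strongly correlated, the residuals cluster around zero and are much cheaper to entropy-code than the raw samples.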
Funding (second-generation wavelet lossless coding paper): Supported by the National Natural Science Foundation of China (69875009).
Funding (dual-tree complex wavelet watermarking paper): Supported by a grant from the National High Technology Research and Development Program of China (863 Program) (No. 2008AA04A107) and a grant from the Major Programs of Guangdong-Hongkong in the Key Domain (No. 2009498B21).
Funding: Supported by the Natural Science Foundation of China (No. 60472037).
Abstract: We studied the variation of image entropy before and after wavelet decomposition, the optimal number of wavelet decomposition levels, and the effect of wavelet bases and image frequency components on entropy. Numerous experiments were done on typical images to calculate (using Matlab) the entropy before and after the wavelet transform. It was verified that, to obtain minimal entropy, a three-level decomposition should be adopted rather than a deeper one. The result achieved with biorthogonal wavelet decomposition is better than that of orthogonal wavelet decomposition; the results are not directly proportional to the vanishing moment, however.
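The entropy reduction described above can be checked on a toy signal. A one-level integer Haar step stands in for the biorthogonal wavelets the study actually compares; on smooth data the detail coefficients cluster near zero, lowering the first-order entropy:

```python
# Shannon entropy of a signal's values before and after one level of an
# integer Haar (S-transform) decomposition. Haar is an illustrative
# stand-in, not the study's wavelet basis.
import math
from collections import Counter

def entropy(values):
    """First-order Shannon entropy in bits per symbol."""
    freq = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in freq.values())

def haar_step(x):
    """One level of the integer Haar decomposition."""
    approx, detail = [], []
    for a, b in zip(x[0::2], x[1::2]):
        d = b - a
        s = a + d // 2      # integer average
        approx.append(s)
        detail.append(d)
    return approx + detail

signal = [i // 2 for i in range(64)]   # smooth ramp: 0,0,1,1,2,2,...
transformed = haar_step(signal)
# The ramp has 32 distinct values; the transform concentrates the
# detail band at zero, so the value distribution is more peaked.
assert entropy(transformed) < entropy(signal)
```

Iterating `haar_step` on the approximation band gives deeper decompositions, which is how the level-count comparison in the study would be run.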
Abstract: In this paper, we propose a three-dimensional Set Partitioned Embedded ZeroBlock Coding (3D SPEZBC) lossy-to-lossless compression algorithm for hyperspectral images, which improves on the three-dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm. The algorithm adopts the 3D integer wavelet packet transform proposed by Xiong et al. for decorrelation, set-based partitioning zeroblock coding for bit-plane coding, and context-based adaptive arithmetic coding for further entropy coding. Theoretical analysis and experimental results demonstrate that 3D SPEZBC not only provides the same excellent compression performance as 3D EZBC, but also reduces the memory requirement compared with 3D EZBC. To achieve good coding performance, diverse wavelet filters and unitary scaling factors were compared and evaluated, and the best choices are given. In comparison with several state-of-the-art wavelet coding algorithms, the proposed algorithm provides better compression performance and unsupervised classification accuracy.
Abstract: Recently, several digital watermarking techniques have been proposed for hiding data in the frequency domain of moving image files to protect their copyrights. However, in order to detect the watermark reliably after heavy compression, it is necessary to insert the watermark into a moving image with strong intensity, and this results in visible deterioration of the moving image. We previously proposed an authentication method using a discrete wavelet transform for a digital static image file. In contrast to digital watermarking, no additional information is inserted into the original static image in that method; the image is authenticated by features extracted by the wavelet transform and characteristic coding. In the present study, we developed an authentication method for a moving image by combining the previously proposed method for a static image with a newly proposed method for selecting several frames in the moving image. No additional information is inserted into the original moving image by the newly proposed method or into the original static images by the previously proposed method. The experimental results show that the proposed method has a high tolerance of authentication to both compression and malicious attacks.
Funding: This project was supported by the National Natural Science Foundation (No. 69972027).
Abstract: With the advances of display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of stimulating 3-D perception is to use stereo pairs: a pair of images of the same scene acquired from different perspectives. Since there is an inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. Existing techniques generally use block-based disparity compensation. In order to obtain a higher compression ratio, this paper combines the wavelet-based mixed-resolution coding technique with SPT-based disparity compensation to compress the stereo image data. Mixed-resolution coding is a perceptually justified technique achieved by presenting one eye with a low-resolution image and the other with a high-resolution image. Psychophysical experiments show that stereo pairs with one high-resolution image and one low-resolution image provide almost the same stereo depth as a pair with two high-resolution images. By combining the mixed-resolution coding and SPT-based disparity-compensation techniques, the reference (left) high-resolution image is compressed by a hierarchical wavelet transform followed by vector quantization and a Huffman encoder. After two levels of wavelet decomposition, the subspace projection technique with fixed-block-size disparity-compensation estimation is applied to the low-resolution right and left images. At the decoder, the low-resolution right subimage is estimated using the disparity from the low-resolution left subimage. A full-size reconstruction is obtained by upsampling by a factor of 4 and reconstructing with the synthesis low-pass filter. Finally, experimental results are presented, showing that our scheme achieves a PSNR gain of about 0.92 dB over current block-based disparity-compensation coding techniques.
Abstract: The watermarking technique has been proposed as a method of hiding secret information in an image to protect the copyright of multimedia data, but most previous work focuses on algorithms for embedding one-dimensional watermarks or two-dimensional binary digital watermarks. In this paper, a wavelet-based method for embedding a gray-level digital watermark into an image is proposed. By a still-image decomposition technique, the gray-level digital watermark is decomposed into a series of bitplanes. By the discrete wavelet transform (DWT), the host image is decomposed into a multiresolution representation with hierarchical structure. The different bitplanes of the gray-level watermark are embedded into the corresponding resolutions of the decomposed host image. The experimental results show that the proposed technique can successfully survive image processing operations and lossy compression techniques such as Joint Photographic Experts Group (JPEG) coding.
Abstract: A floating-point wavelet-based and an integer wavelet-based image interpolation method in lifting structures, with polynomial curve fitting for image resolution enhancement, are proposed in this paper. The proposed prediction methods estimate the high-frequency wavelet coefficients of the original image from the available low-frequency wavelet coefficients, so that the original image can be reconstructed. To further improve the reconstruction performance, polynomial curve fitting is used to build relationships between actual high-frequency wavelet coefficients and estimated high-frequency wavelet coefficients. Results of the proposed prediction algorithm for different wavelet transforms are compared to show that it outperforms other methods.
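The curve-fitting correction step can be sketched as follows. A first-degree fit via closed-form least squares is used for illustration (the paper's polynomial degree is not stated here), and the coefficient values are hypothetical:

```python
# Fit a polynomial mapping estimated high-frequency coefficients to the
# actual ones, then apply it to refine the estimates. Degree 1 and the
# sample coefficient values are illustrative assumptions.

def fit_line(xs, ys):
    """Closed-form least squares for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

estimated = [1.0, 2.0, 3.0, 4.0]   # predictor output (hypothetical)
actual = [2.1, 4.0, 6.1, 7.9]      # true coefficients (hypothetical)
a, b = fit_line(estimated, actual)
refined = [a * x + b for x in estimated]

err_before = sum((e - t) ** 2 for e, t in zip(estimated, actual))
err_after = sum((r - t) ** 2 for r, t in zip(refined, actual))
# The fitted correction brings the estimates closer to the true values.
assert err_after < err_before
```

In practice the fit would be trained on coefficients from reference images and then applied to the estimates produced by the lifting prediction.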
Abstract: This paper presents a fractal-based image-compression algorithm based on wavelet transformation for hyperspectral images. Hyperspectral remote sensing yields many spectral bands, and because large amounts of data and limited bandwidth complicate the storage and transmission of data measured at the TB level, it is important to compress image data acquired by hyperspectral sensors such as MODIS, PHI, and OMIS. Conventional lossless compression algorithms cannot reach adequate compression ratios, while other lossy-compression methods can reach high compression ratios but lack good image fidelity, especially for hyperspectral image data. Among the third generation of image compression algorithms, fractal image compression based on wavelet transformation is superior to traditional compression methods, because it has high compression ratios and good image fidelity, and requires less computing time. To keep the spectral dimension invariable, the authors compared the results of two compression algorithms based on the BSQ and BIP storage-file structures, and improved the HV and quadtree partitioning and domain-range matching algorithms in order to accelerate encoding and decoding. The authors' Hyperspectral Image Process and Analysis System (HIPAS) software used a VC++ 6.0 integrated development environment (IDE), with which good experimental results were obtained. Possible modifications of the algorithm and limitations of the method are also discussed.
Funding: Supported by the National Natural Science Foundation of China (No. 90304003) and the President Fund of GUCAS (No. O85101HM03).
Abstract: A new fractal image compression algorithm based on high-frequency energy (HFE) partitioning and matched domain-block searching is presented to code synthetic aperture radar (SAR) imagery. In the hybrid coding algorithm, the original SAR image is decomposed into low-frequency and high-frequency components by the wavelet transform (WT). The coder then uses the HFE of each block to partition and to search the matched domain block for each range block when coding the low-frequency components. For the high-frequency components, a modified embedded zero-tree wavelet coding algorithm is applied. Experimental results show that the proposed coder obtains about 0.3 dB gain compared to the traditional fractal coder based on quadtree partitioning. Moreover, the subjective visual quality of the SAR image reconstructed by the proposed coder outperforms that of traditional fractal coders at the same compression ratio (CR).
Abstract: In this paper a novel coding method based on fuzzy vector quantization is presented for images corrupted by Gaussian white noise. By restraining the high-frequency subbands of the wavelet-transformed image, the noise is significantly removed, and the image is then coded with fuzzy vector quantization. The experimental results show that the method can not only achieve a high compression ratio but also remove noise dramatically.
Abstract: Motivated by the wavelet transform, this paper presents a pyramid linear prediction coding (PLPC) algorithm for digital images. The algorithm outputs the rough contour of an image and a prediction error sequence. In contrast to the conventional linear prediction method, PLPC exhibits very little sensitivity to channel errors and provides more efficient compression performance. Results of simulations with the 512×512 Lena image at bit rates ranging from 0.17 to 3.2 (lossless) bits/pixel are given to show that the PLPC method is well suited to human visual perception.
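The "rough contour plus prediction errors" structure can be sketched with a simplified one-level pyramid: a coarse level is built by averaging sample pairs, each sample is predicted from its coarse parent, and only the residuals are kept alongside the coarse level. This is a toy stand-in for PLPC, not the paper's actual predictor:

```python
# Simplified one-level pyramid prediction: coarse contour + residuals
# reconstruct the signal exactly. The pair-averaging predictor is an
# illustrative assumption.

def pyramid_encode(x):
    coarse = [(a + b) // 2 for a, b in zip(x[0::2], x[1::2])]
    # Residual: each sample minus its coarse-level parent's prediction.
    residual = [xi - coarse[i // 2] for i, xi in enumerate(x)]
    return coarse, residual

def pyramid_decode(coarse, residual):
    return [coarse[i // 2] + r for i, r in enumerate(residual)]

row = [50, 52, 60, 57, 55, 54, 49, 51]     # one image row
coarse, residual = pyramid_encode(row)
assert pyramid_decode(coarse, residual) == row   # exact reconstruction
# Residuals cluster near zero, which makes them cheaper to entropy-code
# than the raw samples.
assert max(abs(r) for r in residual) < max(row)
```

A channel error in one residual corrupts only that sample rather than propagating along the scan line, which is the robustness property the abstract claims for PLPC over conventional linear prediction.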
Funding: Supported by the National Natural Science Foundation of China (Grant No. 60475036).
Abstract: To improve on the low efficiency of classical lossless compression, a high-efficiency image lossless compression method is presented, and its theory and algorithm implementation are introduced. The basic approach to medical image lossless compression is then briefly described. After analyzing and implementing differential pulse code modulation (DPCM) in lossless compression, a new method combining an integer wavelet transform with DPCM to compress medical images is discussed. The analysis and simulation results show that this new method is simple and useful; moreover, it achieves a high compression ratio in medical image lossless compression.
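The lossless pipeline described above can be sketched minimally: an integer Haar (S-transform) lifting step followed by first-order DPCM on the approximation band. Both stages are exactly invertible, so the round trip reconstructs the input bit-for-bit. Haar and first-order DPCM are illustrative choices, not necessarily the paper's filters:

```python
# Integer lifting wavelet + DPCM: every step maps integers to integers
# and is exactly invertible, which is what makes the scheme lossless.

def s_transform(x):
    d = [b - a for a, b in zip(x[0::2], x[1::2])]       # detail
    s = [a + di // 2 for a, di in zip(x[0::2], d)]      # integer average
    return s, d

def s_inverse(s, d):
    out = []
    for si, di in zip(s, d):
        a = si - di // 2
        out += [a, a + di]
    return out

def dpcm(seq):
    """First-order DPCM: keep differences from the previous sample."""
    return [seq[0]] + [b - a for a, b in zip(seq, seq[1:])]

def dpcm_inv(res):
    out = [res[0]]
    for r in res[1:]:
        out.append(out[-1] + r)
    return out

pixels = [52, 55, 61, 66, 70, 61, 64, 73]     # one image row
s, d = s_transform(pixels)
coded = (dpcm(s), d)                          # what would be entropy-coded
recon = s_inverse(dpcm_inv(coded[0]), coded[1])
assert recon == pixels                        # lossless round trip
```

Because `//` is floor division, the `di // 2` term cancels exactly in the inverse, so no rounding error accumulates; this integer-to-integer property is what the abstracts in this listing mean by a reversible wavelet transform.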