To utilize residual redundancy to reduce the errors induced by fading channels, and to decrease the complexity of the field model that describes the probability structure of residual redundancy, a simplified statistical model for residual redundancy and a low-complexity joint source-channel decoding (JSCD) algorithm are proposed. The complicated residual redundancy in wavelet-compressed images is decomposed into several independent 1-D probability check equations composed of Markov chains and is regarded as a natural channel code with a structure similar to the low-density parity check (LDPC) code. A parallel sum-product (SP) iterative JSCD algorithm is proposed. Simulation results show that the proposed JSCD algorithm makes full use of residual redundancy in different directions to correct errors, improves the peak signal-to-noise ratio (PSNR) of the reconstructed image, and reduces the complexity and delay of JSCD. At the same data rate, the performance of JSCD is more robust than that of a traditional separate coding system with arithmetic coding.
A simple and adaptive lossless compression algorithm is proposed for remote sensing image compression, which includes an integer wavelet transform and the Rice entropy coder. By analyzing the probability distribution of integer wavelet transform coefficients and the characteristics of the Rice entropy coder, a divide-and-rule strategy is applied to the high-frequency and low-frequency sub-bands: high-frequency sub-bands are coded directly by the Rice entropy coder, while low-frequency coefficients are predicted before coding. The role of the predictor is to map the low-frequency coefficients into symbols suitable for entropy coding. Experimental results show that the average Compression Ratio (CR) of our approach is about two, which is close to that of JPEG 2000. The algorithm is simple and easy to implement in hardware. Moreover, it has the merits of adaptability and independent data packets, so it is well suited to lossless compression applications in space.
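The Rice entropy coder mentioned above is a standard Golomb-Rice code. As a generic illustration (not the paper's implementation), encoding a non-negative integer with parameter k sends the quotient n >> k in unary, followed by the k-bit remainder:

```python
def rice_encode(values, k):
    """Encode non-negative integers with a Rice code of parameter k.
    Quotient goes out in unary (q ones then a 0), remainder in k bits."""
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.extend([1] * q + [0])                              # unary quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))   # k-bit remainder
    return bits

def rice_decode(bits, k, count):
    """Decode `count` integers from a Rice-coded bit list."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:     # read unary quotient
            q += 1
            i += 1
        i += 1                  # skip the terminating 0
        r = 0
        for _ in range(k):      # read k-bit remainder
            r = (r << 1) | bits[i]
            i += 1
        out.append((q << k) | r)
    return out

data = [0, 3, 7, 12, 5]
coded = rice_encode(data, k=2)
assert rice_decode(coded, k=2, count=len(data)) == data
```

The single parameter k is what makes the coder cheap to adapt per block, which fits the hardware-friendliness claim above.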
The paper describes an efficient lossy and lossless three-dimensional (3D) compression method for hyperspectral images. The method adopts a 3D spatial-spectral hybrid transform and a proposed transform-based coder. The hybrid transform combines the Karhunen-Loève Transform (KLT), which decorrelates the spectral data of a hyperspectral image, with the integer Discrete Wavelet Transform (DWT), which is applied to the spatial data and produces decorrelated wavelet coefficients. Our simpler transform-based coder is inspired by Shapiro's EZW algorithm, but encodes residual values and implements only the dominant pass, using six symbols. The proposed method is examined on AVIRIS images and evaluated by compression ratio for both lossless and lossy compression, and by signal-to-noise ratio (SNR) for lossy compression. Experimental results show that the proposed compression method is not only more efficient but also achieves a better compression ratio.
This paper presents a novel method combining wavelets with particle swarm optimization (PSO) for medical image compression. Our method uses PSO to overcome the wavelet discontinuity that occurs when images are compressed by thresholding. It transforms images into sub-band details and approximations using a modified Haar wavelet (MHW), and then applies a threshold. PSO is applied to select a particle assigned to the threshold values for the sub-bands. Nine positions assigned to particle values are used to represent the population. Every particle updates its position depending on the global best position (gbest, over all detail sub-bands) and the local best position (pbest, within a sub-band). A fitness criterion terminates PSO when the difference between two successive local best (pbest) values is smaller than a prescribed value. The experiments are applied to five different types of medical images, including MRI, CT, and X-ray. Results show that the proposed algorithm compresses medical images more effectively than other existing wavelet techniques in terms of peak signal-to-noise ratio (PSNR) and compression ratio (CR).
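The PSO search described above can be sketched generically. In this minimal version the objective, bounds, and PSO constants are illustrative assumptions, not the paper's; only the nine-particle population and the stall-based stopping rule echo the abstract:

```python
import random

def pso_threshold(objective, lo, hi, n_particles=9, iters=100,
                  w=0.7, c1=1.5, c2=1.5, tol=1e-9):
    """Minimise objective(t) over [lo, hi] with a basic particle swarm.
    Stops after the global best stalls for several iterations."""
    random.seed(0)                                  # deterministic for the demo
    x = [random.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]
    pval = [objective(t) for t in x]
    g = min(range(n_particles), key=pval.__getitem__)
    gbest, gval = pbest[g], pval[g]
    stall = 0
    for _ in range(iters):
        prev = gval
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # velocity pulls each particle toward its pbest and the gbest
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] = min(hi, max(lo, x[i] + v[i]))    # clamp to the search range
            f = objective(x[i])
            if f < pval[i]:
                pbest[i], pval[i] = x[i], f
                if f < gval:
                    gbest, gval = x[i], f
        stall = stall + 1 if prev - gval < tol else 0
        if stall >= 5:                              # gbest stopped improving
            break
    return gbest

# toy objective standing in for a sub-band's distortion as a function of t
best = pso_threshold(lambda t: (t - 12.5) ** 2, 0.0, 100.0)
assert abs(best - 12.5) < 2.0
```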
In this paper, the second generation wavelet transform is applied to lossless image coding, exploiting its characteristic reversible integer wavelet transform. Compared with the first generation wavelet transform, the second generation wavelet transform can provide a higher compression ratio than Huffman coding while still reconstructing the image without loss. The experimental results show that the second generation wavelet transform achieves excellent performance in medical image compression coding.
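The reversibility that makes lossless coding possible comes from performing the lifting steps in integer arithmetic. A minimal sketch of the well-known integer 5/3 (LeGall) lifting transform on a 1-D signal, with simple boundary mirroring (not necessarily the variant used in the paper):

```python
def lifting_53_forward(x):
    """One level of the reversible integer 5/3 (LeGall) lifting transform.
    Returns (approximation, detail); input length must be even."""
    n = len(x)
    # predict step: odd samples minus the floor-average of even neighbours
    d = []
    for i in range(n // 2):
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]   # mirror at edge
        d.append(x[2 * i + 1] - ((left + right) >> 1))
    # update step: even samples plus weighted detail neighbours
    s = []
    for i in range(n // 2):
        dl = d[i - 1] if i > 0 else d[i]                      # mirror at edge
        s.append(x[2 * i] + ((dl + d[i] + 2) >> 2))
    return s, d

def lifting_53_inverse(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):
        dl = d[i - 1] if i > 0 else d[i]
        x[2 * i] = s[i] - ((dl + d[i] + 2) >> 2)
    for i in range(len(d)):
        left = x[2 * i]
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        x[2 * i + 1] = d[i] + ((left + right) >> 1)
    return x

x = [10, 12, 14, 11, 9, 200, 8, 7]
s, d = lifting_53_forward(x)
assert lifting_53_inverse(s, d) == x   # perfect (lossless) reconstruction
```

Because each step adds or subtracts the same integer expression, the inverse cancels it exactly, with no rounding loss.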
The amount of image data generated in multimedia applications is ever increasing, and image compression plays a vital role in such applications. The ultimate aim of image compression is to reduce storage space without degrading image quality; compression is required whenever the data handled is huge and must be transmitted or stored. A New Edge Directed Interpolation (NEDI)-based lifting Discrete Wavelet Transform (DWT) scheme with a modified Set Partitioning In Hierarchical Trees (MSPIHT) algorithm is proposed in this paper. The NEDI algorithm gives good visual quality, particularly at edges, and the main objective of this paper is to preserve edges while performing image compression, which is a challenging task. NEDI with lifting DWT concentrates 99.18% of the energy in the low-frequency range, which is 1.07% higher than the 5/3 wavelet decomposition and 0.94% higher than the traditional DWT. Combining NEDI-based lifting DWT with the MSPIHT algorithm gives a higher Peak Signal to Noise Ratio (PSNR) and a lower Mean Square Error (MSE), and hence better image quality. The experimental results show that the proposed method gives a better PSNR value (39.40 dB at 0.9 bpp without arithmetic coding) and a minimum MSE value of 7.4.
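As a quick sanity check (not taken from the paper), PSNR and MSE for 8-bit images are related by PSNR = 10·log10(255²/MSE), and the two figures reported above agree to within rounding:

```python
import math

def psnr_from_mse(mse, peak=255.0):
    """PSNR in dB for 8-bit images: 10 * log10(peak^2 / MSE)."""
    return 10.0 * math.log10(peak * peak / mse)

# The reported MSE of 7.4 implies about 39.44 dB, consistent with the
# reported 39.40 dB to within rounding of the published figures.
print(round(psnr_from_mse(7.4), 2))
```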
Currently, in multimedia and image processing technologies, implementing special kinds of image manipulation operations directly on the compressed image is a problem worthy of attention. Theoretical analysis and experiments have indicated that some kinds of image processing can be done very well on compressed images. In this paper, we give some efficient image manipulation algorithms that operate directly on compressed image data. These algorithms have advantages in computational complexity, storage space requirements, and image quality.
An effective processing method for biomedical images, a Fuzzy C-means (FCM) algorithm based on the wavelet transform, is investigated. By hierarchical wavelet decomposition, an original image is decomposed into one lower-resolution approximation image and several detail images. Segmentation starts at the lowest resolution with the FCM clustering algorithm, using texture features extracted from the various sub-bands. With the improved FCM algorithm, the number of FCM iterations is decreased and the segmentation accuracy is improved.
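The FCM clustering step itself is standard. A minimal sketch of fuzzy C-means on 1-D feature values (a generic textbook version, not the paper's wavelet-domain variant; the initialization is an assumption):

```python
def fcm_1d(data, c=2, m=2.0, iters=100, eps=1e-6):
    """Basic fuzzy C-means on 1-D data. Returns (centers, memberships)."""
    lo, hi = min(data), max(data)
    # spread initial centers across the data range (illustrative choice)
    centers = [lo + (hi - lo) * (j + 1) / (c + 1) for j in range(c)]
    u = [[0.0] * c for _ in data]
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        for i, x in enumerate(data):
            dists = [abs(x - cj) or 1e-12 for cj in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((dists[j] / dk) ** (2.0 / (m - 1.0))
                                    for dk in dists)
        # center update: mean of the data weighted by u_ij^m
        new_centers = []
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(data))]
            new_centers.append(sum(wi * x for wi, x in zip(w, data)) / sum(w))
        done = max(abs(a - b) for a, b in zip(centers, new_centers)) < eps
        centers = new_centers
        if done:
            break
    return centers, u

data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]   # two obvious clusters
centers, u = fcm_1d(data, c=2)
assert abs(min(centers) - 1.0) < 0.5 and abs(max(centers) - 8.0) < 0.5
```

Unlike hard k-means, each point keeps a graded membership in every cluster, which is what the sub-band texture features feed into.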
Based on research in recent years on image compression with wavelet analysis, we put forward an adaptive wavelet decomposition strategy: whether a sub-image is to be decomposed further is decided by its energy, defined by a certain criterion. From this we derive the adaptive wavelet decomposition tree (AWDT) and a way of making the compression ratio adjustable. According to the features of the AWDT, this paper also deals with the strategies used to handle different sub-images in the quantization and coding of the wavelet coefficients. Experiments show that the algorithm not only adapts to various images, but also improves the quality of the recovered image even though the compression ratio is higher and adjustable. At comparable compression ratios, both the subjective visual quality and the PSNR of the algorithm are better than those of the JPEG algorithm.
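One plausible reading of the energy criterion (the paper's exact measure is not given here) is to descend into a sub-band only when its share of the total energy exceeds a threshold. A 1-D sketch with a Haar split, where the ratio and minimum length are illustrative assumptions:

```python
import math

def haar_split(x):
    """One 1-D Haar analysis step: (approximation, detail)."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def adaptive_tree(x, min_len=2, ratio=0.05, depth=0):
    """Decompose recursively, but only descend into a sub-band whose
    energy share exceeds `ratio` -- a hypothetical AWDT-style criterion."""
    a, d = haar_split(x)
    total = sum(v * v for v in x) or 1.0
    tree = {"depth": depth}
    for name, band in (("A", a), ("D", d)):
        share = sum(v * v for v in band) / total
        if len(band) >= min_len * 2 and share > ratio:
            tree[name] = adaptive_tree(band, min_len, ratio, depth + 1)
        else:
            tree[name] = {"depth": depth + 1}   # leaf: stop decomposing
    return tree

smooth = [float(i) for i in range(16)]   # ramp: energy concentrates in A
tree = adaptive_tree(smooth)
assert "A" in tree["A"]          # high-energy branch was decomposed further
assert "A" not in tree["D"]      # low-energy detail branch became a leaf
```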
To address the shortcomings of the SPIHT algorithm in decoded image quality and coding time, an improved image compression algorithm is proposed, adopting the LS9/7 lifting wavelet transform. According to the characteristics of the human visual system (HVS), the scanning mode and the method for determining the threshold of the algorithm are changed to improve the quality of the reconstructed image. To avoid the repeated scanning of the SPIHT algorithm, a maximum-list approach is used, which greatly reduces computation and saves running time. Experimental results show that the improved algorithm outperforms the original in both decoding time and reconstructed image quality, especially at low bit rates.
In this paper, we propose a three-dimensional Set Partitioned Embedded ZeroBlock Coding (3D SPEZBC) lossy-to-lossless compression algorithm for hyperspectral images, which improves on the three-dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm. The algorithm adopts the 3D integer wavelet packet transform proposed by Xiong et al. for decorrelation, set-based partitioning zeroblock coding for bitplane coding, and context-based adaptive arithmetic coding for further entropy coding. Theoretical analysis and experimental results demonstrate that 3D SPEZBC not only provides the same excellent compression performance as 3D EZBC, but also reduces the memory requirement compared with 3D EZBC. To achieve good coding performance, diverse wavelet filters and unitary scaling factors are compared and evaluated, and the best choices are given. In comparison with several state-of-the-art wavelet coding algorithms, the proposed algorithm provides better compression performance and unsupervised classification accuracy.
In this paper, a new image fusion method combining a single-layer wavelet transform with compressive sensing is proposed, in which only the high-pass wavelet coefficients of the image are measured while the low-pass wavelet coefficients are preserved. The low-pass wavelet coefficients and the measurements of the high-pass wavelet coefficients are then fused with different schemes. For reconstruction, the high-pass wavelet coefficients are recovered from the fused measurements by total variation (TV) minimization. Finally, the fused image is reconstructed by the inverse wavelet transform. Experiments show that the proposed method provides promising fusion performance with low computational complexity.
Conventional quantization index modulation (QIM) watermarking uses a fixed quantization step size for the host signal. This scheme is not robust against geometric distortions and may lead to poor fidelity in some areas of the content. We therefore propose a quantization-based image watermarking scheme in the dual-tree complex wavelet domain, taking advantage of the properties of dual-tree complex wavelets (perfect reconstruction, approximate shift invariance, and directional selectivity). For watermark detection, the probability of false alarm and the probability of false negative are derived and verified by simulation. Experimental results demonstrate that the proposed method is robust against JPEG compression, additive white Gaussian noise (AWGN), and some kinds of geometric attacks such as scaling and rotation.
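Basic (non-adaptive) QIM with step size Δ embeds a bit by quantizing the host coefficient onto one of two interleaved lattices; the step size and coefficients below are illustrative, and the paper's contribution is precisely to move beyond this fixed-step form:

```python
def qim_embed(x, bit, delta=8.0):
    """Embed one bit into coefficient x with dithered uniform quantisers:
    bit 0 uses the lattice {k*delta}, bit 1 the shifted lattice {k*delta + delta/2}."""
    dither = 0.0 if bit == 0 else delta / 2.0
    return round((x - dither) / delta) * delta + dither

def qim_detect(y, delta=8.0):
    """Recover the bit as whichever quantiser lies closer to y."""
    e0 = abs(y - qim_embed(y, 0, delta))
    e1 = abs(y - qim_embed(y, 1, delta))
    return 0 if e0 <= e1 else 1

coeffs = [13.2, -4.7, 25.1, 0.3]
bits = [1, 0, 1, 1]
marked = [qim_embed(x, b) for x, b in zip(coeffs, bits)]
# detection survives additive noise smaller than delta/4 (= 2 here)
noisy = [y + 1.5 for y in marked]
assert [qim_detect(y) for y in noisy] == bits
```

The Δ/4 robustness margin is what a fixed step trades against fidelity, motivating the adaptive, transform-domain variant above.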
The traditional vector quantization (VQ) method achieves a good compression rate, and the quality of the recovered image is acceptable, but the decompressed image quality cannot be improved efficiently; balancing the compression rate against the recovered image quality is therefore an important issue. In this paper, an image is transformed by the discrete wavelet transform (DWT), and the DWT-transformed image is then further compressed by the VQ method. In addition, we compute the difference matrix between the DWT-transformed image and the decompressed DWT-transformed image, which serves as an adjustable basis for the decompressed image quality. By controlling the deviation of the difference matrix, the VQ method can achieve nearly lossless compression. Experimental results show that when our method uses the same number of compressed bits as the VQ method, the quality of our recovered image is better; moreover, the proposed method has greater compression capability than the VQ scheme.
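VQ codebooks are commonly trained with k-means (the generalized Lloyd algorithm); a minimal sketch on toy 2-D vectors, where the initialization and data are illustrative and not the paper's setup:

```python
def train_codebook(vectors, k=2, iters=20):
    """Train a small VQ codebook with plain k-means: alternate assigning
    each vector to its nearest codeword and recentring each codeword."""
    codebook = list(vectors[:k])        # naive initialisation (assumption)
    idx = [0] * len(vectors)
    for _ in range(iters):
        # assignment step: index of the nearest codeword for each vector
        idx = [min(range(k),
                   key=lambda j: sum((a - b) ** 2
                                     for a, b in zip(v, codebook[j])))
               for v in vectors]
        # update step: each codeword becomes the mean of its cell
        for j in range(k):
            cell = [v for v, i in zip(vectors, idx) if i == j]
            if cell:
                codebook[j] = tuple(sum(c) / len(cell) for c in zip(*cell))
    return codebook, idx

vecs = [(0, 0), (1, 1), (0, 1), (10, 10), (11, 10), (10, 11)]
cb, idx = train_codebook(vecs, k=2)
# the two natural clusters end up in separate cells
assert idx[:3] == [idx[0]] * 3 and idx[3:] == [idx[3]] * 3 and idx[0] != idx[3]
```

The decoder only needs the codebook and the indices, which is where VQ's compression comes from; the paper's difference matrix then patches the residual error.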
Data compression is one of the core fields of study in image and video processing applications. Raw data consumes large transmission bandwidth and huge storage space; it is therefore desirable to represent the information in the data with considerably fewer bits by means of data compression techniques, while reconstructing the data very similarly to its initial form. In this paper, a hybrid compression scheme based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) is used to enhance the quality of the reconstructed image. These transforms are followed by entropy coding, such as Huffman coding, to give additional compression. Huffman coding is an optimal prefix code whose implementation is simpler, faster, and easier than other codes; it needs less execution time and yields the shortest average code length. The analysis is based on Compression Ratio, Mean Square Error (MSE), and Peak Signal to Noise Ratio (PSNR). We applied the hybrid algorithm on DWT–DCT blocks of sizes 2×2, 4×4, 8×8, 16×16, and 32×32. Finally, we show that with the hybrid (DWT–DCT) compression technique, the PSNR of the image reconstructed by the proposed hybrid algorithm (DWT–DCT, 8×8 block) is considerably higher than with DCT alone.
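The Huffman stage can be sketched as follows; the symbol frequencies are the classic textbook example, not data from the paper:

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman prefix code from {symbol: frequency}.
    Repeatedly merge the two lightest subtrees; returns {symbol: bitstring}."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tick = len(heap)                       # tie-breaker so dicts never compare
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

freqs = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}
code = huffman_code(freqs)
# prefix property: no codeword is a prefix of another
words = list(code.values())
assert all(not w2.startswith(w1) for w1 in words for w2 in words if w1 != w2)
# these frequencies give the optimal average length of 2.24 bits/symbol
avg = sum(freqs[s] * len(code[s]) for s in freqs) / sum(freqs.values())
assert abs(avg - 2.24) < 0.01
```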
To preserve the original signal as much as possible and to filter out random noise as thoroughly as possible in image processing, a threshold optimization-based adaptive template filtering algorithm is proposed. Unlike conventional filters, whose template shapes and coefficients are fixed, multiple templates are defined, and the right template for each pixel is matched adaptively based on local image characteristics. The superiority of this method was verified by earlier matching experiments on actual images in comparison with conventional filtering methods. The adaptive search ability of the immune genetic algorithm with elitist selection and elitist crossover (IGAE) is used to optimize the threshold t of the transformation function, which is then combined with the wavelet transform to estimate the noise variance. Multiple experiments were performed to test the validity of IGAE. The results show that the filtering result with t obtained by IGAE is superior to that with t obtained by other methods, and that IGAE converges faster and is more computationally efficient than both the canonical genetic algorithm with elitism and the immune algorithm with information entropy and elitism.
In recent years, many medical image fusion methods have been developed to derive useful information from multimodality medical image data, but no appropriate fusion algorithm exists for anatomical and functional medical images. In this paper, the traditional wavelet fusion method is improved, and a new fusion algorithm for anatomical and functional medical images is proposed, in which high-frequency and low-frequency coefficients are handled separately. When choosing high-frequency coefficients, the global gradient of each sub-image is calculated to achieve adaptive fusion, so that the fused image preserves the functional information; the low-frequency coefficients are chosen based on an analysis of the neighborhood region energy, so that the fused image preserves the anatomical image's edge and texture features. Experimental results and quality evaluation parameters show that the improved fusion algorithm can enhance edge and texture features and retain functional and anatomical information effectively.
With the advances in display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of stimulating 3-D perception is to use stereo pairs: a pair of images of the same scene acquired from different perspectives. Since there is an inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. Existing techniques generally use block-based disparity compensation. To obtain a higher compression ratio, this paper employs a wavelet-based mixed-resolution coding technique together with SPT-based disparity compensation to compress the stereo image data. Mixed-resolution coding is a perceptually justified technique achieved by presenting one eye with a low-resolution image and the other with a high-resolution image; psychophysical experiments show that stereo pairs with one high-resolution and one low-resolution image provide almost the same stereo depth as a pair of two high-resolution images. By combining mixed-resolution coding with SPT-based disparity compensation, the reference (left) high-resolution image is compressed by a hierarchical wavelet transform followed by vector quantization and a Huffman encoder. After two levels of wavelet decomposition, a subspace projection technique with fixed-block-size disparity compensation estimation is used for the low-resolution right and left images. At the decoder, the low-resolution right sub-image is estimated from the low-resolution left sub-image using the disparity. A full-size reconstruction is obtained by upsampling by a factor of 4 and reconstructing with the synthesis low-pass filter. Finally, experimental results are presented, showing that our scheme achieves a PSNR gain of about 0.92 dB over current block-based disparity compensation coding techniques.
The watermarking technique has been proposed as a method of protecting the copyright of multimedia data by hiding secret information in the image. Most previous work, however, focuses on embedding one-dimensional watermarks or two-dimensional binary digital watermarks. In this paper, a wavelet-based method for embedding a gray-level digital watermark into an image is proposed. Using a still-image decomposition technique, the gray-level digital watermark is decomposed into a series of bitplanes, and by the discrete wavelet transform (DWT), the host image is decomposed into a hierarchical multiresolution representation. The different bitplanes of the gray-level watermark are embedded into the corresponding resolutions of the decomposed host image. The experimental results show that the proposed technique can successfully survive image processing operations and lossy compression techniques such as Joint Photographic Experts Group (JPEG) compression.
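Decomposing a gray-level watermark into bitplanes is straightforward; a minimal sketch for 8-bit values (a generic illustration of the decomposition step, not the paper's full embedding scheme):

```python
def to_bitplanes(pixels, nbits=8):
    """Split 8-bit gray values into `nbits` binary bitplanes, MSB plane first."""
    return [[(p >> b) & 1 for p in pixels] for b in reversed(range(nbits))]

def from_bitplanes(planes):
    """Reassemble gray values from bitplanes (inverse of to_bitplanes)."""
    nbits = len(planes)
    return [sum(bit << (nbits - 1 - b) for b, bit in enumerate(col))
            for col in zip(*planes)]

gray = [0, 37, 128, 200, 255]
planes = to_bitplanes(gray)
assert from_bitplanes(planes) == gray
assert planes[0] == [0, 0, 1, 1, 1]   # MSB plane flags values >= 128
```

The significance ordering is what lets the most significant planes be embedded into the coarsest (most robust) resolution levels of the host.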
A floating-point wavelet-based and an integer wavelet-based image interpolation method in lifting structures, combined with polynomial curve fitting, are proposed for image resolution enhancement in this paper. The proposed prediction methods estimate the high-frequency wavelet coefficients of the original image from the available low-frequency wavelet coefficients, so that the original image can be reconstructed. To further improve the reconstruction performance, polynomial curve fitting is used to build relationships between the actual and the estimated high-frequency wavelet coefficients. Results of the proposed prediction algorithm for different wavelet transforms are compared, showing that it outperforms other methods.
文摘To utilize residual redundancy to reduce the error induced by fading channels and decrease the complexity of the field model to describe the probability structure for residual redundancy, a simplified statistical model for residual redundancy and a low complexity joint source-channel decoding(JSCD) algorithm are proposed. The complicated residual redundancy in wavelet compressed images is decomposed into several independent 1-D probability check equations composed of Markov chains and it is regarded as a natural channel code with a structure similar to the low density parity check (LDPC) code. A parallel sum-product (SP) and iterative JSCD algorithm is proposed. Simulation results show that the proposed JSCD algorithm can make full use of residual redundancy in different directions to correct errors and improve the peak signal noise ratio (PSNR) of the reconstructed image and reduce the complexity and delay of JSCD. The performance of JSCD is more robust than the traditional separated encoding system with arithmetic coding in the same data rate.
文摘A simple and adaptive lossless compression algorithm is proposed for remote sensing image compression, which includes integer wavelet transform and the Rice entropy coder. By analyzing the probability distribution of integer wavelet transform coefficients and the characteristics of Rice entropy coder, the divide and rule method is used for high-frequency sub-bands and low-frequency one. High-frequency sub-bands are coded by the Rice entropy coder, and low-frequency coefficients are predicted before coding. The role of predictor is to map the low-frequency coefficients into symbols suitable for the entropy coding. Experimental results show that the average Comprcssion Ratio (CR) of our approach is about two, which is close to that of JPEG 2000. The algorithm is simple and easy to be implemented in hardware. Moreover, it has the merits of adaptability, and independent data packet. So the algorithm can adapt to space lossless compression applications.
文摘The paper describes an efficient lossy and lossless three dimensional (3D) image compression of hyperspectral images. The method adopts the 3D spatial-spectral hybrid transform and the proposed transform-based coder. The hybrid transforms are that Karhunen-Loève Transform (KLT) which decorrelates spectral data of a hyperspectral image, and the integer Discrete Wavelet Transform (DWT) which is applied to the spatial data and produces decorrelated wavelet coefficients. Our simpler transform-based coder is inspired by Shapiro’s EZW algorithm, but encodes residual values and only implements dominant pass incorporating six symbols. The proposed method will be examined on AVIRIS images and evaluated using compression ratio for both lossless and lossy compression, and signal to noise ratio (SNR) for lossy compression. Experimental results show that the proposed image compression not only is more efficient but also has better compression ratio.
基金funded by the University of Jeddah,Saudi Arabia,under Grant No.UJ-20-043-DR。
文摘This paper presents a novel method utilizing wavelets with particle swarm optimization(PSO)for medical image compression.Our method utilizes PSO to overcome the wavelets discontinuity which occurs when compressing images using thresholding.It transfers images into subband details and approximations using a modified Haar wavelet(MHW),and then applies a threshold.PSO is applied for selecting a particle assigned to the threshold values for the subbands.Nine positions assigned to particles values are used to represent population.Every particle updates its position depending on the global best position(gbest)(for all details subband)and local best position(pbest)(for a subband).The fitness value is developed to terminate PSO when the difference between two local best(pbest)successors is smaller than a prescribe value.The experiments are applied on five different medical image types,i.e.,MRI,CT,and X-ray.Results show that the proposed algorithm can be more preferably to compress medical images than other existing wavelets techniques from peak signal to noise ratio(PSNR)and compression ratio(CR)points of views.
基金Supported by the National Natural Science Foundation of China!( 6 9875 0 0 9)
文摘In this paper, the second generation wavelet transform is applied to image lossless coding, according to its characteristic of reversible integer wavelet transform. The second generation wavelet transform can provide higher compression ratio than Huffman coding while it reconstructs image without loss compared with the first generation wavelet transform. The experimental results show that the se cond generation wavelet transform can obtain excellent performance in medical image compression coding.
文摘The amount of image data generated in multimedia applications is ever increasing. The image compression plays vital role in multimedia applications. The ultimate aim of image compression is to reduce storage space without degrading image quality. Compression is required whenever the data handled is huge they may be required to sent or transmitted and also stored. The New Edge Directed Interpolation (NEDI)-based lifting Discrete Wavelet Transfrom (DWT) scheme with modified Set Partitioning In Hierarchical Trees (MSPIHT) algorithm is proposed in this paper. The NEDI algorithm gives good visual quality image particularly at edges. The main objective of this paper is to be preserving the edges while performing image compression which is a challenging task. The NEDI with lifting DWT has achieved 99.18% energy level in the low frequency ranges which has 1.07% higher than 5/3 Wavelet decomposition and 0.94% higher than traditional DWT. To implement this NEDI with Lifting DWT along with MSPIHT algorithm which gives higher Peak Signal to Noise Ratio (PSNR) value and minimum Mean Square Error (MSE) and hence better image quality. The experimental results proved that the proposed method gives better PSNR value (39.40 dB for rate 0.9 bpp without arithmetic coding) and minimum MSE value is 7.4.
文摘Currently,in multimedia and image processing technologies, implementing special kinds of image manipulation operations by dealing directly with the compressed image is a work worthy to be concerned with. Theoretical analysis and experiment haVe indicated that some kinds of image processing works can be done very well by dealing with compressed image. In Ans paper, we give some efficient image manipulation operation algorithms operating on the compressed image data. These algorithms have advantages in computing complexity, storage space retirement and image quality.
文摘An effective processing method for biomedical images and the Fuzzy C-mean (FCM) algorithm based on the wavelet transform are investigated.By using hierarchical wavelet decomposition, an original image could be decomposed into one lower image and several detail images. The segmentation started at the lowest resolution with the FCM clustering algorithm and the texture feature extracted from various sub-bands. With the improvement of the FCM algorithm, FCM alternation frequency was decreased and the accuracy of segmentation was advanced.
文摘Through research for image compression based on wavelet analysis in recent years, we put forward an adaptive wavelet decomposition strategy. Whether sub-images are to be decomposed or not are decided by their energy defined by certain criterion. Then we derive the adaptive wavelet decomposition tree (AWDT) and the way of adjustable compression ratio. According to the feature of AWDT, this paper also deals with the strategies which are used to handle different sub-images in the procedure of quantification and coding of the wavelet coefficients. Through experiments, not only the algorithm in the paper can adapt to various images, but also the quality of recovered image is improved though compression ratio is higher and adjustable. When their compression ratios are near, the quality of subjective vision and PSNR of the algorithm are better than those of JPEG algorithm.
Abstract: Aiming at the shortcomings of the SPIHT algorithm, an improved image compression algorithm is proposed. To overcome the deficiencies in decoded image quality and coding time, the LS 9/7 lifting wavelet transform is adopted. According to the characteristics of the human visual system (HVS), the scanning mode and the method of determining the threshold are changed to improve the quality of the reconstructed image. To address the repeated scanning of the SPIHT algorithm, a maximum-list approach is used, which greatly reduces computation and saves running time. Experimental results prove that the improved algorithm outperforms the original in both decoding time and reconstructed image quality, especially at low bit rates.
Abstract: In this paper, we propose a three-dimensional Set Partitioned Embedded ZeroBlock Coding (3D SPEZBC) lossy-to-lossless compression algorithm for hyperspectral images, an improvement on the three-dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm. The algorithm adopts the 3D integer wavelet packet transform proposed by Xiong et al. for decorrelation, set-based partitioning zeroblock coding for bitplane coding, and context-based adaptive arithmetic coding for further entropy coding. Theoretical analysis and experimental results demonstrate that 3D SPEZBC not only provides the same excellent compression performance as 3D EZBC, but also reduces the memory requirement compared with 3D EZBC. To achieve good coding performance, diverse wavelet filters and unitary scaling factors are compared and evaluated, and the best choices are given. In comparison with several state-of-the-art wavelet coding algorithms, the proposed algorithm provides better compression performance and unsupervised classification accuracy.
Abstract: In this paper, a new image fusion method combining a single-level wavelet transform with compressive sensing is proposed, in which only the high-pass wavelet coefficients of the image are measured while the low-pass wavelet coefficients are preserved. The low-pass wavelet coefficients and the measurements of the high-pass wavelet coefficients are then fused with different schemes. For reconstruction, the high-pass wavelet coefficients are recovered from the fused measurements by total variation (TV) minimization. Finally, the fused image is reconstructed by the inverse wavelet transform. Experiments show the proposed method provides promising fusion performance with low computational complexity.
Funding: Supported by a grant from the National High Technology Research and Development Program of China (863 Program) (No. 2008AA04A107) and a grant from the Major Programs of Guangdong-Hong Kong in the Key Domain (No. 2009498B21).
Abstract: Conventional quantization index modulation (QIM) watermarking uses a fixed quantization step size for the host signal. This scheme is not robust against geometric distortions and may lead to poor fidelity in some areas of the content. Thus, we propose a quantization-based image watermarking scheme in the dual-tree complex wavelet domain, taking advantage of the properties of dual-tree complex wavelets (perfect reconstruction, approximate shift invariance, and directional selectivity). For watermark detection, the probability of false alarm and the probability of false negative are derived and verified by simulation. Experimental results demonstrate that the proposed method is robust against JPEG compression, additive white Gaussian noise (AWGN), and some kinds of geometric attacks such as scaling, rotation, etc.
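The baseline QIM mechanism that the abstract builds on can be sketched with two dithered uniform quantizers, one per bit value (a textbook illustration, not the paper's dual-tree-domain scheme; the step size is a free parameter):

```python
import numpy as np

def qim_embed(x, bit, step):
    """Embed one bit per coefficient by snapping to the bit's dithered lattice."""
    d = step / 4.0 if bit else -step / 4.0
    return np.round((x - d) / step) * step + d

def qim_detect(y, step):
    """Decide each bit by distance to the nearest point of either lattice."""
    d0 = np.abs(y - (np.round((y + step / 4) / step) * step - step / 4))
    d1 = np.abs(y - (np.round((y - step / 4) / step) * step + step / 4))
    return (d1 < d0).astype(int)
```

Detection stays correct as long as the channel perturbs each coefficient by less than `step/4`, which is why a fixed step trades robustness against fidelity; the paper's contribution is choosing the domain (and implicitly the quantization) more carefully.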
Abstract: A good compression rate can be achieved by the traditional vector quantization (VQ) method, and the quality of the recovered image is acceptable, but the decompressed image quality cannot be improved efficiently; how to balance the compression rate against the recovered image quality is therefore an important issue. In this paper, an image is transformed by the discrete wavelet transform (DWT), and the DWT-transformed image is then compressed further by the VQ method. In addition, we compute the difference matrix between the DWT-transformed image and the decompressed DWT-transformed image, which serves as an adjustable basis for the decompressed image quality. By controlling the deviation of the difference matrix, nearly lossless compression can be obtained from the VQ method. Experimental results show that when the number of bits produced by our method equals the number produced by the VQ method, the quality of our recovered image is better. Moreover, the proposed method has more compression capability than the VQ scheme.
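The difference-matrix idea can be illustrated in miniature. Here coarse uniform quantization stands in for the VQ stage (the paper's codebook details are not given), and only residual entries above a tolerance are kept, so the tolerance directly bounds the reconstruction error:

```python
import numpy as np

def lossy_quantize(block, step):
    """Stand-in for the lossy VQ stage: coarse uniform quantization."""
    return np.round(block / step) * step

def compress_with_residual(img, step, tol):
    """Keep only residual entries whose magnitude exceeds tol,
    mimicking a thresholded difference matrix."""
    approx = lossy_quantize(img, step)
    residual = img - approx
    sparse = np.where(np.abs(residual) > tol, residual, 0.0)
    return approx, sparse

def reconstruct(approx, sparse):
    """Add the retained residual back onto the lossy approximation."""
    return approx + sparse
```

With `tol=0` every residual entry is kept and reconstruction is exact (the "nearly lossless" end of the trade-off); a larger `tol` stores fewer residual values at the cost of up to `tol` of per-entry error.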
Abstract: Data compression is one of the core fields of study in image and video processing. Raw data consumes large bandwidth for transmission and requires huge storage space; as a result, it is desirable to represent the information with considerably fewer bits by means of data compression techniques, while reconstituting the data very similarly to its initial form. In this paper, a hybrid compression scheme based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) is used to enhance the quality of the reconstructed image. These transforms are followed by entropy coding, such as Huffman coding, to give additional compression. Huffman coding is an optimal prefix code whose implementation is simpler, faster, and easier than other codes; it needs less execution time and achieves the shortest average code length. The measurements used for analysis are the Compression Ratio, Mean Square Error (MSE), and Peak Signal to Noise Ratio (PSNR). We applied the hybrid algorithm on DWT–DCT blocks of size 2×2, 4×4, 8×8, 16×16, and 32×32. Finally, we show that with the hybrid (DWT–DCT) compression technique, the PSNR of the image reconstructed by the proposed hybrid algorithm (DWT–DCT, 8×8 blocks) is considerably higher than with DCT alone.
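The Huffman entropy-coding stage mentioned above follows the classic greedy construction: repeatedly merge the two least frequent subtrees. A compact sketch using a heap (illustrative only, not the paper's implementation):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table {symbol: bitstring} from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:                          # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, unique tiebreaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)         # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, n, merged))
        n += 1
    return heap[0][2]
```

Because the construction always merges the rarest subtrees, frequent symbols end up with the shortest codes, which is what minimizes the average code length over prefix codes.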
Funding: Project (20040533035) supported by the National Research Foundation for the Doctoral Program of Higher Education of China; Project (60874070) supported by the National Natural Science Foundation of China.
Abstract: To preserve the original signal as much as possible while filtering out as much random noise as possible in image processing, a threshold optimization-based adaptive template filtering algorithm is proposed. Unlike conventional filters, whose template shapes and coefficients are fixed, multiple templates are defined and the right template for each pixel is matched adaptively based on local image characteristics. The superiority of this method is verified by matching experiments on actual images in comparison with conventional filtering methods. The adaptive search ability of the immune genetic algorithm with elitist selection and elitist crossover (IGAE) is used to optimize the threshold t of the transformation function, which is then combined with the wavelet transform to estimate the noise variance. Multiple experiments were performed to test the validity of IGAE. The results show that the filtered result with t obtained by IGAE is superior to that with t obtained by other methods, and that IGAE converges faster and is computationally more efficient than both the canonical genetic algorithm with elitism and the immune algorithm with information entropy and elitism.
Funding: The National High Technology Research and Development Program of China ('863' Program), grant number 2007AA02Z4A9; National Natural Science Foundation of China, grant number 30671997.
Abstract: In recent years, many medical image fusion methods have been developed to derive useful information from multimodality medical image data, but an appropriate fusion algorithm for anatomical and functional medical images has been lacking. In this paper, the traditional wavelet fusion method is improved and a new fusion algorithm for anatomical and functional medical images is proposed, in which high-frequency and low-frequency coefficients are treated separately. When choosing high-frequency coefficients, the global gradient of each sub-image is calculated to realize adaptive fusion, so that the fused image preserves the functional information; the low-frequency coefficients are chosen based on an analysis of neighborhood region energy, so that the fused image preserves the anatomical image's edge and texture features. Experimental results and quality evaluation parameters show that the improved fusion algorithm can enhance edge and texture features and retain the functional and anatomical information effectively.
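The neighborhood-energy selection rule for the low-frequency coefficients can be sketched as follows. This is one plausible reading of the abstract (choose, per position, the coefficient from whichever source band has higher local energy); the paper's exact window size and weighting are not given, so a 3×3 mean is assumed:

```python
import numpy as np

def region_energy(coefs, k=3):
    """Mean squared coefficient value over a k x k neighborhood (same-size output)."""
    pad = k // 2
    p = np.pad(coefs.astype(np.float64) ** 2, pad, mode="edge")
    out = np.zeros(coefs.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + coefs.shape[0], dx:dx + coefs.shape[1]]
    return out / (k * k)

def fuse_lowpass(a, b):
    """Per position, keep the coefficient from the band with higher local energy."""
    return np.where(region_energy(a) >= region_energy(b), a, b)
```

The high-frequency rule described in the abstract (global gradient per sub-image) would instead pick one whole sub-band per comparison rather than selecting coefficient by coefficient.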
Funding: This project was supported by the National Natural Science Foundation (No. 69972027).
Abstract: With advances in display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of stimulating 3-D perception is to use stereo pairs: a pair of images of the same scene acquired from different perspectives. Since there is inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. Previously proposed techniques generally use block-based disparity compensation. To obtain a higher compression ratio, this paper employs wavelet-based mixed-resolution coding together with SPT-based disparity compensation to compress stereo image data. Mixed-resolution coding is a perceptually justified technique achieved by presenting one eye with a low-resolution image and the other with a high-resolution image. Psychophysical experiments show that a stereo pair with one high-resolution image and one low-resolution image provides almost the same stereo depth as a pair of two high-resolution images. By combining mixed-resolution coding and SPT-based disparity compensation, the high-resolution reference (left) image is compressed by a hierarchical wavelet transform followed by vector quantization and a Huffman encoder. After two levels of wavelet decomposition, the subspace projection technique with fixed-block-size disparity compensation estimation is applied to the low-resolution right image and the low-resolution left image. At the decoder, the low-resolution right sub-image is estimated using the disparity from the low-resolution left sub-image. A full-size reconstruction is obtained by upsampling by a factor of 4 and reconstructing with the synthesis low-pass filter. Finally, experimental results are presented, showing that our scheme achieves a PSNR gain of about 0.92 dB compared with current block-based disparity compensation coding techniques.
Abstract: The watermarking technique has been proposed as a method of hiding secret information in an image to protect the copyright of multimedia data. Most previous work focuses on algorithms for embedding one-dimensional watermarks or two-dimensional binary digital watermarks. In this paper, a wavelet-based method for embedding a gray-level digital watermark into an image is proposed. Using a still-image decomposition technique, the gray-level watermark is decomposed into a series of bitplanes. Using the discrete wavelet transform (DWT), the host image is decomposed into multiresolution representations with a hierarchical structure, and the different bitplanes of the gray-level watermark are embedded into the corresponding resolutions of the decomposed host image. Experimental results show that the proposed technique can successfully survive image processing operations and lossy compression techniques such as Joint Photographic Experts Group (JPEG) compression.
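The bitplane decomposition of the gray-level watermark is a standard operation and can be sketched directly (a generic 8-bit version; the embedding into DWT sub-bands is the paper's contribution and is not reproduced here):

```python
import numpy as np

def to_bitplanes(img):
    """Split an 8-bit image into 8 binary bitplanes, MSB first."""
    img = img.astype(np.uint8)
    return [(img >> b) & 1 for b in range(7, -1, -1)]

def from_bitplanes(planes):
    """Reassemble the 8-bit image from its bitplanes (MSB first)."""
    out = np.zeros_like(planes[0], dtype=np.uint8)
    for b, p in zip(range(7, -1, -1), planes):
        out |= p.astype(np.uint8) << b
    return out
```

In the scheme described above, the most significant bitplanes (which carry most of the watermark's visual content) would be embedded into the coarser, more robust resolutions of the host image's wavelet decomposition.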
Abstract: Floating-point and integer wavelet-based image interpolation in lifting structures, combined with polynomial curve fitting, is proposed in this paper for image resolution enhancement. The proposed prediction methods estimate the high-frequency wavelet coefficients of the original image from the available low-frequency wavelet coefficients, so that the original image can be reconstructed. To further improve reconstruction performance, polynomial curve fitting is used to build relationships between actual high-frequency wavelet coefficients and estimated high-frequency wavelet coefficients. Results of the proposed prediction algorithm for different wavelet transforms are compared, showing that it outperforms other methods.
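The curve-fitting correction step can be sketched with NumPy's polynomial routines: fit a low-degree polynomial mapping estimated coefficients to the actual ones on training data, then apply it to new estimates (the degree is an assumption; the paper does not state one):

```python
import numpy as np

def fit_correction(estimated, actual, degree=2):
    """Fit a polynomial that maps estimated coefficients toward actual ones."""
    return np.polyfit(estimated, actual, degree)

def apply_correction(estimated, poly):
    """Correct new coefficient estimates with the fitted polynomial."""
    return np.polyval(poly, estimated)
```

At synthesis time, the corrected high-frequency coefficients would replace the raw estimates before the inverse lifting transform.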