To utilize residual redundancy to reduce the error induced by fading channels and decrease the complexity of the field model to describe the probability structure for residual redundancy, a simplified statistical mode...To utilize residual redundancy to reduce the error induced by fading channels and decrease the complexity of the field model to describe the probability structure for residual redundancy, a simplified statistical model for residual redundancy and a low complexity joint source-channel decoding(JSCD) algorithm are proposed. The complicated residual redundancy in wavelet compressed images is decomposed into several independent 1-D probability check equations composed of Markov chains and it is regarded as a natural channel code with a structure similar to the low density parity check (LDPC) code. A parallel sum-product (SP) and iterative JSCD algorithm is proposed. Simulation results show that the proposed JSCD algorithm can make full use of residual redundancy in different directions to correct errors and improve the peak signal noise ratio (PSNR) of the reconstructed image and reduce the complexity and delay of JSCD. The performance of JSCD is more robust than the traditional separated encoding system with arithmetic coding in the same data rate.展开更多
In this paper image quality of two types of compression methods, wavelet based and seam carving based are investigated. A metric is introduced to compare the image quality under wavelet and seam carving schemes. Meyer...In this paper image quality of two types of compression methods, wavelet based and seam carving based are investigated. A metric is introduced to compare the image quality under wavelet and seam carving schemes. Meyer, Coiflet 2 and Jpeg2000 wavelet based methods are used as the wavelet based methods. Hausdorf distance based metric (HDM) is proposed and used for the comparison of the two compression methods instead of model based matching techniques and correspondence-based matching techniques, because there is no pairing of points in the two sets being compared. In addition entropy based metric (EM) or peak signal to noise ration based metric (PSNRM) cannot be used to compare the two schemes as the seam carving tends to deform the objects. The wavelet compressed images with different compression percentages were analyzed with HDM and EM and it was observed that HDM follows the EM/PSNRM for wavelet based compression methods. Then HDM is used to compare the wavelet and seam carved images for different compression percentages. The initial results showed that HDM is better metric for comparing wavelet based and seam carved images.展开更多
In this paper, the second generation wavelet transform is applied to image lossless coding, according to its characteristic of reversible integer wavelet transform. The second generation wavelet transform can provide ...In this paper, the second generation wavelet transform is applied to image lossless coding, according to its characteristic of reversible integer wavelet transform. The second generation wavelet transform can provide higher compression ratio than Huffman coding while it reconstructs image without loss compared with the first generation wavelet transform. The experimental results show that the se cond generation wavelet transform can obtain excellent performance in medical image compression coding.展开更多
Due to the particularity of the seismic data, they must be treated by lossless compression algorithm in some cases. In the paper, based on the integer wavelet transform, the lossless compression algorithm is studied....Due to the particularity of the seismic data, they must be treated by lossless compression algorithm in some cases. In the paper, based on the integer wavelet transform, the lossless compression algorithm is studied. Comparing with the traditional algorithm, it can better improve the compression rate. CDF (2, n) biorthogonal wavelet family can lead to better compression ratio than other CDF family, SWE and CRF, which is owe to its capability in can- celing data redundancies and focusing data characteristics. CDF (2, n) family is suitable as the wavelet function of the lossless compression seismic data.展开更多
In this paper the embedded zerotree wavelet (EZW) method and Huffman coding are proposed to compress infrared (IR) spectra. We found that this technique is much better than others in terms of efficiently coding wavele...In this paper the embedded zerotree wavelet (EZW) method and Huffman coding are proposed to compress infrared (IR) spectra. We found that this technique is much better than others in terms of efficiently coding wavelet coefficients because the zerotree quantization is an effective way of exploiting the self-similarities of wavelet coefficients at various resolutions.展开更多
A new adaptive Packet algorithm based on Discrete Cosine harmonic wavelet transform (DCHWT), (DCAHWP) has been proposed. This is realized by the Discrete Cosine Harmonic Wavelet transform (DCHTWT) which exploits the g...A new adaptive Packet algorithm based on Discrete Cosine harmonic wavelet transform (DCHWT), (DCAHWP) has been proposed. This is realized by the Discrete Cosine Harmonic Wavelet transform (DCHTWT) which exploits the good properties of DCT viz., energy compaction (low leakage), frequency resolution and computational simplicity due its real nature, compared to those of DFT and its harmonic wavelet version. Hence the proposed wavelet packet is advantageous both in terms of performance and computational efficiency compared to those of existing DFT harmonic wavelet packet. Further, the new DCAHWP also enjoys the desirable properties of a Harmonic wavelet transform over the time domain WT, viz., built in decimation without any explicit antialiasing filtering and easy interpolation by mere concatenation of different scales in frequency (DCT) domain with out any image rejection filter and with out laborious delay compensation required. Further, the compression by the proposed DCAHWP is much better compared to that by adaptive WP based on Daubechies-2 wavelet (DBAWP). For a compression factor (CF) of 1/8, the ratio of the percentage error energy by proposed DCAHWP to that by DBAWP is about 1/8 and 1/5 for considered 1-D signal and speech signal, respectively. Its compression performance is better than that of DCHWT, both for 1-D and 2-D signals. The improvement is more significant for signals with abrupt changes or images with rapid variations (textures). For compression factor of 1/8, the ratio of the percentage error energy by DCAHWP to that by DCHWT, is about 1/3 and 1/2, for the considered 1-D signal and speech signal, respectively. This factor for an image considered is 2/3 and in particular for a textural image it is 1/5.展开更多
When an image, which is decomposed by bi-orthogonal wavelet bases, is reconstructed, some information will be lost at the four edges of the image. At the same time, artificial discontinuities will be introduced. We us...When an image, which is decomposed by bi-orthogonal wavelet bases, is reconstructed, some information will be lost at the four edges of the image. At the same time, artificial discontinuities will be introduced. We use a method called symmetric extension to solve the problem. We only consider the case of the two-band filter banks, and the results can be applied to M-band filter banks. There are only two types of symmetric extension in analysis phrase, namely the whole-sample symmetry (WS), the half-sample symmetry (HS), while there are four types of symmetric extension in synthesis phrase, namely the WS, HS, the whole-sample anti-symmetry (WA), and the half-sample anti-symmetry (HA) respectively. We can select the exact type according to the image length and the filter length, and we will show how to do these. The image can be perfectly reconstructed without any edge effects in this way. Finally, simulation results are reported. Key words edge effect - image compression - wavelet - biorthogonal bases - symmetric extension CLC number TP 37 Foundation item: Supported by the National 863 Project (20021111901010)Biography: Yu Sheng-sheng (1944-), male, Professor, research direction: multimedia information processing, SAN.展开更多
The paper presents a class of nonlinear adaptive wavelet transforms for lossless image compression. In update step of the lifting the different operators are chosen by the local gradient of original image. A nonlinear...The paper presents a class of nonlinear adaptive wavelet transforms for lossless image compression. In update step of the lifting the different operators are chosen by the local gradient of original image. A nonlinear morphological predictor follows the update adaptive lifting to result in fewer large wavelet coefficients near edges for reducing coding. The nonlinear adaptive wavelet transforms can also allow perfect reconstruction without any overhead cost. Experiment results are given to show lower entropy of the adaptive transformed images than those of the non-adaptive case and great applicable potentiality in lossless image compresslon.展开更多
We study an approach to integer wavelet transform for lossless compression of medical image in medical picture archiving and communication system (PACS). By lifting scheme a reversible integer wavelet transform is gen...We study an approach to integer wavelet transform for lossless compression of medical image in medical picture archiving and communication system (PACS). By lifting scheme a reversible integer wavelet transform is generated, which has the similar features with the corresponding biorthogonal wavelet transform. Experimental results of the method based on integer wavelet transform are given to show better performance and great applicable potentiality in medical image compression.展开更多
In this paper a square wavelet thresholding method is proposed and evaluated as compared to the other classical wavelet thresholding methods (like soft and hard). The main advantage of this work is to design and imple...In this paper a square wavelet thresholding method is proposed and evaluated as compared to the other classical wavelet thresholding methods (like soft and hard). The main advantage of this work is to design and implement a new wavelet thresholding method and evaluate it against other classical wavelet thresholding methods and hence search for the optimal wavelet mother function among the wide families with a suitable level of decomposition and followed by a novel thresholding method among the existing methods. This optimized method will be used to shrink the wavelet coefficients and yield an adequate compressed pressure signal prior to transmit it. While a comparison evaluation analysis is established, A new proposed procedure is used to compress a synthetic signal and obtain the optimal results through minimization the signal memory size and its transmission bandwidth. There are different performance indices to establish the comparison and evaluation process for signal compression;but the most well-known measuring scores are: NMSE, ESNR, and PDR. The obtained results showed the dominant of the square wavelet thresholding method against other methods using different measuring scores and hence the conclusion by the way for adopting this proposed novel wavelet thresholding method for 1D signal compression in future researches.展开更多
The aggregation of data in recent years has been expanding at an exponential rate. There are various data generating sources that are responsible for such a tremendous data growth rate. Some of the data origins includ...The aggregation of data in recent years has been expanding at an exponential rate. There are various data generating sources that are responsible for such a tremendous data growth rate. Some of the data origins include data from the various social media, footages from video cameras, wireless and wired sensor network measurements, data from the stock markets and other financial transaction data, supermarket transaction data and so on. The aforementioned data may be high dimensional and big in Volume, Value, Velocity, Variety, and Veracity. Hence one of the crucial challenges is the storage, processing and extraction of relevant information from the data. In the special case of image data, the technique of image compressions may be employed in reducing the dimension and volume of the data to ensure it is convenient for processing and analysis. In this work, we examine a proof-of-concept multiresolution analytics that uses wavelet transforms, that is one popular mathematical and analytical framework employed in signal processing and representations, and we study its applications to the area of compressing image data in wireless sensor networks. The proposed approach consists of the applications of wavelet transforms, threshold detections, quantization data encoding and ultimately apply the inverse transforms. The work specifically focuses on multi-resolution analysis with wavelet transforms by comparing 3 wavelets at the 5 decomposition levels. Simulation results are provided to demonstrate the effectiveness of the methodology.展开更多
In this paper, a new method of combination single layer wavelet transform and compressive sensing is proposed for image fusion. In which only measured the high-pass wavelet coefficients of the image but preserved the ...In this paper, a new method of combination single layer wavelet transform and compressive sensing is proposed for image fusion. In which only measured the high-pass wavelet coefficients of the image but preserved the low-pass wavelet coefficient. Then, fuse the low-pass wavelet coefficients and the measurements of high-pass wavelet coefficient with different schemes. For the reconstruction, by using the minimization of total variation algorithm (TV), high-pass wavelet coefficients could be recovered by the fused measurements. Finally, the fused image could be reconstructed by the inverse wavelet transform. The experiments show the proposed method provides promising fusion performance with a low computational complexity.展开更多
In this paper, a new mesh based algorithm is applied for motion estimation and compensation in the wavelet domain. The first major contribution of this work is the introduction of a new active mesh based method for mo...In this paper, a new mesh based algorithm is applied for motion estimation and compensation in the wavelet domain. The first major contribution of this work is the introduction of a new active mesh based method for motion estimation and compensation. The proposed algorithm is based on the mesh energy minimization with novel sets of energy functions. The proposed energy functions have appropriate features, which improve the accuracy of motion estimation and compensation algorithm. We employ the proposed motion estimation algorithm in two different manners for video compression. In the first approach, the proposed algorithm is employed for motion estimation of consecutive frames. In the second approach, the algorithm is applied for motion estimation and compensation in the wavelet sub-bands. The experimental results reveal that the incorporation of active mesh based motion-compensated temporal filtering into wavelet sub-bands significantly improves the distortion performance rate of the video compression. We also use a new wavelet coder for the coding of the 3D volume of coefficients based on the retained energy criteria. This coder gives the maximum retained energy in all sub-bands. The proposed algorithm was tested with some video sequences and the results showed that the use of the proposed active mesh method for motion compensation and its implementation in sub-bands yields significant improvement in PSNR performance.展开更多
Based on explanation of wavelet fractal compression method, the significance of introducing wavelet decomposition into conventional fractal compression method is deeply investigated from the point of theoretical and p...Based on explanation of wavelet fractal compression method, the significance of introducing wavelet decomposition into conventional fractal compression method is deeply investigated from the point of theoretical and practical view. The result of study can be regarded as valuable guidelines for taking advantages of wavelet transform to develop more effective image compression algorithm.展开更多
A sparsifying transform for use in Compressed Sensing (CS) is a vital piece of image reconstruction for Magnetic Resonance Imaging (MRI). Previously, Translation Invariant Wavelet Transforms (TIWT) have been shown to ...A sparsifying transform for use in Compressed Sensing (CS) is a vital piece of image reconstruction for Magnetic Resonance Imaging (MRI). Previously, Translation Invariant Wavelet Transforms (TIWT) have been shown to perform exceedingly well in CS by reducing repetitive line pattern image artifacts that may be observed when using orthogonal wavelets. To further establish its validity as a good sparsifying transform, the TIWT is comprehensively investigated and compared with Total Variation (TV), using six under-sampling patterns through simulation. Both trajectory and random mask based under-sampling of MRI data are reconstructed to demonstrate a comprehensive coverage of tests. Notably, the TIWT in CS reconstruction performs well for all varieties of under-sampling patterns tested, even for cases where TV does not improve the mean squared error. This improved Image Quality (IQ) gives confidence in applying this transform to more CS applications which will contribute to an even greater speed-up of a CS MRI scan. High vs low resolution time of flight MRI CS re-constructions are also analyzed showing how partial Fourier acquisitions must be carefully addressed in CS to prevent loss of IQ. In the spirit of reproducible research, novel software is introduced here as FastTestCS. It is a helpful tool to quickly develop and perform tests with many CS customizations. Easy integration and testing for the TIWT and TV minimization are exemplified. Simulations of 3D MRI datasets are shown to be efficiently distributed as a scalable solution for large studies. Comparisons in reconstruction computation time are made between the Wavelab toolbox and Gnu Scientific Library in FastTestCS that show a significant time savings factor of 60×. The addition of FastTestCS is proven to be a fast, flexible, portable and reproducible simulation aid for CS research.展开更多
The better compression rate can be achieved by the traditional vector quantization (VQ) method, and the quality of the recovered image can also be accepted. But the decompressed image quality can not be promoted eff...The better compression rate can be achieved by the traditional vector quantization (VQ) method, and the quality of the recovered image can also be accepted. But the decompressed image quality can not be promoted efficiently, so how to balance the image compression rate and image recovering quality is an important issue, in this paper, an image is transformed by discrete wavelet transform (DWT) to generate its DWT transformed image which can be compressed by the VQ method further. Besides, we compute the values between the DWT transformed image and decompressed DWT transformed image as the difference matrix which is the adjustable basis of the decompressed image quality. By controlling the deviation of the difference matrix, there can be nearly Iossless compression for the VQ method. Experimental results show that when the number of compressed bits by our method is equal to the number of those bits compressed by the VQ method, the quality of our recovered image is better. Moreover, the proposed method has more compression capability comparing with the VQ scheme.展开更多
文摘To utilize residual redundancy to reduce the error induced by fading channels and decrease the complexity of the field model to describe the probability structure for residual redundancy, a simplified statistical model for residual redundancy and a low complexity joint source-channel decoding(JSCD) algorithm are proposed. The complicated residual redundancy in wavelet compressed images is decomposed into several independent 1-D probability check equations composed of Markov chains and it is regarded as a natural channel code with a structure similar to the low density parity check (LDPC) code. A parallel sum-product (SP) and iterative JSCD algorithm is proposed. Simulation results show that the proposed JSCD algorithm can make full use of residual redundancy in different directions to correct errors and improve the peak signal noise ratio (PSNR) of the reconstructed image and reduce the complexity and delay of JSCD. The performance of JSCD is more robust than the traditional separated encoding system with arithmetic coding in the same data rate.
文摘In this paper image quality of two types of compression methods, wavelet based and seam carving based are investigated. A metric is introduced to compare the image quality under wavelet and seam carving schemes. Meyer, Coiflet 2 and Jpeg2000 wavelet based methods are used as the wavelet based methods. Hausdorf distance based metric (HDM) is proposed and used for the comparison of the two compression methods instead of model based matching techniques and correspondence-based matching techniques, because there is no pairing of points in the two sets being compared. In addition entropy based metric (EM) or peak signal to noise ration based metric (PSNRM) cannot be used to compare the two schemes as the seam carving tends to deform the objects. The wavelet compressed images with different compression percentages were analyzed with HDM and EM and it was observed that HDM follows the EM/PSNRM for wavelet based compression methods. Then HDM is used to compare the wavelet and seam carved images for different compression percentages. The initial results showed that HDM is better metric for comparing wavelet based and seam carved images.
基金Supported by the National Natural Science Foundation of China!( 6 9875 0 0 9)
文摘In this paper, the second generation wavelet transform is applied to image lossless coding, according to its characteristic of reversible integer wavelet transform. The second generation wavelet transform can provide higher compression ratio than Huffman coding while it reconstructs image without loss compared with the first generation wavelet transform. The experimental results show that the se cond generation wavelet transform can obtain excellent performance in medical image compression coding.
文摘Due to the particularity of the seismic data, they must be treated by lossless compression algorithm in some cases. In the paper, based on the integer wavelet transform, the lossless compression algorithm is studied. Comparing with the traditional algorithm, it can better improve the compression rate. CDF (2, n) biorthogonal wavelet family can lead to better compression ratio than other CDF family, SWE and CRF, which is owe to its capability in can- celing data redundancies and focusing data characteristics. CDF (2, n) family is suitable as the wavelet function of the lossless compression seismic data.
基金supported by the National Natural Science Foundmion of China(No.29877016).
文摘In this paper the embedded zerotree wavelet (EZW) method and Huffman coding are proposed to compress infrared (IR) spectra. We found that this technique is much better than others in terms of efficiently coding wavelet coefficients because the zerotree quantization is an effective way of exploiting the self-similarities of wavelet coefficients at various resolutions.
文摘A new adaptive Packet algorithm based on Discrete Cosine harmonic wavelet transform (DCHWT), (DCAHWP) has been proposed. This is realized by the Discrete Cosine Harmonic Wavelet transform (DCHTWT) which exploits the good properties of DCT viz., energy compaction (low leakage), frequency resolution and computational simplicity due its real nature, compared to those of DFT and its harmonic wavelet version. Hence the proposed wavelet packet is advantageous both in terms of performance and computational efficiency compared to those of existing DFT harmonic wavelet packet. Further, the new DCAHWP also enjoys the desirable properties of a Harmonic wavelet transform over the time domain WT, viz., built in decimation without any explicit antialiasing filtering and easy interpolation by mere concatenation of different scales in frequency (DCT) domain with out any image rejection filter and with out laborious delay compensation required. Further, the compression by the proposed DCAHWP is much better compared to that by adaptive WP based on Daubechies-2 wavelet (DBAWP). For a compression factor (CF) of 1/8, the ratio of the percentage error energy by proposed DCAHWP to that by DBAWP is about 1/8 and 1/5 for considered 1-D signal and speech signal, respectively. Its compression performance is better than that of DCHWT, both for 1-D and 2-D signals. The improvement is more significant for signals with abrupt changes or images with rapid variations (textures). For compression factor of 1/8, the ratio of the percentage error energy by DCAHWP to that by DCHWT, is about 1/3 and 1/2, for the considered 1-D signal and speech signal, respectively. This factor for an image considered is 2/3 and in particular for a textural image it is 1/5.
文摘When an image, which is decomposed by bi-orthogonal wavelet bases, is reconstructed, some information will be lost at the four edges of the image. At the same time, artificial discontinuities will be introduced. We use a method called symmetric extension to solve the problem. We only consider the case of the two-band filter banks, and the results can be applied to M-band filter banks. There are only two types of symmetric extension in analysis phrase, namely the whole-sample symmetry (WS), the half-sample symmetry (HS), while there are four types of symmetric extension in synthesis phrase, namely the WS, HS, the whole-sample anti-symmetry (WA), and the half-sample anti-symmetry (HA) respectively. We can select the exact type according to the image length and the filter length, and we will show how to do these. The image can be perfectly reconstructed without any edge effects in this way. Finally, simulation results are reported. Key words edge effect - image compression - wavelet - biorthogonal bases - symmetric extension CLC number TP 37 Foundation item: Supported by the National 863 Project (20021111901010)Biography: Yu Sheng-sheng (1944-), male, Professor, research direction: multimedia information processing, SAN.
基金Supported by the National Natural Science Foundation of China (69983005)
文摘The paper presents a class of nonlinear adaptive wavelet transforms for lossless image compression. In update step of the lifting the different operators are chosen by the local gradient of original image. A nonlinear morphological predictor follows the update adaptive lifting to result in fewer large wavelet coefficients near edges for reducing coding. The nonlinear adaptive wavelet transforms can also allow perfect reconstruction without any overhead cost. Experiment results are given to show lower entropy of the adaptive transformed images than those of the non-adaptive case and great applicable potentiality in lossless image compresslon.
文摘We study an approach to integer wavelet transform for lossless compression of medical image in medical picture archiving and communication system (PACS). By lifting scheme a reversible integer wavelet transform is generated, which has the similar features with the corresponding biorthogonal wavelet transform. Experimental results of the method based on integer wavelet transform are given to show better performance and great applicable potentiality in medical image compression.
文摘In this paper a square wavelet thresholding method is proposed and evaluated as compared to the other classical wavelet thresholding methods (like soft and hard). The main advantage of this work is to design and implement a new wavelet thresholding method and evaluate it against other classical wavelet thresholding methods and hence search for the optimal wavelet mother function among the wide families with a suitable level of decomposition and followed by a novel thresholding method among the existing methods. This optimized method will be used to shrink the wavelet coefficients and yield an adequate compressed pressure signal prior to transmit it. While a comparison evaluation analysis is established, A new proposed procedure is used to compress a synthetic signal and obtain the optimal results through minimization the signal memory size and its transmission bandwidth. There are different performance indices to establish the comparison and evaluation process for signal compression;but the most well-known measuring scores are: NMSE, ESNR, and PDR. The obtained results showed the dominant of the square wavelet thresholding method against other methods using different measuring scores and hence the conclusion by the way for adopting this proposed novel wavelet thresholding method for 1D signal compression in future researches.
文摘The aggregation of data in recent years has been expanding at an exponential rate. There are various data generating sources that are responsible for such a tremendous data growth rate. Some of the data origins include data from the various social media, footages from video cameras, wireless and wired sensor network measurements, data from the stock markets and other financial transaction data, supermarket transaction data and so on. The aforementioned data may be high dimensional and big in Volume, Value, Velocity, Variety, and Veracity. Hence one of the crucial challenges is the storage, processing and extraction of relevant information from the data. In the special case of image data, the technique of image compressions may be employed in reducing the dimension and volume of the data to ensure it is convenient for processing and analysis. In this work, we examine a proof-of-concept multiresolution analytics that uses wavelet transforms, that is one popular mathematical and analytical framework employed in signal processing and representations, and we study its applications to the area of compressing image data in wireless sensor networks. The proposed approach consists of the applications of wavelet transforms, threshold detections, quantization data encoding and ultimately apply the inverse transforms. The work specifically focuses on multi-resolution analysis with wavelet transforms by comparing 3 wavelets at the 5 decomposition levels. Simulation results are provided to demonstrate the effectiveness of the methodology.
文摘In this paper, a new method of combination single layer wavelet transform and compressive sensing is proposed for image fusion. In which only measured the high-pass wavelet coefficients of the image but preserved the low-pass wavelet coefficient. Then, fuse the low-pass wavelet coefficients and the measurements of high-pass wavelet coefficient with different schemes. For the reconstruction, by using the minimization of total variation algorithm (TV), high-pass wavelet coefficients could be recovered by the fused measurements. Finally, the fused image could be reconstructed by the inverse wavelet transform. The experiments show the proposed method provides promising fusion performance with a low computational complexity.
文摘In this paper, a new mesh based algorithm is applied for motion estimation and compensation in the wavelet domain. The first major contribution of this work is the introduction of a new active mesh based method for motion estimation and compensation. The proposed algorithm is based on the mesh energy minimization with novel sets of energy functions. The proposed energy functions have appropriate features, which improve the accuracy of motion estimation and compensation algorithm. We employ the proposed motion estimation algorithm in two different manners for video compression. In the first approach, the proposed algorithm is employed for motion estimation of consecutive frames. In the second approach, the algorithm is applied for motion estimation and compensation in the wavelet sub-bands. The experimental results reveal that the incorporation of active mesh based motion-compensated temporal filtering into wavelet sub-bands significantly improves the distortion performance rate of the video compression. We also use a new wavelet coder for the coding of the 3D volume of coefficients based on the retained energy criteria. This coder gives the maximum retained energy in all sub-bands. The proposed algorithm was tested with some video sequences and the results showed that the use of the proposed active mesh method for motion compensation and its implementation in sub-bands yields significant improvement in PSNR performance.
基金This project is supported by the National Natural Science Foundation of China (No. 69774030) Foundation for University Key Teacher by the Ministry of Education.
文摘Based on explanation of wavelet fractal compression method, the significance of introducing wavelet decomposition into conventional fractal compression method is deeply investigated from the point of theoretical and practical view. The result of study can be regarded as valuable guidelines for taking advantages of wavelet transform to develop more effective image compression algorithm.
文摘A sparsifying transform for use in Compressed Sensing (CS) is a vital piece of image reconstruction for Magnetic Resonance Imaging (MRI). Previously, Translation Invariant Wavelet Transforms (TIWT) have been shown to perform exceedingly well in CS by reducing repetitive line pattern image artifacts that may be observed when using orthogonal wavelets. To further establish its validity as a good sparsifying transform, the TIWT is comprehensively investigated and compared with Total Variation (TV), using six under-sampling patterns through simulation. Both trajectory and random mask based under-sampling of MRI data are reconstructed to demonstrate a comprehensive coverage of tests. Notably, the TIWT in CS reconstruction performs well for all varieties of under-sampling patterns tested, even for cases where TV does not improve the mean squared error. This improved Image Quality (IQ) gives confidence in applying this transform to more CS applications which will contribute to an even greater speed-up of a CS MRI scan. High vs low resolution time of flight MRI CS re-constructions are also analyzed showing how partial Fourier acquisitions must be carefully addressed in CS to prevent loss of IQ. In the spirit of reproducible research, novel software is introduced here as FastTestCS. It is a helpful tool to quickly develop and perform tests with many CS customizations. Easy integration and testing for the TIWT and TV minimization are exemplified. Simulations of 3D MRI datasets are shown to be efficiently distributed as a scalable solution for large studies. Comparisons in reconstruction computation time are made between the Wavelab toolbox and Gnu Scientific Library in FastTestCS that show a significant time savings factor of 60×. The addition of FastTestCS is proven to be a fast, flexible, portable and reproducible simulation aid for CS research.
文摘The better compression rate can be achieved by the traditional vector quantization (VQ) method, and the quality of the recovered image can also be accepted. But the decompressed image quality can not be promoted efficiently, so how to balance the image compression rate and image recovering quality is an important issue, in this paper, an image is transformed by discrete wavelet transform (DWT) to generate its DWT transformed image which can be compressed by the VQ method further. Besides, we compute the values between the DWT transformed image and decompressed DWT transformed image as the difference matrix which is the adjustable basis of the decompressed image quality. By controlling the deviation of the difference matrix, there can be nearly Iossless compression for the VQ method. Experimental results show that when the number of compressed bits by our method is equal to the number of those bits compressed by the VQ method, the quality of our recovered image is better. Moreover, the proposed method has more compression capability comparing with the VQ scheme.