The amount of image data generated in multimedia applications is ever increasing, and image compression plays a vital role in such applications. The ultimate aim of image compression is to reduce storage space without degrading image quality; compression is required whenever large volumes of data must be stored or transmitted. A New Edge Directed Interpolation (NEDI)-based lifting Discrete Wavelet Transform (DWT) scheme with a modified Set Partitioning In Hierarchical Trees (MSPIHT) algorithm is proposed in this paper. The NEDI algorithm gives good visual quality, particularly at edges, and the main objective of this paper is to preserve edges while performing image compression, which is a challenging task. The NEDI with lifting DWT achieves a 99.18% energy level in the low-frequency ranges, which is 1.07% higher than the 5/3 wavelet decomposition and 0.94% higher than the traditional DWT. Combining NEDI-based lifting DWT with the MSPIHT algorithm gives a higher Peak Signal to Noise Ratio (PSNR) and a lower Mean Square Error (MSE), and hence better image quality. The experimental results show that the proposed method gives a better PSNR value (39.40 dB at 0.9 bpp without arithmetic coding) and a minimum MSE value of 7.4.
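For reference, the reversible 5/3 (LeGall) lifting step that underlies such lifting-DWT schemes can be sketched as follows. This is the standard JPEG 2000 integer lifting, not the authors' NEDI-adapted predictor; the function name and the wrap-around border handling are our own simplifications.

```python
import numpy as np

def lifting_53_1d(x):
    """One level of the reversible 5/3 (LeGall) lifting DWT on a 1-D
    signal of even length. Returns (approximation, detail) coefficients.
    The NEDI-based scheme replaces the fixed predictor below with an
    edge-directed one; border handling here is wrap-around for brevity
    (JPEG 2000 uses symmetric extension)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict step: detail = odd sample minus average of its even neighbours.
    right = np.roll(even, -1)            # even[i+1]
    d = odd - ((even + right) >> 1)      # arithmetic shift = floor division
    # Update step: approximation = even sample plus correction from details.
    left = np.roll(d, 1)                 # d[i-1]
    s = even + ((left + d + 2) >> 2)
    return s, d
```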
In order to reduce the noise in the images and the physical storage, a wavelet-based image compression technique was applied to PIV processing in this paper. To study the effect of the wavelet bases, standard PIV images were compressed with several known wavelet families (Daubechies, Coifman, and Beylkin) at various compression ratios. It was found that higher-order wavelet bases provide good compression performance for PIV images. Error analysis of the resulting velocity fields indicated that high compression ratios, even up to 64.1, can be realized without losing significant flow information in PIV processing. The wavelet compression technique was then applied to experimental images of a jet flow and showed excellent performance; the number of erroneous vectors can be reduced by varying the compression ratio. In short, the wavelet image compression technique is very effective in PIV systems.
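A minimal sketch of this kind of ratio-controlled wavelet compression, using PyWavelets; the choice of 'db8', the decomposition depth, and enforcing the ratio by keeping only the largest 1/ratio fraction of coefficients are our assumptions, not details taken from the paper.

```python
import numpy as np
import pywt

def wavelet_compress(img, ratio=64, wavelet='db8', levels=3):
    """Keep only the largest 1/ratio fraction of wavelet coefficients
    and reconstruct; a crude stand-in for a full wavelet coder."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    arr, slices = pywt.coeffs_to_array(coeffs)
    k = max(1, arr.size // ratio)                    # coefficients to keep
    thresh = np.partition(np.abs(arr).ravel(), -k)[-k]
    arr[np.abs(arr) < thresh] = 0.0                  # hard thresholding
    coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
    return pywt.waverec2(coeffs, wavelet)
```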
Data compression is one of the core fields of study for image and video processing applications. The raw data to be transmitted consumes large bandwidth and requires huge storage space; as a result, it is desirable to represent the information with considerably fewer bits by means of data compression techniques, while the data must be reconstituted very similarly to its initial form. In this paper, a hybrid compression based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) is used to enhance the quality of the reconstructed image. These techniques are followed by entropy encoding, such as Huffman coding, to give additional compression. Huffman coding is an optimal prefix code whose implementation is simpler, faster, and easier than other codes; it needs less execution time and yields the shortest average code length. The measurements for analysis are based upon Compression Ratio, Mean Square Error (MSE), and Peak Signal to Noise Ratio (PSNR). We applied the hybrid algorithm on DWT–DCT blocks of size 2×2, 4×4, 8×8, 16×16, and 32×32. Finally, we show that with the hybrid (DWT–DCT) compression technique, the PSNR of the image reconstructed by the proposed algorithm (DWT–DCT with 8×8 blocks) is considerably higher than that of DCT alone.
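The abstract does not spell out the block pipeline, but the general shape of a DWT-then-blockwise-DCT hybrid can be sketched as follows; applying the DCT only to the LL band and the zonal coefficient mask are our illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def hybrid_dwt_dct(img, block=8, keep=10):
    """One-level DWT, then 8x8 block DCT on the LL band, keeping only
    low-frequency coefficients per block via a diagonal zonal mask."""
    LL, (LH, HL, HH) = pywt.dwt2(img, 'haar')
    h = (LL.shape[0] // block) * block
    w = (LL.shape[1] // block) * block
    out = LL.copy()
    i, j = np.indices((block, block))
    mask = (i + j) < keep                 # keep coefficients near DC
    for r in range(0, h, block):
        for c in range(0, w, block):
            blk = dctn(LL[r:r+block, c:c+block], norm='ortho')
            out[r:r+block, c:c+block] = idctn(blk * mask, norm='ortho')
    return pywt.idwt2((out, (LH, HL, HH)), 'haar')
```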
A simple and adaptive lossless compression algorithm is proposed for remote sensing image compression, which includes an integer wavelet transform and the Rice entropy coder. By analyzing the probability distribution of integer wavelet transform coefficients and the characteristics of the Rice entropy coder, a divide-and-rule method is used for the high-frequency sub-bands and the low-frequency sub-band: high-frequency sub-bands are coded directly by the Rice entropy coder, while low-frequency coefficients are predicted before coding. The role of the predictor is to map the low-frequency coefficients into symbols suitable for entropy coding. Experimental results show that the average Compression Ratio (CR) of the approach is about two, which is close to that of JPEG 2000. The algorithm is simple and easy to implement in hardware. Moreover, it has the merits of adaptability and independent data packets, so it is well suited to spaceborne lossless compression applications.
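The Rice coder at the heart of this scheme is standard and easy to illustrate; the zigzag mapping of signed coefficients and the fixed parameter k below are common choices, not necessarily the paper's per-block adaptation.

```python
def rice_encode(values, k):
    """Rice-encode signed integers with parameter k: zigzag-map to
    unsigned, then emit the quotient in unary and the remainder in
    k binary bits. Returns the bit stream as a string for clarity."""
    bits = []
    for v in values:
        u = 2 * v if v >= 0 else -2 * v - 1      # zigzag: 0,-1,1,-2 -> 0,1,2,3
        q, r = u >> k, u & ((1 << k) - 1)
        rem = format(r, f'0{k}b') if k else ''
        bits.append('1' * q + '0' + rem)
    return ''.join(bits)

# Example: rice_encode([3, -1, 0, 7], k=2) -> '1010' + '010' + '000' + '11011'
```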
Assuring the protection and robustness of medical images is a compulsory necessity nowadays. In this paper, a novel technique is proposed that fuses the wavelet-induced multi-resolution decomposition of the Discrete Wavelet Transform (DWT) with the energy compaction of the Discrete Cosine Transform (DCT). The multi-level Encryption-based Hybrid Fusion Technique (EbHFT) aims to achieve great advances in terms of imperceptibility and security of medical images. A DWT-decomposed sub-band of a cover image is simultaneously transformed using the DCT. Afterwards, a 64-bit hex key is employed to encrypt the host image as well as to participate in the second key-creation process that encodes the watermark. Lastly, a PN-sequence key is formed along with a supplementary key in the third layer of the EbHFT, and the watermarked image is generated by embedding both keys into the DWT and DCT coefficients. The fusion ability of the proposed EbHFT technique makes the best use of the distinct advantages of both the DWT and DCT methods. In order to validate the proposed technique, a standard dataset of medical images is used. Simulation results show high visual quality (i.e., 57.65) for the watermarked forms of all types of medical images. In addition, the robustness of EbHFT outperforms an existing scheme tested on the same dataset in terms of Normalized Correlation (NC). Finally, the proposed technique provides extra protection for digital images against illegal replication and unapproved tampering.
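The full three-key EbHFT pipeline is specific to the paper, but its core fusion step, hiding information in the DCT of a DWT sub-band using a PN sequence, can be sketched roughly as follows; the sub-band choice, the additive spread-spectrum embedding, and the strength alpha are our illustrative assumptions, not the paper's construction.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed_bits(cover, bits, alpha=2.0, seed=1234):
    """Embed watermark bits into the DCT of the HL sub-band by adding
    bit-signed pseudo-noise patterns (additive spread spectrum).
    `seed` stands in for the PN-sequence key. Illustrative only."""
    LL, (LH, HL, HH) = pywt.dwt2(cover.astype(float), 'haar')
    C = dctn(HL, norm='ortho')
    rng = np.random.default_rng(seed)
    pn = rng.choice([-1.0, 1.0], size=(len(bits),) + C.shape)
    for b, p in zip(bits, pn):
        C += alpha * (1.0 if b else -1.0) * p    # detection correlates with p
    HL = idctn(C, norm='ortho')
    return pywt.idwt2((LL, (LH, HL, HH)), 'haar')
```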
With the advances of display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of stimulating 3-D perception is to use stereo pairs: a pair of images of the same scene acquired from different perspectives. Since there is inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. Proposed techniques generally use block-based disparity compensation. In order to obtain a higher compression ratio, this paper employs a wavelet-based mixed-resolution coding technique combined with subspace projection technique (SPT)-based disparity compensation to compress stereo image data. Mixed-resolution coding is a perceptually justified technique that presents one eye with a low-resolution image and the other with a high-resolution image; psychophysical experiments show that a stereo pair with one high-resolution image and one low-resolution image provides almost the same stereo depth as a pair of two high-resolution images. By combining the mixed-resolution coding and SPT-based disparity-compensation techniques, the reference (left) high-resolution image can be compressed by a hierarchical wavelet transform followed by vector quantization and Huffman encoding. After two levels of wavelet decomposition, subspace projection with fixed-block-size disparity-compensation estimation is applied to the low-resolution right and left images. At the decoder, the low-resolution right subimage is estimated using the disparity from the low-resolution left subimage, and a full-size reconstruction is obtained by upsampling by a factor of 4 and reconstructing with the synthesis low-pass filter. Finally, experimental results are presented, which show that the scheme achieves a PSNR gain (about 0.92 dB) over current block-based disparity-compensation coding techniques.
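Fixed-block-size disparity estimation of the kind used here is easy to sketch: for each block of one view, search horizontally in the other view for the best match. The block size, search range, and sum-of-absolute-differences (SAD) criterion below are assumptions.

```python
import numpy as np

def block_disparity(left, right, block=8, max_disp=32):
    """For each block of the right image, find the horizontal shift into
    the left image that minimises the SAD; returns a disparity map with
    one integer disparity per block."""
    H, W = right.shape
    disp = np.zeros((H // block, W // block), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            ref = right[y:y+block, x:x+block].astype(float)
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, W - x - block) + 1):
                cand = left[y:y+block, x+d:x+d+block].astype(float)
                sad = np.abs(ref - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp
```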
The watermarking technique has been proposed as a method of hiding secret information in an image to protect the copyright of multimedia data, but most previous work focuses on algorithms for embedding one-dimensional watermarks or two-dimensional binary digital watermarks. In this paper, a wavelet-based method for embedding a gray-level digital watermark into an image is proposed. Using a still-image decomposition technique, the gray-level digital watermark is decomposed into a series of bitplanes. By the discrete wavelet transform (DWT), the host image is decomposed into multiresolution representations with a hierarchical structure, and the different bitplanes of the gray-level watermark are embedded into the corresponding resolutions of the decomposed host image. The experimental results show that the proposed technique can successfully survive image processing operations and lossy compression techniques such as Joint Photographic Experts Group (JPEG) coding.
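The bitplane decomposition step is straightforward to show directly; the embedding itself follows the paper and is not reproduced here.

```python
import numpy as np

def bitplanes(watermark):
    """Decompose an 8-bit gray-level watermark into 8 binary bitplanes,
    most significant plane first."""
    w = np.asarray(watermark, dtype=np.uint8)
    return [(w >> b) & 1 for b in range(7, -1, -1)]

def reassemble(planes):
    """Inverse: recombine the bitplanes into the gray-level watermark."""
    return sum(p.astype(np.uint8) << b
               for p, b in zip(planes, range(7, -1, -1)))
```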
A good compression rate can be achieved by the traditional vector quantization (VQ) method, and the quality of the recovered image is also acceptable, but the decompressed image quality cannot be improved efficiently; balancing the image compression rate and the recovered image quality is therefore an important issue. In this paper, an image is transformed by the discrete wavelet transform (DWT) to generate a DWT-transformed image, which is then compressed further by the VQ method. Besides, we compute the difference between the DWT-transformed image and the decompressed DWT-transformed image as a difference matrix, which serves as an adjustable basis for the decompressed image quality. By controlling the deviation of the difference matrix, the VQ method can be made nearly lossless. Experimental results show that when the number of bits compressed by our method equals the number compressed by the VQ method, the quality of our recovered image is better. Moreover, the proposed method has more compression capability compared with the VQ scheme.
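A rough sketch of the VQ stage and the difference matrix, using k-means as the codebook learner; the 4×4 block size, codebook size, and use of scikit-learn are our assumptions rather than the paper's quantizer.

```python
import numpy as np
from sklearn.cluster import KMeans

def vq_compress(band, block=4, codebook_size=256):
    """Vector-quantize a DWT sub-band: split it into 4x4 blocks, learn a
    codebook with k-means, and return block indices, the codebook, and
    the difference matrix (original minus quantized) whose deviation can
    be controlled to trade quality for extra bits."""
    h = (band.shape[0] // block) * block
    w = (band.shape[1] // block) * block
    blocks = (band[:h, :w]
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2)
              .reshape(-1, block * block))
    km = KMeans(n_clusters=codebook_size, n_init=4).fit(blocks)
    quantized = km.cluster_centers_[km.labels_]
    diff = blocks - quantized           # adjustable basis for quality
    return km.labels_, km.cluster_centers_, diff
```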
Watermarking of digital images is required in diversified applications ranging from medical imaging to commercial images used over the web. Usually, the copyright information is embossed over the image in the form of a logo at the corner or diagonal text in the background. However, this form of visible watermarking is not suitable for a large class of applications. In all such cases, a hidden watermark is embedded inside the original image as proof of ownership. A large number of techniques and algorithms have been proposed by researchers for invisible watermarking. In this paper, we focus on issues that are critical for security aspects in the most common domains like digital photography copyrighting, online image stores, etc. The requirements of this class of application include robustness (resistance to attack), blindness (direct extraction without the original image), high embedding capacity, high Peak Signal to Noise Ratio (PSNR), and high Structural Similarity Index Measure (SSIM). Most of these requirements are conflicting, which means that an attempt to maximize one requirement harms the others. In this paper, a blind image watermarking scheme is proposed using the Lifting Wavelet Transform (LWT) as the baseline. Using this technique, custom binary watermarks in the form of a binary string can be embedded, and the coefficients of Hu's invariant moments are used as a key to extract the watermark. A stochastic variant of the Firefly algorithm (FA) is used to optimize the technique. Under a prespecified size of embedding data, high PSNR and SSIM are obtained using the stochastic-gradient variant of the Firefly technique. The simulation is done using the Matrix Laboratory (MATLAB) tool, and it is shown that the proposed technique outperforms benchmark watermarking techniques with PSNR and SSIM as quality metrics.
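Hu's invariant moments are standard and can be computed with OpenCV as below; packing them into a binary extraction key is purely our illustration, since the paper's exact key derivation is not given in the abstract.

```python
import cv2
import numpy as np

def hu_moment_key(img, bits_per_moment=8):
    """Compute Hu's seven invariant moments of an image and pack their
    log-scaled values into a compact binary string usable as a key
    (hypothetical derivation for illustration)."""
    m = cv2.moments(img.astype(np.float32))
    hu = cv2.HuMoments(m).ravel()
    # Log-scale: Hu moments span many orders of magnitude.
    logs = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
    q = np.clip(np.round(logs * 4).astype(int), 0, 2**bits_per_moment - 1)
    return ''.join(format(v, f'0{bits_per_moment}b') for v in q)
```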
This paper presents a hybrid technique for the compression of ECG signals based on the DWT and on exploiting the correlation between signal samples. It incorporates Discrete Wavelet Transform (DWT), Differential Pulse Code Modulation (DPCM), and run-length coding techniques for the compression of different parts of the signal, where lossless compression is adopted in clinically relevant parts and lossy compression is used in parts that are not clinically relevant. The proposed compression algorithm begins by segmenting the ECG signal into its main components (P-waves, QRS-complexes, T-waves, U-waves, and the isoelectric waves). The resulting waves are grouped into Region of Interest (RoI) and Non Region of Interest (NonRoI) parts, and lossless and lossy compression schemes are applied to the RoI and NonRoI parts respectively. Ideally we would like to compress the signal losslessly, but in many applications this is not an option. Thus, given a fixed bit budget, it makes sense to spend more bits on those parts of the signal that belong to the RoI, reconstructing them with higher fidelity, while allowing other parts to suffer larger distortion. For this purpose, the correlation between successive samples of the RoI part is exploited by the DPCM approach, while the NonRoI part is compressed using DWT, thresholding, and coding techniques. The wavelet transform concentrates the signal energy into a small number of transform coefficients; compression is then achieved by selecting a subset of the most relevant coefficients, which are afterwards efficiently coded. Illustrative examples are given to demonstrate thresholding based on an energy packing efficiency strategy, coding of DWT coefficients, and data packetizing. The performance of the proposed algorithm is tested in terms of compression ratio and the PRD distortion metric for the compression of 10 seconds of data extracted from records 100 and 117 of the MIT-BIH database. The obtained results reveal that the proposed technique achieves higher compression ratios and lower PRD compared to other wavelet transformation techniques. The principal advantages of the proposed approach are: 1) the deployment of different compression schemes for different ECG parts to reduce the correlation between consecutive signal samples; and 2) high compression ratios with acceptable reconstructed signal quality compared to recently published results.
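The first-order DPCM applied to the RoI samples can be sketched as follows; it is lossless because the decoder accumulates the transmitted differences exactly, and the residuals are small (and thus cheap to entropy-code) when successive samples are strongly correlated.

```python
import numpy as np

def dpcm_encode(samples):
    """First-order DPCM: transmit the first sample plus successive
    integer differences between neighbouring samples."""
    x = np.asarray(samples, dtype=np.int64)
    return x[0], np.diff(x)

def dpcm_decode(first, residuals):
    """Lossless inverse: accumulate the differences onto the first sample."""
    return np.concatenate(([first], first + np.cumsum(residuals)))
```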