To utilize residual redundancy to reduce errors induced by fading channels, and to decrease the complexity of the field model that describes the probability structure of residual redundancy, a simplified statistical model for residual redundancy and a low-complexity joint source-channel decoding (JSCD) algorithm are proposed. The complicated residual redundancy in wavelet-compressed images is decomposed into several independent 1-D probability check equations composed of Markov chains, and is regarded as a natural channel code with a structure similar to the low-density parity-check (LDPC) code. A parallel sum-product (SP) iterative JSCD algorithm is proposed. Simulation results show that the proposed JSCD algorithm makes full use of residual redundancy in different directions to correct errors, improves the peak signal-to-noise ratio (PSNR) of the reconstructed image, and reduces the complexity and delay of JSCD. At the same data rate, the performance of JSCD is more robust than that of a traditional separate coding system with arithmetic coding.
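The core idea of exploiting residual Markov redundancy to correct channel errors can be illustrated by a 1-D sum-product (forward-backward) sweep. This is a generic sketch of soft decoding on a single Markov chain, not the paper's full 2-D iterative JSCD; the transition matrix and channel parameters below are illustrative assumptions.

```python
import numpy as np

def forward_backward(obs_like, trans, prior):
    """Posterior state probabilities of a Markov chain observed through
    a noisy channel (BCJR-style sum-product on a 1-D chain). obs_like[t, s]
    is the channel likelihood of state s at time t."""
    T, S = obs_like.shape
    alpha = np.zeros((T, S))
    beta = np.ones((T, S))
    alpha[0] = prior * obs_like[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                      # forward pass
        alpha[t] = obs_like[t] * (alpha[t - 1] @ trans)
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):             # backward pass
        beta[t] = trans @ (obs_like[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# A binary source with strong memory, sent over a BSC with p = 0.2.
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
p = 0.2
true_bits = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
received = true_bits.copy()
received[3] ^= 1                               # one channel flip
obs_like = np.where(received[:, None] == np.arange(2), 1 - p, p)
post = forward_backward(obs_like, trans, np.array([0.5, 0.5]))
decoded = post.argmax(axis=1)                  # the source memory corrects the flip
```

Because a bit flip (likelihood ratio 0.2/0.8) is cheaper to explain than two extra state transitions (0.1²/0.9²), the decoder restores the flipped bit from its context.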
The traditional vector quantization (VQ) method achieves a good compression rate, and the quality of the recovered image is acceptable, but the decompressed image quality cannot be improved efficiently; balancing the compression rate against the quality of the recovered image is therefore an important issue. In this paper, an image is transformed by the discrete wavelet transform (DWT) to generate a DWT-transformed image, which is then further compressed by the VQ method. In addition, we compute the difference matrix between the DWT-transformed image and the decompressed DWT-transformed image, which serves as the adjustable basis of the decompressed image quality. By controlling the deviation of the difference matrix, the VQ method can achieve nearly lossless compression. Experimental results show that when the number of bits compressed by our method equals the number of bits compressed by the VQ method alone, the quality of our recovered image is better. Moreover, the proposed method has more compression capability than the VQ scheme.
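The difference-matrix idea can be sketched in a few lines. Here a uniform quantizer stands in for the DWT + VQ codec, and `threshold` (a name introduced here, not from the paper) controls which residual entries are kept so the reconstruction error stays bounded:

```python
import numpy as np

def compress_with_residual(image, codec, threshold):
    """Compress with a lossy codec, then keep only the entries of the
    difference matrix whose magnitude exceeds `threshold`, so the
    decoder can correct the largest errors."""
    lossy = codec(image)                 # lossy reconstruction
    diff = image - lossy                 # difference matrix
    residual = np.where(np.abs(diff) > threshold, diff, 0.0)
    return lossy, residual

def decompress(lossy, residual):
    return lossy + residual

# Toy stand-in for a VQ codec: uniform quantization with step 8.
codec = lambda x: np.round(x / 8.0) * 8.0

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8)).astype(float)
lossy, residual = compress_with_residual(img, codec, threshold=2.0)
rec = decompress(lossy, residual)
max_err = float(np.max(np.abs(img - rec)))   # bounded by the threshold
```

Lowering the threshold transmits more of the difference matrix and moves the scheme toward fully lossless, which is the adjustable trade-off the abstract describes.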
Due to coarse quantization, block-based discrete cosine transform (BDCT) compression methods usually suffer from visible blocking artifacts at the block boundaries. A novel, efficient de-blocking method in the DCT domain is proposed. A specific criterion for edge detection is given; a one-dimensional DCT is applied to each row of the adjacent blocks and of the shifted block in smooth regions, and the transform coefficients of the shifted block are modified by weighting the average of three coefficients of the block. The mean square difference of slope criterion is used to judge the efficiency of the proposed algorithm. Simulation results show that the new method not only obtains satisfactory image quality, but also maintains high-frequency information. (Funding: Science and Technology Project of Guangdong Province (2006A10201003), 2005 Jinan University Startup Project (51205067), and Soft Science Project of Guangdong Province (2006B70103011).)
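The row-wise 1-D DCT at the core of this de-blocking step can be sketched with a generic orthonormal DCT-II; the paper's specific three-coefficient weighting rule is not reproduced here:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: entry (k, i) weights sample i in basis k."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def row_dct(block):
    """Apply a 1-D DCT to each row of a block, as the de-blocking step
    does before modifying the shifted block's coefficients."""
    d = dct_matrix(block.shape[1])
    return block @ d.T

block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = row_dct(block)
```

Since the transform matrix is orthogonal, the row DCT preserves energy and is exactly invertible (`coeffs @ dct_matrix(8)` recovers the block), so any smoothing happens only through the subsequent coefficient modification.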
With the advent of the information security era, it is necessary to guarantee the privacy, accuracy, and dependable transfer of pictures. This study presents a new approach to the encryption and compression of color images, predicated on 2D compressed sensing (CS) and a hyperchaotic system. First, an optimized Arnold scrambling algorithm is applied to the initial color images to ensure strong security. Then, the processed images are concurrently encrypted and compressed using 2D CS; chaotic sequences replace traditional random measurement matrices to increase the system's security. Third, the processed images are re-encrypted using a combination of permutation and diffusion algorithms. In addition, the 2D projected gradient with embedding decryption (2DPG-ED) algorithm is used to reconstruct images. Compared with the traditional reconstruction algorithm, the 2DPG-ED algorithm improves security, reduces computational complexity, and has better robustness. The experimental outcome and the performance analysis indicate that this algorithm can withstand malicious attacks and prove that the method is effective. (Funding: supported in part by the National Natural Science Foundation of China under Grants 71571091 and 71771112, the State Key Laboratory of Synthetical Automation for Process Industries Fundamental Research Funds under Grant PAL-N201801, and the Excellent Talent Training Project of University of Science and Technology Liaoning under Grant 2019RC05.)
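The classical Arnold (cat-map) scrambling that the optimized variant builds on maps pixel (x, y) to (x + y, x + 2y) mod n on a square image; the paper's optimization is not reproduced here. A minimal sketch with its exact inverse:

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Classical Arnold cat-map scrambling: (x, y) -> (x + y, x + 2y) mod n."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_unscramble(img, iterations=1):
    """Inverse map (the cat map matrix has determinant 1):
    (x, y) -> (2x - y, -x + y) mod n."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        restored = np.empty_like(out)
        restored[(2 * x - y) % n, (-x + y) % n] = out[x, y]
        out = restored
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
scrambled = arnold_scramble(img, 3)
```

Scrambling is a pure permutation, so pixel statistics are unchanged; that is why the abstract follows it with CS measurement and diffusion stages.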
With the rapid development of artificial intelligence and the widespread use of the Internet of Things, semantic communication, as an emerging communication paradigm, has been attracting great interest. Taking image transmission as an example, from the semantic communication point of view, not all pixels in an image are equally important to a given receiver. Existing semantic communication systems perform semantic encoding and decoding directly on the whole image, so the region of interest cannot be identified. In this paper, we propose a novel semantic communication system for image transmission that can distinguish between Regions Of Interest (ROI) and Regions Of Non-Interest (RONI) based on semantic segmentation: a semantic segmentation algorithm classifies each pixel of the image as ROI or RONI. The system also enables high-quality transmission of the ROI with lower communication overhead by transmitting the regions through different semantic communication networks with different bandwidth requirements. An improved metric, θPSNR, is proposed to evaluate the transmission accuracy of the proposed semantic transmission network. Experimental results show that our proposed system achieves a significant performance improvement over existing semantic communication approaches and the conventional approach without semantics. (Funding: supported in part by collaborative research with Toyota Motor Corporation, in part by ROIS NII Open Collaborative Research under Grant 21S0601, and in part by JSPS KAKENHI under Grants 20H00592 and 21H03424.)
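The abstract does not define θPSNR, but a region-weighted PSNR conveys the idea: distortion inside the ROI should count for more than distortion in the RONI. The weighting scheme and parameter names below are assumptions introduced for illustration, not the paper's metric.

```python
import numpy as np

def roi_weighted_psnr(ref, rec, roi_mask, w_roi=0.9, peak=255.0):
    """Hypothetical ROI-weighted PSNR: mix the MSE of the ROI and RONI
    with weight w_roi before converting to decibels."""
    roi = roi_mask.astype(bool)
    mse_roi = np.mean((ref[roi] - rec[roi]) ** 2)
    mse_roni = np.mean((ref[~roi] - rec[~roi]) ** 2)
    mse = w_roi * mse_roi + (1.0 - w_roi) * mse_roni
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.uniform(0, 255, (8, 8))
roi = np.zeros((8, 8), dtype=bool)
roi[:4] = True                    # top half is the region of interest
rec = ref.copy()
rec[~roi] += 10.0                 # degrade only the background (RONI)
score = roi_weighted_psnr(ref, rec, roi)
```

With degradation confined to the RONI, the weighted score stays well above the plain PSNR, matching the system's goal of protecting ROI quality while spending less bandwidth on the background.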
The act of transmitting photos via the Internet has become a routine and significant activity. Enhancing the security measures that safeguard these images from counterfeiting and modification is a critical domain that can still be further improved. This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images. The main goal of this work is to create a highly effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs by means of image compression. The paper introduces a content-based image authentication mechanism that is suitable for use across an untrusted network and resistant to data loss during transmission. By employing scale attributes and a key-dependent parametric Long Short-Term Memory (LSTM), it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions. Furthermore, the transmission of biometric data in a compressed format over a wireless network has been successfully implemented. For applications involving the transmission and sharing of images across a network, the suggested technique utilizes the scalability of a structural digital signature to attain a satisfactory equilibrium between security and picture transfer. An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by sharing the processing responsibilities. This scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss. The approach employs multi-scale characteristics to improve the resistance of signatures against image deterioration. The proposed system attained an accuracy of 98% under Gaussian noise and a rotation accuracy surpassing 99%.
Multispectral image compression and encryption algorithms commonly suffer from issues such as low compression efficiency, lack of synchronization between the compression and encryption processes, and degradation of intrinsic image structure. A novel approach is proposed to address these issues. First, a chaotic sequence is generated using the Lorenz three-dimensional chaotic map to initiate the encryption process; it is XORed with each spectral band of the multispectral image to complete the initial encryption. Then, a two-dimensional lifting 9/7 wavelet transform is applied to the processed image. Next, a key-sensitive Arnold scrambling technique is employed on the resulting low-frequency image; it effectively eliminates spatial redundancy in the multispectral image while enhancing the encryption process. To further optimize the compression and encryption processes, fast Tucker decomposition is applied to the wavelet sub-band tensor, effectively removing both spectral redundancy and residual spatial redundancy. Finally, the core tensor and factor matrices obtained from the decomposition are subjected to entropy encoding, and real-time chaotic encryption is implemented during the encoding process, effectively integrating compression and encryption. The results show that the proposed algorithm is suitable for applications with high requirements for compression and encryption, and it provides valuable insights for the development of compression and encryption in the multispectral field. (Funding: the National Natural Science Foundation of China (No. 11803036) and the Climbing Program of Changchun University (No. ZKP202114).)
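The first stage, XORing each band with a Lorenz-derived keystream, can be sketched as follows. The Euler step size, burn-in length, and byte-quantization rule are illustrative assumptions, not the paper's exact key schedule:

```python
import numpy as np

def lorenz_keystream(n, x0=0.1, y0=0.2, z0=0.3, dt=0.002,
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0, burn=1000):
    """Generate n keystream bytes from the Lorenz system integrated with
    a simple Euler scheme; the first `burn` samples are discarded so the
    trajectory settles onto the attractor."""
    x, y, z = x0, y0, z0
    out = np.empty(n, dtype=np.uint8)
    for i in range(burn + n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= burn:
            out[i - burn] = np.uint8(int(abs(x) * 1e6) % 256)  # quantize to a byte
    return out

def xor_band(band, key):
    """XOR one spectral band with the keystream; encryption and
    decryption are the same operation."""
    flat = band.ravel()
    return (flat ^ key[: flat.size]).reshape(band.shape)

band = np.arange(64, dtype=np.uint8).reshape(8, 8)
key = lorenz_keystream(band.size)
cipher = xor_band(band, key)
plain = xor_band(cipher, key)
```

The key sensitivity of the scheme comes from the chaotic dynamics: a tiny change in (x0, y0, z0) yields an entirely different keystream.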
In the process of image transmission, the well-known JPEG and JPEG-2000 compression methods need more transmission time because it is difficult for them to compress an image at a low compression rate. Recently the compressed sensing (CS) theory was proposed, which has attracted great attention because it can compress an image at a low compression rate while the original image can be perfectly reconstructed from only a few compressed data. The CS theory is used here to transmit high-resolution astronomical images, and a simulation environment is built in which there is communication between a satellite and the Earth. Numerous experimental results show that the CS theory can effectively reduce the image transmission and reconstruction time. Even at a very low compression rate, it can still recover a higher-quality astronomical image than the JPEG and JPEG-2000 compression methods.
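The CS principle — a few random measurements suffice to recover a sparse signal exactly — can be demonstrated with orthogonal matching pursuit (OMP), a standard reconstruction baseline; the abstract does not specify which solver the authors use:

```python
import numpy as np

def omp(phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = phi @ x
    by greedily selecting the atom most correlated with the residual."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coef
    x = np.zeros(phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 50, 3                        # signal length, measurements, sparsity
phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = phi @ x_true                            # 50 "compressed" numbers for a length-100 signal
x_hat = omp(phi, y, k)
err = float(np.linalg.norm(x_hat - x_true))
```

Only m = 50 measurements are transmitted instead of n = 100 samples, yet the sparse signal is recovered essentially exactly — the property that lets CS cut transmission time at very low rates.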
Plug-and-play priors are popular for solving ill-posed imaging inverse problems. Recent efforts indicate that the convergence guarantees of imaging algorithms using plug-and-play priors rely on the assumption of bounded denoisers. However, the bounded properties of existing plugged Gaussian denoisers have not been proven explicitly. To bridge this gap, we detail a novel provably bounded denoiser, termed BMDual, which combines a trainable denoiser using dual tight frames with the well-known block-matching and 3D filtering (BM3D) denoiser. We incorporate the multiple dual frames utilized by BMDual into a novel regularization model induced by a solver. The proposed regularization model is applied to compressed sensing magnetic resonance imaging (CSMRI). We theoretically show the bound of the BMDual denoiser and the bounded gradient of the CSMRI data-fidelity function, and further demonstrate that the proposed CSMRI algorithm converges. Experimental results demonstrate that the proposed algorithm has good convergence behavior and show its effectiveness.
In this paper, we propose compression-based multiple color target detection for practical near real-time optical pattern recognition applications. By reducing the size of the color images to their utmost compression, the speed and the storage of the system are greatly improved. We use the powerful fringe-adjusted joint transform correlation technique to detect compression-based multiple targets in colored images. The colored image is decomposed into its three fundamental color component images (red, green, blue), which are separately processed by three-channel correlators. The outputs of the three channels are then combined into a single correlation output. To eliminate the false alarms and zero-order terms due to multiple desired and undesired targets in a scene, we use the reference shifted phase-encoded and the reference phase-encoded techniques. The performance of the proposed compression-based technique is assessed through many computer simulation tests on images polluted by strong additive Gaussian and salt-and-pepper noise, as well as on reference-occluded images. The robustness of the scheme is demonstrated for severely compressed images (up to a 94% ratio), strong noise densities (up to 0.5), and large reference occlusions (up to 75%).
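The per-channel correlation stage can be sketched with a plain FFT-based correlator standing in for the fringe-adjusted joint transform correlator (whose fringe-adjustment filter is not reproduced here); each color channel is processed this way and the three outputs are combined:

```python
import numpy as np

def correlate_channel(scene, target):
    """Locate a target in one color channel by FFT-based cross-correlation;
    the correlation peak marks the target position."""
    s = np.fft.fft2(scene)
    t = np.fft.fft2(target, s=scene.shape)
    corr = np.real(np.fft.ifft2(s * np.conj(t)))
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(2)
target = rng.uniform(size=(4, 4))
scene = np.zeros((32, 32))
scene[10:14, 20:24] = target          # embed the target at (10, 20)
peak = correlate_channel(scene, target)
```

For a full-color detector, the same routine would run on the red, green, and blue components and the three correlation planes would be summed before peak detection.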
A blind digital image forensic method for detecting copy-paste forgery between JPEG images is proposed. Two copy-paste tampering scenarios are introduced first: the tampered image is saved in an uncompressed format, or in a JPEG compressed format. The proposed detection method is then analyzed and simulated for all the cases of the two tampering scenarios. The tampered region is detected by computing the averaged sum of absolute difference (ASAD) images between the examined image and a resaved JPEG compressed image at different quality factors. The experimental results show the advantages of the proposed method: the capability of detecting small and/or multiple tampered regions, simple computation, and hence fast processing speed.
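The ASAD computation itself is simple to sketch. Below, `fake_jpeg` is a toy stand-in for a real JPEG round-trip (which would use an image library such as Pillow); only the averaging structure follows the abstract:

```python
import numpy as np

def asad_map(examined, resave, qualities):
    """Averaged sum of absolute differences between the examined image
    and versions of it recompressed at several quality factors. Pixels
    whose compression history differs from the rest of the image stand
    out in this map."""
    diffs = [np.abs(examined - resave(examined, q)) for q in qualities]
    return np.mean(diffs, axis=0)

def fake_jpeg(img, quality):
    """Toy 'resave': coarser quantization for lower quality (an
    illustrative assumption, not real JPEG)."""
    step = max(1.0, (100.0 - quality) / 10.0)
    return np.round(img / step) * step

rng = np.random.default_rng(3)
img = rng.uniform(0, 255, (16, 16))
amap = asad_map(img, fake_jpeg, qualities=[60, 70, 80, 90])
```

A region pasted from an uncompressed source reacts differently to requantization than the surrounding once-compressed pixels, so thresholding the ASAD map localizes the tampering.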
The isotherm is an important feature of infrared satellite cloud images (ISCI), which can directly reveal substantial information about cloud systems. Isotherm extraction from ISCI removes redundant information and therefore helps to compress the information in ISCI. In this paper, an isotherm extraction method is presented. The main aggregate of clouds is segmented based on mathematical morphology; the T algorithm and the IP algorithm are then applied to extract the isotherms from the main aggregate of clouds. A concrete example of isotherm extraction on an IBM SP2 is described. The result shows that this is a highly efficient algorithm, which can be used in feature extraction from infrared images for weather forecasting.
[Objective] The aim was to present a new image compression technology that allows an image to be stored in a smaller space and transmitted at a smaller bit rate while guaranteeing image quality, for the rape crop monitoring system in the Qinling Mountains. [Method] In the proposal, the color image was divided into brightness images in the three fundamental colors, followed by sub-image division and DCT treatment. Then, the transform-domain coefficients were quantized, encoded, and compressed by Huffman coding. Finally, decompression was conducted through the inverse process and the decompressed images were matched. [Result] The simulation results show that when the compression ratio of the color image of rape crops was 11.9723:1, the differences between the decompressed images and the source images could not be distinguished with the naked eye; when the ratio was as high as 53.5656:1, the PSNR was still above 30 dB, the encoding efficiency exceeded 0.78, and the redundancy was less than 0.22. [Conclusion] The results indicate that the proposed color image compression technology can achieve a high compression ratio while preserving good image quality. Image encoding quality and the decompressed images achieved good results, fully meeting the requirements of image storage and transmission in the rape crop monitoring system in the Qinling Mountains.
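The encoding efficiency and redundancy figures quoted in the result are standard Huffman-coding statistics: efficiency is the source entropy H divided by the average code length L, and redundancy is 1 − H/L. A minimal sketch on a toy symbol stream:

```python
import heapq
from collections import Counter
from math import log2

def huffman_lengths(freqs):
    """Huffman code lengths for a symbol -> count map, via the standard
    greedy merge of the two least-frequent subtrees."""
    heap = [(c, i, {s: 0}) for i, (s, c) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tick = len(heap)                      # tie-breaker so dicts are never compared
    while len(heap) > 1:
        c1, _, d1 = heapq.heappop(heap)
        c2, _, d2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}
        heapq.heappush(heap, (c1 + c2, tick, merged))
        tick += 1
    return heap[0][2]

def coding_stats(symbols):
    """Entropy H, average code length L, efficiency H/L, redundancy 1 - H/L."""
    freqs = Counter(symbols)
    n = len(symbols)
    probs = {s: c / n for s, c in freqs.items()}
    lengths = huffman_lengths(freqs)
    entropy = -sum(p * log2(p) for p in probs.values())
    avg_len = sum(probs[s] * lengths[s] for s in probs)
    eff = entropy / avg_len
    return entropy, avg_len, eff, 1.0 - eff

entropy, avg_len, eff, red = coding_stats("aaaabbbccd")
```

For this stream the Huffman code assigns lengths (1, 2, 3, 3), giving L = 1.9 bits/symbol against H ≈ 1.846 bits, i.e. efficiency ≈ 0.97 — the same kind of figure the abstract reports for the quantized DCT coefficients.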
By investigating the limitations of existing wavelet-tree-based image compression methods, we propose a novel wavelet fractal image compression method. Briefly, initial errors are assigned according to the different levels of importance accorded to the wavelet coefficients of the frequency sub-bands; higher-frequency sub-bands are given larger initial errors. As a result, the sizes of the sub-level blocks and super blocks are changed according to the initial errors, and the matching sizes between sub-level blocks and super blocks are changed according to the permitted errors and compression rates. Systematic analyses are performed, and the experimental results demonstrate that the proposed method provides satisfactory performance with a clearly increased compression rate and encoding speed, without reducing the SNR or the quality of the decoded images. Simulation results show that our method is superior to traditional wavelet-tree-based methods of fractal image compression.
Many classical vector quantization (VQ) encoding algorithms for image compression that can obtain the globally optimal solution have computational complexity O(N). A pure quantum VQ encoding algorithm with a probability of success near 100% has been proposed, which performs approximately 45√N operations. In this paper, a hybrid quantum VQ encoding algorithm combining the classical method and the quantum algorithm is presented. The number of its operations is less than √N for most images, and it is more efficient than the pure quantum algorithm.
Based on Jacquin's work, this paper presents an adaptive block-based fractal image coding scheme. Firstly, masking functions are used to classify range blocks and weight the mean square error (MSE) of images. Secondly, an adaptive block partition scheme is introduced by developing the quadtree partition method. Thirdly, a piecewise uniform quantization strategy is applied to quantize the luminance shifting. Finally, experimental results are shown and compared with those reported by Jacquin and Lu to verify the validity of the methods addressed by the authors.
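The quadtree partition that the adaptive scheme develops can be sketched with a generic variance criterion: a block is split into four quadrants while its content is too complex to code as a whole. The threshold rule here is illustrative; the paper's masking-function classification is not reproduced.

```python
import numpy as np

def quadtree_blocks(img, threshold, min_size=4):
    """Recursively split a square image into blocks: a block splits into
    four quadrants while its pixel variance exceeds `threshold` and its
    side is larger than `min_size`."""
    blocks = []

    def split(y, x, size):
        block = img[y:y + size, x:x + size]
        if size > min_size and block.var() > threshold:
            h = size // 2
            for dy in (0, h):
                for dx in (0, h):
                    split(y + dy, x + dx, h)
        else:
            blocks.append((y, x, size))

    split(0, 0, img.shape[0])
    return blocks

# A flat image stays one block; a high-variance one splits down to min_size.
flat = np.zeros((16, 16))
busy = np.arange(256, dtype=float).reshape(16, 16) ** 2
flat_blocks = quadtree_blocks(flat, threshold=10.0)
busy_blocks = quadtree_blocks(busy, threshold=10.0)
```

This adaptivity is what lets fractal coders spend large range blocks on smooth regions and small ones on detailed regions, improving the rate-distortion trade-off.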
A chaos-based cryptosystem for fractal image coding is proposed. The Rényi chaotic map is employed to determine the order in which the range blocks are processed and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fractal image coding followed by the Advanced Encryption Standard, our scheme offers a higher sensitivity to both plaintext and ciphertext at a comparable operating efficiency. The keystream generated by the Rényi chaotic map passes the randomness tests set by the United States National Institute of Standards and Technology, and so the proposed scheme is sensitive to the key.
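The keystream-masking step can be sketched with a Rényi-style map x → βx (mod 1) with non-integer β; the parameter values and byte-quantization rule here are illustrative assumptions, not the paper's key schedule:

```python
def renyi_keystream(n, x0=0.123456789, beta=3.7, bits=8):
    """Keystream bytes from iterating a Renyi-style map x -> beta*x mod 1;
    x0 and beta play the role of the secret key."""
    x = x0
    out = []
    for _ in range(n):
        x = (beta * x) % 1.0
        out.append(int(x * (1 << bits)) % (1 << bits))
    return out

def mask(data, key):
    """XOR-mask the encoded sequence; applying it twice restores the data."""
    return [d ^ k for d, k in zip(data, key)]

data = list(range(32))            # stand-in for the fractal code stream
key = renyi_keystream(len(data))
cipher = mask(data, key)
plain = mask(cipher, key)
```

In the actual scheme the same map also permutes the range-block processing order, so the ciphertext depends on both the image content and the key.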
A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is automatically determined by the observed data, and is able to implement multi-resolution analysis like the wavelet transform. The algorithm is suitable for analyzing non-stationary data and can effectively decorrelate the observed data. Then, by discussing the applications of EDD to image compression, the paper presents a 2-dimensional data decomposition framework and makes some modifications to the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is more suitable for non-stationary image data compression.
This paper utilizes spatial texture correlation and the intelligent classification algorithm (ICA) search strategy to speed up the encoding process and improve the bit rate for fractal image compression. Texture features are among the most important properties for the representation of an image; the entropy and the maximum entry of co-occurrence matrices are used here to represent texture features. For a range block, the domain blocks associated with neighbouring range blocks that have similar texture features can be searched. In addition, domain blocks with similar texture features are searched in the ICA search process. Experiments show that, in comparison with some typical methods, the proposed algorithm significantly speeds up the encoding process and achieves a higher compression ratio, with a slight diminution in the quality of the reconstructed image; in comparison with a spatial correlation scheme, the proposed scheme spends much less encoding time while the compression ratio and the quality of the reconstructed image are almost the same.
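The two texture descriptors named in the abstract — entropy and the maximum entry of the gray-level co-occurrence matrix (GLCM) — can be computed as follows (a single horizontal displacement is used here; the paper does not specify its displacement set):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    normalized to a probability distribution."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(img, levels):
    """Entropy and maximum entry of the co-occurrence matrix."""
    p = glcm(img, levels)
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))
    return entropy, p.max()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
ent, max_entry = texture_features(img, levels=4)
```

Blocks with similar (entropy, maximum entry) pairs are texture-compatible, which is what lets the encoder restrict its domain-block search to a small candidate set.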
The paper presents a class of nonlinear adaptive wavelet transforms for lossless image compression. In the update step of the lifting scheme, different operators are chosen according to the local gradient of the original image. A nonlinear morphological predictor follows the adaptive update lifting, resulting in fewer large wavelet coefficients near edges and hence a reduced coding cost. The nonlinear adaptive wavelet transforms also allow perfect reconstruction without any overhead cost. Experimental results show lower entropy for the adaptively transformed images than for the non-adaptive case, and great potential applicability in lossless image compression.
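The perfect-reconstruction property of lifting is easy to see in a minimal integer (Haar-like) predict/update pair. This is a fixed, non-adaptive baseline; the paper's scheme would additionally choose the update operator from the local gradient:

```python
import numpy as np

def lifting_forward(x):
    """One level of integer lifting: predict the odd samples from the
    even ones (detail d), then update the evens (approximation s)."""
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - even            # predict step
    s = even + d // 2         # update step (integer, hence lossless)
    return s, d

def lifting_inverse(s, d):
    """Undo the steps in reverse order; integer arithmetic makes the
    round trip exactly reversible with no overhead."""
    even = s - d // 2
    odd = d + even
    x = np.empty(even.size + odd.size, dtype=even.dtype)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([5, 7, 2, 2, 9, 1, 0, 4])
s, d = lifting_forward(x)
rec = lifting_inverse(s, d)
```

Because each lifting step is inverted by simply subtracting what was added, any data-dependent choice of operator stays invertible as long as the synthesis side can recompute the same decision — which is why the adaptive transform needs no side information.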
文摘To utilize residual redundancy to reduce the error induced by fading channels and decrease the complexity of the field model to describe the probability structure for residual redundancy, a simplified statistical model for residual redundancy and a low complexity joint source-channel decoding(JSCD) algorithm are proposed. The complicated residual redundancy in wavelet compressed images is decomposed into several independent 1-D probability check equations composed of Markov chains and it is regarded as a natural channel code with a structure similar to the low density parity check (LDPC) code. A parallel sum-product (SP) and iterative JSCD algorithm is proposed. Simulation results show that the proposed JSCD algorithm can make full use of residual redundancy in different directions to correct errors and improve the peak signal noise ratio (PSNR) of the reconstructed image and reduce the complexity and delay of JSCD. The performance of JSCD is more robust than the traditional separated encoding system with arithmetic coding in the same data rate.
文摘The better compression rate can be achieved by the traditional vector quantization (VQ) method, and the quality of the recovered image can also be accepted. But the decompressed image quality can not be promoted efficiently, so how to balance the image compression rate and image recovering quality is an important issue, in this paper, an image is transformed by discrete wavelet transform (DWT) to generate its DWT transformed image which can be compressed by the VQ method further. Besides, we compute the values between the DWT transformed image and decompressed DWT transformed image as the difference matrix which is the adjustable basis of the decompressed image quality. By controlling the deviation of the difference matrix, there can be nearly Iossless compression for the VQ method. Experimental results show that when the number of compressed bits by our method is equal to the number of those bits compressed by the VQ method, the quality of our recovered image is better. Moreover, the proposed method has more compression capability comparing with the VQ scheme.
基金Science and Technology Project of Guangdong Province(2006A10201003) 2005 Jinan University StartupProject(51205067) Soft Science Project of Guangdong Province(2006B70103011)
文摘Due to coarse quantization, block-based discrete cosine transform(BDCT) compression methods usually suffer from visible blocking artifacts at the block boundaries. A novel efficient de-blocking method in DCT domain is proposed. A specific criterion for edge detection is given, one-dimensional DCT is applied on each row of the adjacent blocks and the shifted block in smooth region, and the transform coefficients of the shifted block are modified by weighting the average of three coefficients of the block. Mean square difference of slope criterion is used to judge the efficiency of the proposed algorithm. Simulation results show that the new method not only obtains satisfactory image quality, but also maintains high frequency information.
基金This work was supported in part by the National Natural Science Foundation of China under Grants 71571091,71771112the State Key Laboratory of Synthetical Automation for Process Industries Fundamental Research Funds under Grant PAL-N201801the Excellent Talent Training Project of University of Science and Technology Liaoning under Grant 2019RC05.
文摘With the advent of the information security era,it is necessary to guarantee the privacy,accuracy,and dependable transfer of pictures.This study presents a new approach to the encryption and compression of color images.It is predicated on 2D compressed sensing(CS)and the hyperchaotic system.First,an optimized Arnold scrambling algorithm is applied to the initial color images to ensure strong security.Then,the processed images are con-currently encrypted and compressed using 2D CS.Among them,chaotic sequences replace traditional random measurement matrices to increase the system’s security.Third,the processed images are re-encrypted using a combination of permutation and diffusion algorithms.In addition,the 2D projected gradient with an embedding decryption(2DPG-ED)algorithm is used to reconstruct images.Compared with the traditional reconstruction algorithm,the 2DPG-ED algorithm can improve security and reduce computational complexity.Furthermore,it has better robustness.The experimental outcome and the performance analysis indicate that this algorithm can withstand malicious attacks and prove the method is effective.
基金supported in part by collaborative research with Toyota Motor Corporation,in part by ROIS NII Open Collaborative Research under Grant 21S0601,in part by JSPS KAKENHI under Grants 20H00592,21H03424.
文摘With the rapid development of artificial intelligence and the widespread use of the Internet of Things, semantic communication, as an emerging communication paradigm, has been attracting great interest. Taking image transmission as an example, from the semantic communication's view, not all pixels in the images are equally important for certain receivers. The existing semantic communication systems directly perform semantic encoding and decoding on the whole image, in which the region of interest cannot be identified. In this paper, we propose a novel semantic communication system for image transmission that can distinguish between Regions Of Interest (ROI) and Regions Of Non-Interest (RONI) based on semantic segmentation, where a semantic segmentation algorithm is used to classify each pixel of the image and distinguish ROI and RONI. The system also enables high-quality transmission of ROI with lower communication overheads by transmissions through different semantic communication networks with different bandwidth requirements. An improved metric θPSNR is proposed to evaluate the transmission accuracy of the novel semantic transmission network. Experimental results show that our proposed system achieves a significant performance improvement compared with existing approaches, namely, existing semantic communication approaches and the conventional approach without semantics.
文摘The act of transmitting photos via the Internet has become a routine and significant activity.Enhancing the security measures to safeguard these images from counterfeiting and modifications is a critical domain that can still be further enhanced.This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images.The main goal of this work is to create a very effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs by means of image compression.This paper introduces a content-based image authentication mechanism that is suitable for usage across an untrusted network and resistant to data loss during transmission.By employing scale attributes and a key-dependent parametric Long Short-Term Memory(LSTM),it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions.Furthermore,the successful implementation of transmitting biometric data in a compressed format over a wireless network has been accomplished.For applications involving the transmission and sharing of images across a network.The suggested technique utilizes the scalability of a structural digital signature to attain a satisfactory equilibrium between security and picture transfer.An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by sharing the processing of responsibilities.This scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss.This approach employs multi-scale characteristics to improve the resistance of signatures against image deterioration.The proposed system attained a Gaussian noise value of 98%and a rotation accuracy surpassing 99%.
Fund: the National Natural Science Foundation of China (No. 11803036) and the Climbing Program of Changchun University (No. ZKP202114).
Abstract: Multispectral image compression and encryption algorithms commonly suffer from low compression efficiency, lack of synchronization between the compression and encryption processes, and degradation of the intrinsic image structure. A novel approach is proposed to address these issues. First, a chaotic sequence is generated using the three-dimensional Lorenz chaotic map and XORed with each spectral band of the multispectral image to complete the initial encryption. Then, a two-dimensional lifting 9/7 wavelet transform is applied to the processed image. Next, a key-sensitive Arnold scrambling technique is applied to the resulting low-frequency image, which effectively eliminates spatial redundancy in the multispectral image while enhancing the encryption. To further optimize the compression and encryption, fast Tucker decomposition is applied to the wavelet sub-band tensor, removing both spectral redundancy and residual spatial redundancy. Finally, the core tensor and factor matrices obtained from the decomposition are entropy encoded, with real-time chaotic encryption applied during the encoding, effectively integrating compression and encryption. The results show that the proposed algorithm is suitable for scenarios with demanding compression and encryption requirements, and it provides valuable insights for the development of compression and encryption in the multispectral field.
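The initial step above, XORing each spectral band with a Lorenz-derived chaotic sequence, can be sketched as follows. The integration step, initial conditions, burn-in length, and byte quantization are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def lorenz_keystream(n, x0=0.1, y0=0.2, z0=0.3, dt=0.002,
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0, burn=1000):
    """Generate n key bytes from the Lorenz system via Euler integration.
    The quantization of the state to bytes is an assumed construction."""
    x, y, z = x0, y0, z0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n + burn):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        if i >= burn:  # discard transient so the state is on the attractor
            out[i - burn] = int(abs(x) * 1e6) % 256
    return out

def xor_band(band, key0=0.1):
    """XOR one uint8 spectral band with the chaotic keystream.
    XOR is involutive, so the same call also decrypts."""
    flat = band.ravel()
    ks = lorenz_keystream(flat.size, x0=key0)
    return (flat ^ ks).reshape(band.shape)
```

Because XOR is its own inverse, applying `xor_band` twice with the same key recovers the original band, which is what makes this usable as the symmetric initial-encryption stage.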
Abstract: In image transmission, the well-known JPEG and JPEG-2000 compression methods need more transmission time because it is difficult for them to compress an image at a low compression rate. The recently proposed compressed sensing (CS) theory has attracted great attention because it can compress an image at a low compression rate while the original image can be reconstructed almost perfectly from only a few compressed measurements. Here, CS theory is used to transmit high-resolution astronomical images in a simulated satellite-to-Earth communication environment. Numerical experimental results show that CS can effectively reduce the image transmission and reconstruction time. Even at a very low compression rate, it still recovers a higher-quality astronomical image than the JPEG and JPEG-2000 compression methods.
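The CS principle the abstract relies on, recovering a sparse signal from far fewer random measurements than its length, can be illustrated with a toy example. Orthogonal matching pursuit is used here as a stand-in; the paper does not specify which reconstruction algorithm it uses, and the dimensions and seed are arbitrary:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x."""
    r = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ sol           # update residual
    x[support] = sol
    return x

# Toy demo: a 3-sparse signal of length 64 from 32 random Gaussian measurements.
rng = np.random.default_rng(1)
n, m, k = 64, 32, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k)
```

Here only 32 of 64 samples are transmitted, yet the sparse signal is recovered; in the paper's setting the sparsifying role is played by the astronomical image's transform-domain sparsity.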
Fund: supported in part by the National Natural Science Foundation of China (62371414, 61901406), the Hebei Natural Science Foundation (F2020203025), the Young Talent Program of Universities and Colleges in Hebei Province (BJ2021044), the Hebei Key Laboratory Project (202250701010046), and the Central Government Guides Local Science and Technology Development Fund Projects (216Z1602G).
Abstract: Plug-and-play priors are popular for solving ill-posed imaging inverse problems. Recent efforts indicate that the convergence guarantees of imaging algorithms using plug-and-play priors rely on the assumption of bounded denoisers. However, the boundedness of existing plugged Gaussian denoisers has not been proven explicitly. To bridge this gap, we detail a novel provably bounded denoiser, termed BMDual, which combines a trainable denoiser using dual tight frames with the well-known block-matching and 3D filtering (BM3D) denoiser. We incorporate the multiple dual frames used by BMDual into a novel regularization model induced by a solver. The proposed regularization model is applied to compressed sensing magnetic resonance imaging (CSMRI). We theoretically establish the bound of the BMDual denoiser and the bounded gradient of the CSMRI data-fidelity function, and we further demonstrate that the proposed CSMRI algorithm converges. Experimental results confirm the good convergence behavior and the effectiveness of the proposed algorithm.
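A generic plug-and-play iteration of the kind discussed above can be sketched as below. Soft thresholding stands in for the BMDual/BM3D denoiser, and the operator, step size, and iteration count are illustrative assumptions; the point is only the structure of a data-fidelity gradient step followed by a plugged-in denoiser:

```python
import numpy as np

def soft(x, t=0.05):
    """Soft thresholding: a simple bounded operator standing in for the
    paper's bounded denoiser."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def pnp_ista(y, A, denoise, step=0.5, iters=50):
    """Plug-and-play ISTA sketch: gradient step on 0.5||Ax - y||^2,
    then apply the plugged denoiser in place of a proximal operator."""
    x = A.T @ y
    for _ in range(iters):
        grad = A.T @ (A @ x - y)       # gradient of the data-fidelity term
        x = denoise(x - step * grad)   # plugged denoiser replaces the prox
    return x
```

With `A` the identity, the iteration contracts geometrically to the soft-thresholded fixed point, which is the kind of convergence behavior the bounded-denoiser assumption is meant to guarantee in general.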
Abstract: In this paper, we propose compression-based multiple color target detection for practical near real-time optical pattern recognition applications. By compressing the color images to their utmost, the speed and storage efficiency of the system are greatly increased. We use the powerful fringe-adjusted joint transform correlation technique to detect compression-based multiple targets in color images. The color image is decomposed into its three fundamental color component images (red, green, blue), which are processed separately by three-channel correlators. The outputs of the three channels are then combined into a single correlation output. To eliminate the false alarms and zero-order terms due to multiple desired and undesired targets in a scene, we use the reference shifted phase-encoded and reference phase-encoded techniques. The performance of the proposed compression-based technique is assessed through many computer simulation tests on images polluted by strong additive Gaussian and salt-and-pepper noise as well as on reference-occluded images. The robustness of the scheme is demonstrated for severely compressed images (up to a 94% ratio), strong noise densities (up to 0.5), and large reference occlusions (up to 75%).
Fund: Project (61172184) supported by the National Natural Science Foundation of China; Project (200902482) supported by the China Postdoctoral Science Foundation Specially Funded Project; Project (12JJ6062) supported by the Natural Science Foundation of Hunan Province, China.
Abstract: A blind digital image forensic method for detecting copy-paste forgery between JPEG images is proposed. Two copy-paste tampering scenarios are introduced first: the tampered image is saved either in an uncompressed format or in a JPEG compressed format. The proposed detection method is then analyzed and simulated for all cases of the two tampering scenarios. The tampered region is detected by computing the averaged sum of absolute difference (ASAD) images between the examined image and resaved JPEG compressed versions at different quality factors. The experimental results show the advantages of the proposed method: the capability of detecting small and/or multiple tampered regions, simple computation, and hence fast processing.
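The ASAD computation described above can be sketched as follows. The `recompress` callable stands in for an actual JPEG codec at quality `q`, and the block size and per-block aggregation are assumptions about the paper's exact layout:

```python
import numpy as np

def asad_map(img, recompress, qualities, block=8):
    """Per-block averaged sum of absolute differences (ASAD) between an
    image and its resaves at several quality factors. Blocks whose
    compression history differs from the rest (pasted regions) tend to
    stand out in this map."""
    h, w = img.shape
    acc = np.zeros((h // block, w // block))
    for q in qualities:
        diff = np.abs(img.astype(np.int32) - recompress(img, q).astype(np.int32))
        for i in range(h // block):
            for j in range(w // block):
                acc[i, j] += diff[i * block:(i + 1) * block,
                                  j * block:(j + 1) * block].mean()
    return acc / len(qualities)
```

In practice `recompress` would encode and decode with a real JPEG library; here any image-to-image callable works, which also makes the aggregation easy to test in isolation.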
Abstract: The isotherm is an important feature of infrared satellite cloud images (ISCI) that can directly reveal substantial information about cloud systems. Isotherm extraction removes redundant information and therefore helps to compress ISCI. In this paper, an isotherm extraction method is presented. The main aggregate of clouds is segmented based on mathematical morphology, and the T algorithm and IP algorithm are then applied to extract the isotherms from the main aggregate of clouds. A concrete example of isotherm extraction on an IBM SP2 is described. The result shows that this is a highly efficient algorithm that can be used in feature extraction from infrared images for weather forecasting.
Fund: Supported by the Special Fund for Scientific Research of the Shaanxi Education Department (No. 2010JK463) and the Shaanxi Natural Science Foundation (2011JE012).
Abstract: [Objective] The aim was to present a new image compression technology so that images in the rape crop monitoring system in the Qinling Mountains can be stored in a smaller space and transmitted at a lower bit rate while guaranteeing image quality. [Method] In the proposal, the color image was divided into brightness images in the three fundamental colors, followed by sub-image division and DCT treatment. Then, the transform-domain coefficients were quantized, encoded, and compressed with Huffman coding. Finally, decompression was conducted through the inverse process and the decompressed images were compared with the originals. [Result] The simulation results show that when the compression ratio of the color image of rape crops was 11.972 3:1, the differences between the decompressed images and the source images could not be distinguished with the naked eye; when the ratio was as high as 53.565 6:1, the PSNR was still above 30 dB, the encoding efficiency was over 0.78, and the redundancy was less than 0.22. [Conclusion] The results indicate that the proposed color image compression technology can achieve a high compression ratio while maintaining good image quality. Image encoding quality and the decompressed images achieved good results, fully meeting the requirements for image storage and transmission in the rape crop monitoring system in the Qinling Mountains.
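The core of the [Method] pipeline above, per-channel block DCT followed by uniform quantization of the transform coefficients and the inverse DCT, can be sketched as below. The Huffman stage is lossless and therefore omitted, and the quantization step `q` is an illustrative assumption rather than the paper's table:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C

def block_dct_quant_roundtrip(img, q=16, n=8):
    """For one brightness channel: 8x8 block DCT, uniform quantization of
    the coefficients (the lossy step), inverse DCT."""
    C = dct_matrix(n)
    out = np.empty_like(img, dtype=np.float64)
    for i in range(0, img.shape[0], n):
        for j in range(0, img.shape[1], n):
            B = img[i:i + n, j:j + n].astype(np.float64)
            coef = C @ B @ C.T                 # forward 2-D DCT
            coef = np.round(coef / q) * q      # uniform quantization
            out[i:i + n, j:j + n] = C.T @ coef @ C  # inverse 2-D DCT
    return out
```

Larger `q` discards more coefficient precision and raises the compression ratio at the cost of PSNR, which is the trade-off the [Result] figures quantify.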
Fund: Project 60571049 supported by the National Natural Science Foundation of China.
Abstract: By investigating the limitations of existing wavelet-tree-based image compression methods, we propose a novel wavelet fractal image compression method. Briefly, initial errors are assigned according to the importance of the wavelet coefficients in each frequency subband: higher-frequency subbands are given larger initial errors. As a result, the sizes of the subband blocks and super blocks change with the initial errors, and the matching sizes between subband blocks and super blocks change with the permitted errors and compression rates. Systematic analyses are performed, and the experimental results demonstrate that the proposed method performs satisfactorily, clearly increasing the compression rate and encoding speed without reducing the SNR or the quality of the decoded images. Simulation results show that our method is superior to traditional wavelet-tree-based fractal image compression methods.
Abstract: Many classical vector quantization (VQ) encoding algorithms for image compression that obtain the globally optimal solution have computational complexity O(N). A pure quantum VQ encoding algorithm with a success probability near 100% has been proposed, which performs approximately 45√N operations. In this paper, a hybrid quantum VQ encoding algorithm combining the classical method and the quantum algorithm is presented. Its number of operations is less than √N for most images, making it more efficient than the pure quantum algorithm.
Abstract: Based on Jacquin's work, this paper presents an adaptive block-based fractal image coding scheme. First, masking functions are used to classify range blocks and weight the mean square error (MSE) of images. Second, an adaptive block partition scheme is introduced by developing the quadtree partition method. Third, a piecewise uniform quantization strategy is applied to quantize the luminance shifting. Finally, experimental results are shown and compared with those reported by Jacquin and Lu to verify the validity of the proposed methods.
Fund: Project supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (Grant No. CityU123009).
Abstract: A chaos-based cryptosystem for fractal image coding is proposed. The Renyi chaotic map is employed to determine the order in which the range blocks are processed and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fractal image coding followed by the Advanced Encryption Standard, our scheme offers higher sensitivity to both plaintext and ciphertext at a comparable operating efficiency. The keystream generated by the Renyi chaotic map passes the randomness tests set by the United States National Institute of Standards and Technology, and the proposed scheme is therefore sensitive to the key.
Fund: This project was supported by the National Natural Science Foundation of China (60532060), the Hainan Education Bureau Research Project (Hjkj200602), and the Hainan Natural Science Foundation (80551).
Abstract: A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is determined automatically by the observed data and can implement multi-resolution analysis like the wavelet transform. The algorithm is suitable for analyzing non-stationary data and can effectively remove the correlation in observed data. Discussing the application of EDD to image compression, the paper then presents a two-dimensional data decomposition framework and modifies the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is more suitable for non-stationary image data compression.
Fund: supported by the National Natural Science Foundation of China (Grant Nos. 60573172 and 60973152), the Superior University Doctor Subject Special Scientific Research Foundation of China (Grant No. 20070141014), and the Natural Science Foundation of Liaoning Province of China (Grant No. 20082165).
Abstract: This paper utilizes spatial texture correlation and the intelligent classification algorithm (ICA) search strategy to speed up the encoding process and improve the bit rate of fractal image compression. Texture features are among the most important properties for representing an image. The entropy and the maximum entry of co-occurrence matrices are used to represent the texture features of an image. For a range block, the domain blocks of neighbouring range blocks with similar texture features can be searched first, and domain blocks with similar texture features are also searched in the ICA search process. Experiments show that, in comparison with some typical methods, the proposed algorithm significantly speeds up the encoding process and achieves a higher compression ratio, with a slight reduction in the quality of the reconstructed image; in comparison with a spatial correlation scheme, the proposed scheme spends much less encoding time while the compression ratio and the quality of the reconstructed image are almost the same.
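The co-occurrence-matrix entropy used above as a texture feature can be sketched as follows; the number of gray levels and the displacement vector `(dx, dy)` are illustrative assumptions:

```python
import numpy as np

def glcm_entropy(block, levels=8, dx=1, dy=0):
    """Entropy of the gray-level co-occurrence matrix (GLCM) of a block.
    Smooth blocks score near 0; textured blocks score higher, so blocks
    can be grouped by texture before the domain-block search."""
    g = (block.astype(np.int32) * levels // 256).clip(0, levels - 1)
    P = np.zeros((levels, levels))
    h, w = g.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[g[i, j], g[i + dy, j + dx]] += 1   # count co-occurring pairs
    P /= P.sum()
    nz = P[P > 0]
    return float(-(nz * np.log2(nz)).sum())
```

Restricting the search to domain blocks whose entropy is close to the range block's is what trades a small amount of reconstruction quality for a much shorter encoding time.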
Fund: Supported by the National Natural Science Foundation of China (69983005).
Abstract: This paper presents a class of nonlinear adaptive wavelet transforms for lossless image compression. In the update step of the lifting scheme, different operators are chosen according to the local gradient of the original image. A nonlinear morphological predictor follows the adaptive update lifting, yielding fewer large wavelet coefficients near edges and thereby reducing the coding cost. The nonlinear adaptive wavelet transforms also allow perfect reconstruction without any overhead cost. Experimental results show that the adaptively transformed images have lower entropy than in the non-adaptive case, indicating great potential for lossless image compression.
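The paper's predict/update operators are data-dependent and not specified in the abstract; as a fixed baseline with the same lifting structure, an integer-to-integer LeGall 5/3 lifting step (with periodic boundary extension as an assumption) looks like this:

```python
import numpy as np

def lgt53_lift(x):
    """One level of the integer LeGall 5/3 lifting wavelet on an
    even-length signal: split into even (s) and odd (d) samples,
    predict the odds from the evens, then update the evens."""
    s = x[0::2].astype(np.int64)
    d = x[1::2].astype(np.int64)
    d -= (s + np.roll(s, -1)) >> 1        # predict step (floor division)
    s += (np.roll(d, 1) + d + 2) >> 2     # update step
    return s, d

def lgt53_inverse(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    s = s - ((np.roll(d, 1) + d + 2) >> 2)
    d = d + ((s + np.roll(s, -1)) >> 1)
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x
```

Because each lifting step is inverted exactly by subtracting the same integer expression, reconstruction is perfect with no side information, the same property the adaptive transforms above preserve while choosing their update operator from the local gradient.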