Funding: This project was supported by the National Natural Science Foundation of China (60532060), the Hainan Education Bureau Research Project (Hjkj200602), and the Hainan Natural Science Foundation (80551).
Abstract: A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is determined automatically by the observed data and can implement multi-resolution analysis in the same way as the wavelet transform. The algorithm is suitable for analyzing non-stationary data and can effectively remove the correlation within the observed data. The paper then discusses applications of EDD to image compression, presents a two-dimensional data decomposition framework, and modifies the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is better suited to compressing non-stationary image data.
Abstract: In this paper, three techniques for lossless compression of classified satellite cloud images are presented: line run coding, quadtree DF (depth-first) representation, and H coding. The first two were developed by other authors and the third is our own. A comparison of their compression rates is given at the end of this paper. Further application of these image compression techniques to satellite data and other meteorological data looks promising.
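The line run coding mentioned above amounts to run-length encoding of the label rows of a classified image. The following Python sketch, with illustrative function names, shows the idea; it is not the paper's exact implementation.

```python
import numpy as np

def line_run_encode(row):
    """Run-length encode one image line as (label, run_length) pairs."""
    runs = []
    start = 0
    for i in range(1, len(row) + 1):
        if i == len(row) or row[i] != row[start]:
            runs.append((int(row[start]), i - start))
            start = i
    return runs

def line_run_decode(runs):
    """Rebuild the line from its (label, run_length) pairs."""
    return np.concatenate([np.full(n, v, dtype=np.uint8) for v, n in runs])

# Classified cloud images contain few labels and long runs, which is why
# run-length style coding is effective and lossless.
line = np.array([0, 0, 0, 2, 2, 1, 1, 1, 1, 0], dtype=np.uint8)
runs = line_run_encode(line)          # [(0, 3), (2, 2), (1, 4), (0, 1)]
assert np.array_equal(line_run_decode(runs), line)
```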
Funding: Supported in part by the National Key R&D Program of China (2019YFB1406504) and the National Natural Science Foundation of China (U1836108, U1936216, 62002197).
Abstract: Reversible data hiding in encrypted images (RDHEI) is a widely used technique for privacy protection and has been developed for many applications that require high confidentiality, authentication, and integrity. Existing RDHEI methods cannot achieve a high embedding rate while still recovering the original image losslessly. Moreover, the ciphertext form of the encrypted image in the RDHEI framework easily attracts the attention of attackers. This paper proposes a reversible data hiding algorithm based on image camouflage encryption and bit-plane compression. A camouflage encryption algorithm is used to transform a secret image into another meaningful target image, which conceals both the secret image and the encryption behavior through a "plaintext to plaintext" transformation. An edge optimization method based on a prediction algorithm is designed to improve the quality of the camouflage encryption. Reversible data hiding based on bit-plane-level compression, which increases the redundancy of the bit planes through Gray coding, is used to embed the watermark in the camouflage image. The experimental results show the superior performance of the method in terms of embedding capacity and image quality.
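To illustrate the bit-plane step described above, the sketch below converts an 8-bit image to its Gray-code representation and splits it into bit planes; the function names are illustrative and the compression and embedding stages are omitted.

```python
import numpy as np

def to_gray_code(img):
    """Convert 8-bit pixel values to their Gray-code representation."""
    return img ^ (img >> 1)

def bit_planes(img):
    """Split an 8-bit image into 8 binary bit planes (MSB first)."""
    return [(img >> b) & 1 for b in range(7, -1, -1)]

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
planes = bit_planes(to_gray_code(img))
# Gray coding makes neighbouring values differ in fewer bits, so each plane
# tends to contain longer uniform runs and therefore compresses better.
```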
Funding: The work was funded by the National Natural Science Foundation of China (Grant Nos. 61572089, 61502399, 61633005), the Chongqing Research Program of Basic Research and Frontier Technology (Grant No. cstc2017jcyjBX0008), the Graduate Student Research and Innovation Foundation of Chongqing (Grant No. CYB17026), the Chongqing Postgraduate Education Reform Project (Grant No. yjg183018), the Chongqing University Postgraduate Education Reform Project (Grant No. cquyjg18219), and the Fundamental Research Funds for the Central Universities (Grant Nos. 106112017CDJQJ188830, 106112017CDJXY180005).
Abstract: To fulfill the requirements of data security in environments with nonequivalent resources, a high-capacity data hiding scheme in encrypted images based on compressive sensing (CS) is proposed by fully utilizing the adaptability of CS to nonequivalent resources. The original image is divided into two parts: one part is encrypted with a traditional stream cipher; the other part is converted to prediction errors and then encrypted based on CS, which simultaneously vacates room for embedding. The collected non-image data is first encrypted with a simple stream cipher. For data security management, the encrypted non-image data is then embedded into the encrypted image, and a scrambling operation is used to further improve security. Finally, the original image and the non-image data can be separably recovered and extracted according to requests from valid users with different access rights. Experimental results demonstrate that the proposed scheme outperforms other data hiding methods based on CS and is more suitable for nonequivalent resources.
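A minimal sketch of the two preprocessing branches described above: one image part passes through a toy keystream XOR standing in for the stream cipher, while the other is turned into prediction errors before CS encoding. The predictor choice, the keystream generator, and all names are assumptions for illustration only.

```python
import numpy as np

def prediction_errors(img):
    """Left-neighbour predictor: keep the first column, store horizontal residuals."""
    err = img.astype(np.int16)
    err[:, 1:] = img[:, 1:].astype(np.int16) - img[:, :-1].astype(np.int16)
    return err

def stream_encrypt(data, key):
    """Toy stream cipher: XOR bytes with a key-seeded pseudo-random stream
    (a real scheme would use a cryptographic keystream)."""
    rng = np.random.default_rng(key)
    stream = rng.integers(0, 256, size=data.size, dtype=np.uint8)
    return (data.reshape(-1) ^ stream).reshape(data.shape)

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
part_a, part_b = img[:4], img[4:]
cipher_a = stream_encrypt(part_a, key=42)   # stream-cipher branch
residual_b = prediction_errors(part_b)      # CS branch works on the sparser residuals
```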
Abstract: A sixteen-tree method for data compression of bilevel images is described. This method has high efficiency, no information loss during compression, and is easy to implement.
Abstract: The aggregation of data in recent years has been expanding at an exponential rate. There are various data-generating sources responsible for this tremendous growth rate, including social media, video camera footage, wireless and wired sensor network measurements, stock market and other financial transaction data, supermarket transaction data, and so on. Such data may be high dimensional and big in volume, value, velocity, variety, and veracity. Hence one of the crucial challenges is the storage, processing, and extraction of relevant information from the data. In the special case of image data, image compression techniques may be employed to reduce the dimension and volume of the data so that it is convenient to process and analyze. In this work, we examine a proof-of-concept multiresolution analysis that uses wavelet transforms, a popular mathematical and analytical framework in signal processing and representation, and we study its application to compressing image data in wireless sensor networks. The proposed approach consists of applying wavelet transforms, threshold detection, quantization, data encoding, and finally the inverse transforms. The work specifically focuses on multi-resolution analysis with wavelet transforms by comparing three wavelets at five decomposition levels. Simulation results are provided to demonstrate the effectiveness of the methodology.
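The following sketch, using the PyWavelets package, illustrates the decompose / threshold / reconstruct pipeline described above. The wavelet ("db2"), the five-level setting, and the threshold value are illustrative choices rather than the ones compared in the paper.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_compress(img, wavelet="db2", level=5, thresh=20.0):
    """Decompose, hard-threshold small detail coefficients, reconstruct."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    kept = [coeffs[0]]  # keep the approximation band untouched
    for details in coeffs[1:]:
        kept.append(tuple(pywt.threshold(d, thresh, mode="hard") for d in details))
    return pywt.waverec2(kept, wavelet)

img = np.random.randint(0, 256, (256, 256)).astype(float)
recon = wavelet_compress(img)
# In a real sensor node the thresholded coefficients would be quantized and
# entropy-encoded before transmission; the inverse transform runs at the sink.
```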
Abstract: Aiming at the characteristics of seismic exploration signals, the paper studies image coding technology, coding standards, and algorithms, and puts forward a new mixed-coding scheme for seismic data compression. Based on it, a set of seismic data compression software has been developed.
Funding: This research was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021R1A6A1A03039493), in part by the NRF grant funded by the Korea government (MSIT) (NRF-2022R1A2C1004401), and in part by the 2022 Yeungnam University Research Grant.
Abstract: The exponential growth of data necessitates an effective data storage scheme, which helps to manage the large quantity of data effectively. To accomplish this, a Deoxyribonucleic Acid (DNA) digital data storage process can be employed, which encodes and decodes binary data to and from synthesized strands of DNA. Vector quantization (VQ) is a commonly employed scheme for image compression, and optimal codebook generation is an effective process for reaching maximum compression efficiency. This article introduces a new DNA Computing with Water Strider Algorithm based Vector Quantization (DNAC-WSAVQ) technique for data storage systems. The proposed DNAC-WSAVQ technique encodes data using DNA computing and then compresses it for effective data storage. The DNAC-WSAVQ model initially performs DNA encoding on the input images to generate a binary encoded form. In addition, a Water Strider Algorithm with Linde-Buzo-Gray (WSA-LBG) model is applied for the compression process, and thereby the storage area can be considerably minimized. In order to generate an optimal codebook for LBG, the WSA is applied to it. The performance validation of the DNAC-WSAVQ model is carried out and the results are inspected under several measures. The comparative study highlights the improved outcomes of the DNAC-WSAVQ model over existing methods.
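The WSA-optimized step is not reproduced here; the sketch below shows a plain Linde-Buzo-Gray (generalized Lloyd) codebook iteration on flattened image blocks, which is the baseline that the WSA is used to improve.

```python
import numpy as np

def lbg_codebook(vectors, size=16, iters=20, seed=0):
    """Plain LBG / generalized Lloyd iteration: assign vectors, then re-average."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        for k in range(size):
            members = vectors[nearest == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, nearest

# 4x4 image blocks flattened to 16-dimensional training vectors.
blocks = np.random.randint(0, 256, (500, 16)).astype(float)
codebook, indices = lbg_codebook(blocks)
# Each block is then stored as a short codebook index instead of 16 pixels.
```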
Funding: This project was supported by the National Natural Science Foundation (No. 69972027).
Abstract: With the advances in display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of stimulating 3-D perception is to use stereo pairs, a pair of images of the same scene acquired from different perspectives. Since there is inherent redundancy between the images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. Existing techniques generally use block-based disparity compensation. To obtain a higher compression ratio, this paper employs wavelet-based mixed-resolution coding together with SPT-based disparity compensation to compress the stereo image data. Mixed-resolution coding is a perceptually justified technique achieved by presenting one eye with a low-resolution image and the other with a high-resolution image. Psychophysical experiments show that stereo image pairs with one high-resolution image and one low-resolution image provide almost the same stereo depth as a stereo pair with two high-resolution images. By combining mixed-resolution coding and SPT-based disparity compensation, the reference (left) high-resolution image can be compressed by a hierarchical wavelet transform followed by vector quantization and a Huffman encoder. After two levels of wavelet decomposition, a subspace projection technique using fixed-block-size disparity compensation estimation is applied to the low-resolution right image and the low-resolution left image. At the decoder, the low-resolution right subimage is estimated using the disparity from the low-resolution left subimage. A full-size reconstruction is obtained by upsampling by a factor of 4 and reconstructing with the synthesis low-pass filter. Finally, experimental results are presented, which show that our scheme achieves a PSNR gain of about 0.92 dB compared with current block-based disparity compensation coding techniques.
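As a simplified illustration of block-based disparity compensation (not the SPT-based method of the paper), the following sketch performs a full-search horizontal disparity estimate per block using the sum of absolute differences.

```python
import numpy as np

def block_disparity(left, right, block=8, max_disp=16):
    """Full-search horizontal disparity per block of the right view, minimizing SAD."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = right[y:y + block, x:x + block].astype(int)
            best, best_d = None, 0
            for d in range(min(max_disp, x) + 1):
                cand = left[y:y + block, x - d:x - d + block].astype(int)
                sad = np.abs(ref - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp

left = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
right = np.roll(left, 3, axis=1)     # toy pair with a known 3-pixel shift
print(block_disparity(left, right))  # blocks away from the wrapped border report 3
```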
Funding: Supported by the National High Technology Research and Development Program of China (Grant No. 863-2-5-1-13B).
Abstract: Multispectral time delay and integration charge coupled device (TDICCD) image compression requires a low-complexity encoder because it is usually completed on board, where energy and memory are limited. The Consultative Committee for Space Data Systems (CCSDS) has proposed an image data compression (CCSDS-IDC) algorithm, which is so far the most widely implemented in hardware; however, it cannot reduce spectral redundancy in multispectral images. In this paper, we propose a low-complexity improved CCSDS-IDC (ICCSDS-IDC)-based distributed source coding (DSC) scheme for multispectral TDICCD images consisting of a few bands. Our scheme is based on an ICCSDS-IDC approach that uses a bit plane extractor to parse the differences between the original image and its wavelet-transformed coefficients. The output of the bit plane extractor is encoded by a first-order entropy coder. A low-density parity-check based Slepian-Wolf (SW) coder is adopted to implement the DSC strategy. Experimental results on space multispectral TDICCD images show that the proposed scheme significantly outperforms the CCSDS-IDC-based coder in each band.
Funding: Supported by the National Science Foundation of China (60872109) and the Program for New Century Excellent Talents in University (NCET-06-0900).
Abstract: To obtain a high compression ratio as well as a high-quality reconstructed image, an effective image compression scheme named irregular segmentation region coding based on the spiking cortical model (ISRCS) is presented. This scheme is region-based and mainly focuses on two issues. First, an appropriate segmentation algorithm is developed to partition an image into irregular regions and tidy contours, where the crucial regions corresponding to objects are retained and many tiny parts are eliminated; the irregular regions and contours are then coded using different methods. The other issue is the coding of contours, for which an efficient and novel chain code is employed. The scheme seeks a compromise between the quality of the reconstructed image and the compression ratio. Experiments are conducted and the results show higher performance compared with other compression technologies in terms of higher reconstructed image quality, higher compression ratio, and lower time consumption.
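The paper's novel chain code is not specified in the abstract; the sketch below shows the standard 8-direction Freeman chain code on which such contour coders are based.

```python
# Standard 8-direction Freeman chain code; the paper's novel chain code is a
# refinement of this idea but is not reproduced here.
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(contour):
    """Encode an ordered contour (list of (row, col) points) as direction symbols."""
    codes = []
    for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
        codes.append(DIRS.index((r1 - r0, c1 - c0)))
    return codes

contour = [(0, 0), (0, 1), (1, 2), (2, 2), (2, 1), (2, 0)]
print(chain_code(contour))  # [0, 7, 6, 4, 4]
```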
Abstract: Communications capability can be a significant constraint on the utility of a spacecraft. While conventionally enhanced through the use of a larger transmitting or receiving antenna or through increased transmission power, communications capability can also be enhanced by incorporating more data in every unit of transmission. Model Based Transmission Reduction (MBTR) increases the mission utility of spacecraft by sending higher-level messages that rely on pre-shared (or, in some cases, co-transmitted) data. Because of this a priori knowledge, the amount of information contained in an MBTR message significantly exceeds the amount of information in a conventional message. MBTR has multiple levels of operation; the lowest, Model Based Data Transmission (MBDT), utilizes a pre-shared lower-resolution data frame, which is augmented in areas of significant discrepancy with data from the higher-resolution source. MBDT is examined in detail herein, and several approaches to minimizing the bandwidth required to convey the data needed to conform to a minimum level of accuracy are considered. Also considered are ways of minimizing transmission requirements when both a model and the change data required to attain a desired minimum discrepancy threshold must be transmitted. These possible solutions are compared with alternate transmission techniques, including several forms of image compression.
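A minimal sketch of the MBDT idea, under the assumption that blocks are transmitted whenever their mean absolute discrepancy from the pre-shared low-resolution model exceeds a threshold; the block size, upsampling rule, and threshold are illustrative, not the paper's.

```python
import numpy as np

def mbdt_patches(model_lowres, observed, block=16, max_err=8.0):
    """Select blocks whose mean absolute discrepancy from the shared model
    exceeds max_err; only these (position, pixels) pairs would be transmitted."""
    upsampled = np.kron(model_lowres, np.ones((2, 2)))  # shared low-res model, 2x upsample
    patches = []
    h, w = observed.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            obs = observed[y:y + block, x:x + block].astype(float)
            ref = upsampled[y:y + block, x:x + block]
            if np.abs(obs - ref).mean() > max_err:
                patches.append(((y, x), obs.astype(np.uint8)))
    return patches

observed = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
model = observed[::2, ::2]              # stand-in for the pre-shared frame
print(len(mbdt_patches(model, observed)), "blocks would be sent")
```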
Funding: Supported by the Chinese 863 Plan Program under Grant 2012AA121504.
Abstract: Based on raw data from spaceborne dispersive and interferometric imaging spectrometers, a set of quality evaluation metrics for compressed hyperspectral data is established in this paper. These quality evaluation metrics, which cover four aspects including compression statistical distortion, sensor performance evaluation, data application performance, and image quality, are suited to comprehensive and systematic analysis of the impact of lossy compression on the quality of spaceborne hyperspectral remote sensing data. Furthermore, the evaluation results are helpful for the selection and optimization of satellite data compression schemes.
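The full metric set is not reproduced here; the sketch below shows two commonly used statistical distortion measures for compressed hyperspectral cubes, PSNR and the mean spectral angle, as an illustration of the first aspect.

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two data cubes."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def mean_spectral_angle(original, compressed):
    """Average angle (radians) between per-pixel spectra of two (H, W, bands) cubes."""
    a = original.reshape(-1, original.shape[-1]).astype(float)
    b = compressed.reshape(-1, compressed.shape[-1]).astype(float)
    cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0)).mean()

cube = np.random.randint(0, 256, (32, 32, 16), dtype=np.uint8)
noisy = np.clip(cube + np.random.normal(0, 2, cube.shape), 0, 255).astype(np.uint8)
print(psnr(cube, noisy), mean_spectral_angle(cube, noisy))
```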
Abstract: First, a simple and practical rectangular transform is given, and then the rapidly developing vector quantization technique is introduced. We combine the rectangular transform with vector quantization for image data compression. The combination cuts down the dimensionality of vector coding, so the size of the codebook can reasonably be reduced. This method reduces computational complexity and speeds up the vector coding process. Experiments using an image processing system show that this method is very effective in the field of image data compression.
Abstract: A new efficient method based on Quadtree Representation and Vector Entropy Coding (QRVEC) for encoding the wavelet transform coefficients of images is presented. In addition, how to flexibly control the coder's output bit rate is also investigated.
Abstract: A recent trend in computer graphics and image processing is to use Iterated Function Systems (IFS) to generate and describe both man-made graphics and natural images. Jacquin was the first to propose a fully automatic grayscale image compression algorithm, which is referred to as a typical static fractal transform based algorithm in this paper. With this algorithm, an image can be condensely described as a fractal transform operator, which is the combination of a set of fractal mappings. When the fractal transform operator is iteratively applied to any initial image, a unique attractor (the reconstructed image) is reached. In this paper, a dynamic fractal transform is presented as a modification of the static transform. Instead of being fixed, the dynamic transform operator varies in each decoder iteration and thus differs from static transform operators. The new transform has advantages in improving coding efficiency and shows better convergence at the decoder.
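The sketch below illustrates the static fractal decoding loop described above: a fixed set of contractive block mappings is iterated from an arbitrary (here zero) image until the attractor appears. The mapping parameters are invented purely for illustration.

```python
import numpy as np

def fractal_decode(mappings, size=64, block=8, iters=10):
    """Iterate a fixed set of contractive block mappings until the attractor appears.
    Each mapping sends a (2*block) domain block to a range block: downsample,
    scale by s (|s| < 1), add offset o."""
    img = np.zeros((size, size))
    for _ in range(iters):
        out = np.empty_like(img)
        for (ry, rx, dy, dx, s, o) in mappings:
            dom = img[dy:dy + 2 * block, dx:dx + 2 * block]
            dom = dom.reshape(block, 2, block, 2).mean(axis=(1, 3))  # 2x downsample
            out[ry:ry + block, rx:rx + block] = s * dom + o
        img = out
    return img

# Invented mappings covering a 16x16 image with four 8x8 range blocks.
mappings = [(0, 0, 0, 0, 0.5, 10), (0, 8, 0, 0, 0.4, 80),
            (8, 0, 0, 0, 0.6, 40), (8, 8, 0, 0, 0.3, 120)]
attractor = fractal_decode(mappings, size=16)
print(attractor[::4, ::4].round(1))  # same result regardless of the initial image
```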
Abstract: The latest advancements in the processing abilities of smart devices have resulted in the design of the Intelligent Internet of Things (IoT) environment. This advanced environment enables nodes to connect, collect, perceive, and examine useful data from their surroundings. Wireless Multimedia Surveillance Networks (WMSNs) form a vital part of the IoT-assisted environment since they contain visual sensors that examine the surroundings from a number of overlapping views by capturing images incessantly. Since IoT devices generate a massive quantity of digital media, it is therefore required to store the media, especially images, in a secure way. In order to achieve security, encryption techniques as well as compression techniques are employed to reduce the amount of digital data being communicated over the network. Encryption Then Compression (ETC) techniques pave the way for secure and compact transmission of the available data to prevent unauthorized access. With this background, the current research paper presents a new ETC technique to accomplish image security in the IoT environment. The proposed model involves three major processes, namely IoT-based image acquisition, encryption, and compression. The presented model involves an optimal Signcryption Technique with Whale Optimization Algorithm (NMWOA), abbreviated as ST-NMWOA. Optimal key generation for the signcryption technique takes place with the help of the NMWOA. Besides, the presented model also uses a Discrete Fourier Transform (DFT) and Matrix Minimization (MM) algorithm-based compression technique. An extensive set of experimental analyses was conducted to validate the performance of the proposed model. The obtained values infer that the presented model is superior in terms of both compression efficiency and data secrecy in resource-limited IoT environments.
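The MM algorithm is not reproduced here; the following sketch illustrates a generic DFT-based compression step in which only the largest-magnitude coefficients are retained, as a stand-in under stated assumptions for the paper's DFT and MM based technique.

```python
import numpy as np

def dft_compress(img, keep=0.1):
    """Keep only the largest-magnitude fraction of 2-D DFT coefficients."""
    coeffs = np.fft.fft2(img.astype(float))
    thresh = np.quantile(np.abs(coeffs), 1 - keep)
    sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0)
    recon = np.real(np.fft.ifft2(sparse))
    return sparse, recon

img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
sparse, recon = dft_compress(img, keep=0.25)
# Only the retained (index, value) pairs need to be stored or transmitted.
```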
Funding: Supported by the Natural Science Foundation of China (No. 60472037).
Abstract: We studied the variation of image entropy before and after wavelet decomposition, the optimal number of wavelet decomposition layers, and the effect of wavelet bases and image frequency components on entropy. Numerous experiments were done on typical images to calculate (using Matlab) the entropy before and after the wavelet transform. It was verified that, to obtain minimal entropy, a three-layer decomposition should be adopted rather than higher orders. The result achieved using biorthogonal wavelet decomposition is better than that of orthogonal wavelet decomposition. The results are not directly proportional to the vanishing moment, however.
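The sketch below reproduces the kind of measurement described above in Python rather than Matlab: the first-order Shannon entropy of an image and of its rounded coefficients after a three-layer biorthogonal wavelet decomposition. The wavelet choice and the random test data are illustrative; on natural images the coefficient entropy is typically lower.

```python
import numpy as np
import pywt  # PyWavelets

def shannon_entropy(values):
    """First-order Shannon entropy (bits/sample) of integer-quantized values."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

img = np.random.randint(0, 256, (64, 64)).astype(float)
coeffs = pywt.wavedec2(img, "bior4.4", level=3)  # three-layer decomposition
flat = np.concatenate([c.ravel() for c in
                       [coeffs[0]] + [d for lvl in coeffs[1:] for d in lvl]])
print(shannon_entropy(img.astype(int)), shannon_entropy(np.round(flat).astype(int)))
```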
Funding: Financial support of the Czech Science Foundation (Project No. 102/07/0496) and of the Grant Agency of the Academy of Sciences of the Czech Republic (Project No. IAA100760702).
Abstract: We present a novel compression algorithm for 2D scientific data and images based on exponentially convergent adaptive higher-order finite element methods (FEM). So far, FEM has been used mainly for the solution of partial differential equations (PDE), but we show that it can easily be applied to data and image compression as well. The adaptive compression algorithm is trivial compared to adaptive FEM algorithms for PDE, since the error estimation step is not present. The method attains extremely high compression rates and is able to compress a data set or an image with any prescribed error tolerance. Compressed data and images are stored in the standard FEM format, which makes it possible to analyze them using standard PDE visualization software. Numerical examples are shown. The method is presented in such a way that it can be understood by readers who are not experts in the finite element method.
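As a simplified one-dimensional analogue of the adaptive FEM compression described above (not the authors' algorithm), the sketch below refines a piecewise-linear approximant by adding knots until a prescribed error tolerance is met.

```python
import numpy as np

def adaptive_pw_linear(x, y, tol=0.01):
    """Greedy adaptive refinement: keep adding the worst-fit sample as a knot
    until the piecewise-linear interpolant meets the error tolerance."""
    knots = [0, len(x) - 1]
    while True:
        approx = np.interp(x, x[knots], y[knots])
        err = np.abs(approx - y)
        worst = int(err.argmax())
        if err[worst] <= tol:
            return knots, approx
        knots = sorted(set(knots) | {worst})

x = np.linspace(0, 1, 1000)
y = np.exp(-40 * (x - 0.3) ** 2)          # smooth signal with one sharp feature
knots, approx = adaptive_pw_linear(x, y, tol=1e-3)
print(len(knots), "knots instead of", len(x), "samples")
```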
Funding: Supported by the Chongqing Research Program of Basic Research and Frontier Technology (cstc2017jcyjBX0008), the Graduate Student Research and Innovation Foundation of Chongqing (CYB17026), and the Basic Applied Research Program of Qinghai Province (2019-ZJ-7099).
Abstract: In order to achieve image encryption and data embedding simultaneously, a reversible data hiding (RDH) algorithm for encrypted-compressed images in the wavelet domain is proposed. The scheme employs a quality-controllable parameter, and it has a larger embedding capacity and smaller quality control parameters than other methods in the literature. Meanwhile, the cross chaotic map is employed to generate chaotic sequences, and the key space of the algorithm is very large. Experimental results and comparisons show that the proposed scheme has large capacity, high security, and strong resistance to brute-force attacks.
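The cross chaotic map itself is not reproduced here; the sketch below uses the standard logistic map as a stand-in to show how a keyed chaotic sequence is generated and how sensitive it is to the key, which is the property behind the large key space.

```python
import numpy as np

def logistic_sequence(x0, mu=3.99, n=1024, discard=100):
    """Keyed chaotic sequence from the logistic map x -> mu * x * (1 - x).
    (A stand-in for the cross chaotic map used in the paper.)"""
    x, seq = x0, []
    for i in range(n + discard):
        x = mu * x * (1 - x)
        if i >= discard:
            seq.append(x)
    return np.array(seq)

# Tiny changes in the key (x0) produce completely different sequences.
k1 = logistic_sequence(0.3141592653)
k2 = logistic_sequence(0.3141592654)
print(np.abs(k1 - k2).mean())
```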