A novel joint source-channel distortion model was proposed, which can essentially estimate the average distortion in progressive image transmission. To improve the precision of the model, the redundancy generated by a forbidden symbol in the arithmetic codes is used to distinguish between the quantization distortion and the channel distortion. All the coefficients from the first erroneous one to the end of the sequence are set to a value within the variance range of the coefficients instead of zero, so the error propagation caused by the entropy coding can be essentially estimated, an effect that is disregarded in most conventional joint source-channel coding (JSCC) systems. The precision of the model in terms of average peak signal-to-noise ratio (PSNR) is improved by about 0.5 dB compared with classical works. An efficient unequal error protection system based on the model is developed, which can be used in wireless communication systems.
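As a rough illustration of the tail-substitution idea above, the following Python sketch compares the estimated distortion when the coefficients after the first erroneous one are zeroed (the classical assumption) versus replaced by a value within their variance range. The coefficient sequence, error position, and the choice of the standard deviation as the substitute value are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def estimated_distortion(coeffs, first_error_idx, fill="variance"):
    """Estimate MSE after a channel error at position first_error_idx.

    Coefficients before the error are assumed decoded correctly; the rest of
    the sequence is replaced either by zero (classical assumption) or by a
    value inside the coefficients' variance range.
    """
    tail = np.asarray(coeffs[first_error_idx:], dtype=float)
    if fill == "zero":
        substitute = 0.0                       # classical JSCC assumption
    else:
        substitute = float(np.std(coeffs))     # assumed value within the variance range
    return float(np.mean((tail - substitute) ** 2) * len(tail) / len(coeffs))

# Toy coefficient sequence with decaying magnitudes (stands in for subband data).
rng = np.random.default_rng(0)
coeffs = rng.normal(0, 1, 256) * np.exp(-np.arange(256) / 64)
print("zero fill     :", estimated_distortion(coeffs, 40, "zero"))
print("variance fill :", estimated_distortion(coeffs, 40, "variance"))
```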
In this paper, we analyse a new chaos-based cryptosystem with an embedded adaptive arithmetic coder, which was proposed by Li Heng-Jian and Zhang J S (Li H J and Zhang J S 2010 Chin. Phys. B 19 050508). Although this new method has better compression performance than its original version, it is found that there are some problems with its security and decryption processes. We show how to obtain a great deal of plaintext from the ciphertext without prior knowledge of the secret key. After discussing the security and decryption problems of the Li Heng-Jian et al. algorithm, we propose an improved chaos-based cryptosystem with an embedded adaptive arithmetic coder that is more secure.
In this study an adaptive arithmetic coder is embedded in the Baptista-type chaotic cryptosystem for implementing secure data compression. To build the multiple lookup tables of secure data compression, the phase space of the chaotic map, which has a uniform distribution in the search mode, is divided non-uniformly according to the dynamic probability estimation of the plaintext symbols. As a result, more probable symbols are selected according to the local statistical characteristics of the plaintext, and the required number of iterations is small since the more probable symbols have a higher chance of being visited by the chaotic search trajectory. By exploiting the non-uniform probabilities with which the number of iterations to be coded takes on its possible values, compression is achieved by adaptive arithmetic coding. Therefore, the system offers both compression and security. Compared with original arithmetic coding, simulation results on Calgary Corpus files show that the proposed scheme suffers a reduction in compression performance of less than 12% and is not susceptible to previously reported attacks on arithmetic coding algorithms.
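For readers unfamiliar with the underlying search mechanism, the sketch below shows a minimal single-table Baptista-type chaotic search: each plaintext symbol owns one slice of the logistic-map phase space, and the cipher unit is the number of iterations until the trajectory lands in that slice. It is not the paper's adaptive multi-table scheme with arithmetic coding of the iteration counts; the alphabet, map parameter and interval bounds are arbitrary assumptions.

```python
# Minimal Baptista-style chaotic search (illustrative only).
def logistic(x, r=3.99):
    return r * x * (1.0 - x)

def encrypt(message, alphabet, x0=0.41, lo=0.2, hi=0.8):
    width = (hi - lo) / len(alphabet)
    x, counts = x0, []
    for ch in message:
        idx, n = alphabet.index(ch), 0
        while True:
            x, n = logistic(x), n + 1
            if lo + idx * width <= x < lo + (idx + 1) * width:
                break
        counts.append(n)                  # iteration count = cipher unit
    return counts

def decrypt(counts, alphabet, x0=0.41, lo=0.2, hi=0.8):
    width = (hi - lo) / len(alphabet)
    x, out = x0, []
    for n in counts:
        for _ in range(n):
            x = logistic(x)
        out.append(alphabet[int((x - lo) / width)])
    return "".join(out)

alphabet = "abcdefgh"
cipher = encrypt("badcafe", alphabet)
assert decrypt(cipher, alphabet) == "badcafe"
```

Because frequent symbols are reached after fewer iterations, the iteration counts are highly non-uniform, which is exactly what the adaptive arithmetic coder exploits for compression.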
Data compression plays a key role in optimizing the use of memory storage space and also in reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is exploited with lossy compression techniques for images and videos, generally using a mixed approach. To achieve our intended objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant methods, namely: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on relevant data that we constructed according to a deliberate model, the results show that these methods, presented in order of performance, are very satisfactory: LZW, arithmetic coding, Tunstall's algorithm, and BWT + RLE. Likewise, it appears that, on the one hand, the performance of certain techniques relative to others is strongly linked to the sequencing and/or recurrence of the symbols that make up the message and, on the other hand, to the cumulative time of encoding and decoding.
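The dependence on symbol sequencing noted above can be seen with a small sketch comparing two of the surveyed methods, RLE and LZW, on a repeating-pattern text similar in spirit to the paper's purposive dataset. Python is used here in place of the authors' Matlab scripts, and the test string is an invented example.

```python
def rle_encode(s):
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))              # (symbol, run length)
        i = j
    return out

def lzw_encode(s):
    table = {chr(c): c for c in range(256)}    # initial single-character dictionary
    w, codes = "", []
    for ch in s:
        if w + ch in table:
            w += ch
        else:
            codes.append(table[w])
            table[w + ch] = len(table)         # grow dictionary with the new phrase
            w = ch
    if w:
        codes.append(table[w])
    return codes

text = "ABABABAB" * 64                         # strongly repetitive but alternating pattern
print("input symbols :", len(text))
print("RLE pairs     :", len(rle_encode(text)))
print("LZW codes     :", len(lzw_encode(text)))
```

On this alternating pattern every RLE run has length 1, so RLE expands the data, whereas LZW captures ever-longer repeated phrases, illustrating why relative performance depends on how the symbols recur.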
For protecting the copyright of a text and recovering its original content harmlessly, this paper proposes a novel reversible natural language watermarking method that combines arithmetic coding and synonym substitution operations. By analyzing the relative frequencies of synonymous words, the synonyms employed for carrying the payload are quantized into an unbalanced and redundant binary sequence. The quantized binary sequence is compressed losslessly by adaptive binary arithmetic coding to provide spare space for accommodating additional data. Then, the compressed data appended with the watermark are embedded into the cover text via synonym substitutions in an invertible manner. On the receiver side, the watermark and compressed data can be extracted by decoding the values of the synonyms in the watermarked text, after which the original text can be perfectly recovered by decompressing the extracted compressed data and substituting the replaced synonyms with their original synonyms. Experimental results demonstrate that the proposed method can extract the watermark successfully and achieve a lossless recovery of the original text. Additionally, it achieves a high embedding capacity.
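A hedged sketch of the capacity idea follows: synonym choices are mapped to bits (1 when the more frequent word of its pair is used), the resulting biased bit string is compressible, and the saved bits become room for the watermark. The synonym pairs, their frequencies and the cover text below are invented, and the compressed size is estimated from the empirical entropy rather than produced by an actual arithmetic coder.

```python
import math

pairs = {("big", "large"): (0.7, 0.3), ("buy", "purchase"): (0.8, 0.2)}
cover = ["big", "buy", "big", "large", "buy", "big", "buy", "big"]

def quantize(words):
    bits = []
    for w in words:
        for (a, b), _ in pairs.items():
            if w in (a, b):
                bits.append(1 if w == a else 0)   # 1 = the more frequent synonym
    return bits

def entropy(bits):
    p = sum(bits) / len(bits)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

bits = quantize(cover)
spare = len(bits) - math.ceil(len(bits) * entropy(bits))
print(f"{len(bits)} carrier bits, about {spare} bits spare for the watermark")
```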
An adaptive pipelining scheme for an H.264/AVC context-based adaptive binary arithmetic coding (CABAC) decoder for high definition (HD) applications is proposed to solve data hazard problems coming from the data dependencies in the CABAC decoding process. An efficiency model of the CABAC decoding pipeline is derived from the analysis of a common pipeline. Based on that, several adaptive strategies are provided. The pipelining scheme with these strategies can adapt to different types of syntax elements (SEs), and the pipeline does not stall during the decoding process when these strategies are adopted. In addition, the proposed decoder can fully support the H.264/AVC High 4:2:2 profile, and the experimental results show that the efficiency of the decoder is much higher than that of other architectures with one engine. Taking both performance and cost into consideration, our design makes a good tradeoff compared with other work and is sufficient for HD real-time decoding.
This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding compression algorithm. The algorithm increases the image compression rate and ensures the quality of the decoded image by combining an adaptive probability model with predictive coding. Using an adaptive model for each encoded image block dynamically estimates the probability distribution of that block, and the decoded image block can accurately recover the encoded image according to the code book information. We adopt an adaptive arithmetic coding algorithm for image compression that greatly improves the image compression rate. The results show that it is an effective compression technology.
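A minimal sketch of the adaptive probability model that drives such a coder is shown below: symbol counts start uniform and are updated after each coded symbol, so the estimated distribution tracks the local statistics of an image block. The block values are invented prediction residuals, and the code length is the ideal (entropy) length rather than the output of a full arithmetic coder.

```python
import math

class AdaptiveModel:
    def __init__(self, alphabet_size=256):
        self.counts = [1] * alphabet_size       # Laplace-smoothed symbol counts

    def prob(self, symbol):
        return self.counts[symbol] / sum(self.counts)

    def update(self, symbol):
        self.counts[symbol] += 1                # adapt after coding the symbol

model = AdaptiveModel()
block = [0, 0, 1, 0, 255, 0, 1, 0, 0, 2]        # toy residuals of one image block
bits = 0.0
for s in block:
    bits += -math.log2(model.prob(s))           # ideal code length for this symbol
    model.update(s)
print(f"estimated code length: {bits:.1f} bits for {len(block)} symbols")
```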
Arithmetic coding is the most powerful technique for statistical lossless encoding and has attracted much attention in recent years. In this paper, we present a new implementation of bit-level arithmetic coding by use of integer additions and shifts. The new algorithm has lower computational complexity and is more flexible to use, and thus is very suitable for software and hardware design. We also discuss the application of the algorithm to data encryption.
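For context, the sketch below shows plain interval-subdivision arithmetic coding with exact fractions; it illustrates what an integer shift-and-add implementation approximates, but it is not the authors' algorithm. The two-symbol model and the message are toy examples.

```python
from fractions import Fraction

probs = {"a": Fraction(3, 4), "b": Fraction(1, 4)}       # static source model

def cum_intervals(probs):
    lo, table = Fraction(0), {}
    for sym, p in probs.items():
        table[sym] = (lo, lo + p)
        lo += p
    return table

def encode(msg):
    table, lo, width = cum_intervals(probs), Fraction(0), Fraction(1)
    for ch in msg:
        a, b = table[ch]
        lo, width = lo + width * a, width * (b - a)       # shrink the interval
    return lo + width / 2                                 # any point inside the interval

def decode(x, n):
    table, out = cum_intervals(probs), []
    for _ in range(n):
        for sym, (a, b) in table.items():
            if a <= x < b:
                out.append(sym)
                x = (x - a) / (b - a)                     # rescale back into [0, 1)
                break
    return "".join(out)

code = encode("aababaa")
assert decode(code, 7) == "aababaa"
```

A practical coder replaces the unbounded fractions with fixed-width integer registers that are renormalized by shifting out decided bits, which is where the addition-and-shift formulation comes in.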
The development of CT (Computed Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), EBCT (Electron Beam Computed Tomography), SMRI (Stereotactic Magnetic Resonance Imaging), etc. has enhanced the resolution and scanning rate of imaging equipment. The diagnosis and the extraction of useful information from the image are obtained by processing the medical images using the wavelet technique, and the wavelet transform has increased the compression rate. Increasing the compression performance by minimizing the amount of image data in medical images is a critical task. Crucial medical information, such as the diagnosis of diseases and their treatments, is obtained by modern radiology techniques, and the Medical Imaging (MI) process is used to acquire that information. Several techniques have been developed for lossy and lossless image compression. The extension of the 1-D wavelet transform has limitations in capturing image edges, because the wavelet transform cannot effectively represent straight-line discontinuities; likewise, geographic lines in natural images cannot be reconstructed properly if the 1-D transform is used. Differently oriented image textures are coded well using the Curvelet Transform, which makes it suitable for compressing medical images with many curved portions. This paper describes a method for compression of various medical images using the Fast Discrete Curvelet Transform based on the wrapping technique. After transformation, the coefficients are quantized using vector quantization and coded using an arithmetic encoding technique. The proposed method is tested on various medical images, and the results demonstrate significant improvement in performance parameters such as Peak Signal to Noise Ratio (PSNR) and Compression Ratio (CR).
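The two reported metrics, PSNR and compression ratio, are standard and can be computed as in the short sketch below; the image arrays and the compressed size are placeholders, not results from the curvelet/vector-quantization pipeline.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)                 # toy 8-bit image
rec = np.clip(img.astype(int) + rng.integers(-3, 4, img.shape), 0, 255)
print(f"PSNR = {psnr(img, rec):.2f} dB")
print(f"CR   = {compression_ratio(img.nbytes, 1024):.2f}:1")
```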
Point cloud compression is critical to deploying 3D representations of the physical world, such as 3D immersive telepresence, autonomous driving, and cultural heritage preservation. However, point cloud data are distributed irregularly and discontinuously in the spatial and temporal domains, where redundant unoccupied voxels and weak correlations in 3D space make efficient compression a challenging problem. In this paper, we propose a spatio-temporal context-guided algorithm for lossless point cloud geometry compression. The proposed scheme starts by dividing the point cloud into sliced layers of unit thickness along the longest axis. Then, it introduces a prediction method in which both intra-frame and inter-frame point clouds are available, by determining correspondences between adjacent layers and estimating the shortest path using the travelling salesman algorithm. Finally, the small prediction residual is efficiently compressed with optimal context-guided and adaptive fast-mode arithmetic coding techniques. Experiments show that the proposed method can effectively achieve low-bit-rate lossless compression of point cloud geometric information and is suitable for 3D point cloud compression applicable to various types of scenes.
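A sketch of the first stage only, slicing a voxelized point cloud into unit-thickness layers along its longest axis, is shown below. The random voxel coordinates stand in for real geometry; the correspondence, travelling-salesman prediction and arithmetic-coding stages are omitted.

```python
import numpy as np

def slice_layers(points):
    extents = points.max(axis=0) - points.min(axis=0)
    axis = int(np.argmax(extents))                      # pick the longest axis
    coords = points[:, axis]
    layers = {}
    for idx in np.unique(coords):
        layers[int(idx)] = points[coords == idx]        # one unit-thickness layer
    return axis, layers

rng = np.random.default_rng(2)
pts = rng.integers(0, (64, 32, 16), size=(500, 3))      # toy voxel coordinates
axis, layers = slice_layers(pts)
print(f"longest axis = {axis}, {len(layers)} layers, "
      f"largest layer has {max(len(v) for v in layers.values())} points")
```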
In recent years, microarray technology has gained attention for the concurrent monitoring of numerous microarray images. It remains a major challenge to process, store and transmit such huge volumes of microarray images, so image compression techniques are used to reduce the number of bits so that the images can be stored and shared easily. Various techniques have been proposed in the past, with applications in different domains. The current research paper presents a novel image compression technique, optimized Linde–Buzo–Gray (OLBG) with Lempel–Ziv–Markov chain algorithm (LZMA) coding, called OLBG-LZMA, for compressing microarray images without any loss of quality. The LBG model is generally used to design a locally optimal codebook for image compression. Codebook construction is treated as an optimization issue and can be resolved with the help of the Grey Wolf Optimization (GWO) algorithm. Once the codebook is constructed by the LBG-GWO algorithm, LZMA is employed to compress the index table and raise its compression efficiency further. Experiments were performed on a high-resolution Tissue Microarray (TMA) image dataset of 50 prostate tissue samples collected from prostate cancer patients. The compression performance of the proposed coding was compared with recently proposed techniques. The simulation results infer that OLBG-LZMA coding achieved a significant compression performance compared to other techniques.
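A hedged sketch of the coding step follows: a vector-quantization index table (the output of an LBG-style codebook, here replaced by nearest-codeword assignment against a made-up codebook) is packed to bytes and compressed with the standard-library LZMA coder. The GWO optimization of the codebook is not shown.

```python
import lzma
import numpy as np

rng = np.random.default_rng(3)
codebook = rng.integers(0, 256, (64, 16)).astype(float)      # 64 codewords for 4x4 blocks
choices = rng.integers(0, 8, 4096)                           # toy image reuses few codewords
blocks = codebook[choices] + rng.normal(0, 2.0, (4096, 16))  # noisy blocks around codewords

# Nearest-codeword assignment produces the index table.
dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
index_table = dists.argmin(axis=1).astype(np.uint8)

compressed = lzma.compress(index_table.tobytes(), preset=9)
print(f"index table: {index_table.nbytes} B -> {len(compressed)} B after LZMA")
```

Because the toy image reuses only a handful of codewords, the index table has low entropy and LZMA shrinks it substantially, which is the effect the paper relies on for the index-table stage.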
We improve the iterative decoding algorithm by utilizing the "leaked" residual redundancy at the output of the source encoder, without changing the encoder structure, for the noisy channel. The experimental results show that using the residual redundancy of the compressed source in channel decoding is an effective method to improve the error correction performance.
A new Modified Discrete Wavelet Packet Transform (MDWPT) based method for the compression of surface EMG (s-EMG) signal data is presented. The MDWPT is applied to the digitized s-EMG signal, and a Discrete Cosine Transform (DCT) is applied to the MDWPT detail coefficients only. The MDWPT + DCT coefficients are quantized with a Uniform Scalar Dead-Zone Quantizer (USDZQ), and an arithmetic coder is employed for the entropy coding of the symbol streams. The proposed approach was tested on more than 35 actual s-EMG signals divided into three categories and was evaluated by the following parameters: Compression Factor (CF), Signal to Noise Ratio (SNR), Percent Root-mean-square Difference (PRD), Mean Frequency Distortion (MFD) and Mean Square Error (MSE). Simulation results show that the proposed coding algorithm outperforms some recently developed s-EMG compression algorithms.
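A minimal sketch of a uniform scalar dead-zone quantizer of the kind applied to the MDWPT + DCT coefficients is given below: values inside the dead zone around zero map to index 0, and the rest are uniformly quantized with midpoint reconstruction. The step size, dead-zone width and test coefficients are arbitrary assumptions, not the paper's settings.

```python
import numpy as np

def usdzq(coeffs, step=4.0, deadzone=1.5):
    signs = np.sign(coeffs)
    mags = np.abs(coeffs)
    half_dz = deadzone * step / 2
    idx = np.where(mags < half_dz, 0,
                   np.floor((mags - half_dz) / step) + 1)   # uniform bins outside dead zone
    return (signs * idx).astype(int)

def dequantize(indices, step=4.0, deadzone=1.5):
    half_dz = deadzone * step / 2
    mags = np.where(indices == 0, 0.0,
                    half_dz + (np.abs(indices) - 0.5) * step)  # bin midpoints
    return np.sign(indices) * mags

c = np.array([-9.3, -2.0, -0.4, 0.0, 0.7, 3.1, 5.6, 12.8])
q = usdzq(c)
print("indices      :", q)
print("reconstructed:", dequantize(q))
```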
Context-based adaptive binary arithmetic coding (CABAC) is the major entropy-coding algorithm employed in H.264/AVC. In this paper, we present a new VLSI architecture design for an H.264/AVC CABAC decoder, which optimizes both the decode-decision and decode-bypass engines for high throughput, and improves context model allocation for efficient external memory access. Based on the fact that the most probable symbol (MPS) branch is much simpler than the least probable symbol (LPS) branch, a newly organized decode-decision engine consisting of two serially concatenated MPS branches and one LPS branch is proposed to achieve better parallelism at lower timing path cost. A look-ahead context index (ctxIdx) calculation mechanism is designed to provide the context model for the second MPS branch. A head-zero detector is proposed to improve the performance of the decode-bypass engine according to UEGk encoding features. In addition, to lower the frequency of memory access, we reorganize the context models in external memory and use three circular buffers to cache the context models, neighboring information, and bit stream, respectively. A pre-fetching mechanism with a prediction scheme is adopted to load the corresponding content into a circular buffer to hide external memory latency. Experimental results show that our design can operate at 250 MHz with a 20.71k gate count in SMIC18 silicon technology, and that it achieves an average data decoding rate of 1.5 bins/cycle.
In the field of lossless compression, most kinds of traditional software have some shortcomings when they face mass data. Their compression abilities are limited by the data window size and the design of the compression format. This paper presents a new design of compression format named 'CZ format', which supports a data window size of up to 4 GB and has some advantages in mass data compression. Using this format, a compression shareware named 'ComZip' is designed. The experiment results support that ComZip has a better compression ratio than WinZip, Bzip2 and WinRAR in most cases, especially when GBs or TBs of mass data are compressed. And ComZip has the potential to beat 7-zip in the future as the data window size exceeds 128 MB.
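The effect of the data window (dictionary) size described above can be illustrated with the standard-library LZMA coder, used here only as a stand-in since ComZip itself is not shown; the data sizes and dictionary settings are arbitrary assumptions. A long-range repetition is only exploited once the dictionary is large enough to reach back to the earlier copy.

```python
import lzma
import os

chunk = os.urandom(2 * 1024 * 1024)             # 2 MiB of incompressible data
data = chunk + chunk                             # the second copy is a long-range match

for dict_size in (1 << 20, 1 << 22):             # 1 MiB vs 4 MiB dictionary (window)
    filt = [{"id": lzma.FILTER_LZMA2, "dict_size": dict_size}]
    out = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filt)
    print(f"dict {dict_size >> 20} MiB: {len(data)} B -> {len(out)} B")
```

With the 1 MiB dictionary the repeated chunk cannot be matched and the output stays near the input size, while the 4 MiB dictionary roughly halves it, mirroring the argument for very large data windows.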