Abstract: An approximately optimal adaptive arithmetic coding (AC) system using a forbidden symbol (FS) over noisy channels is proposed, which allows source decoding and channel error correction to be designed jointly and adaptively in a single process, with performance superior to traditional separate techniques. The concept of adaptiveness is applied not only to the source model but also to the amount of coding redundancy. In addition, an improved branch-metric computation algorithm and a sequential search algorithm faster than that of the system proposed by Grangetto are presented. The proposed system is tested on image transmission over the AWGN channel and compared with a traditional separate system in terms of packet error rate and complexity. Both hard and soft decoding are taken into account.
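The forbidden-symbol mechanism is easy to see in miniature: reserve a slice eps of the coding interval that the encoder never uses, so any received sequence that steers the decoder into that slice exposes a channel error. Below is a minimal floating-point sketch, workable only for short messages; the alphabet, probabilities, and eps are our illustrative assumptions, not the paper's adaptive design.

```python
# Arithmetic coding with a forbidden symbol: the top eps of [0,1) is never
# assigned to any symbol, so decoding into it signals a channel error.

def fs_intervals(probs, eps):
    """Scale symbol probabilities by (1 - eps); [1 - eps, 1) stays forbidden."""
    lo, out = 0.0, {}
    for s, p in probs.items():
        out[s] = (lo, lo + p * (1.0 - eps))
        lo += p * (1.0 - eps)
    return out          # the range [lo, 1.0) is the forbidden symbol's gap

def encode(msg, probs, eps=0.1):
    low, high = 0.0, 1.0
    iv = fs_intervals(probs, eps)
    for s in msg:
        a, b = iv[s]
        low, high = low + (high - low) * a, low + (high - low) * b
    return (low + high) / 2     # any value in [low, high) identifies the message

def decode(x, n, probs, eps=0.1):
    iv = fs_intervals(probs, eps)
    out = []
    for _ in range(n):
        for s, (a, b) in iv.items():
            if a <= x < b:
                out.append(s)
                x = (x - a) / (b - a)
                break
        else:
            raise ValueError("decoder entered the forbidden region: channel error")
    return out

probs = {"a": 0.5, "b": 0.3, "c": 0.2}
code = encode("abac", probs)
print(decode(code, 4, probs))   # ['a', 'b', 'a', 'c']
```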
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 60573172 and 60973152), the Doctoral Program Foundation of Institutions of Higher Education of China (Grant No. 20070141014), and the Natural Science Foundation of Liaoning Province of China (Grant No. 20082165).
Abstract: In this paper, we analyse a chaos-based cryptosystem with an embedded adaptive arithmetic coder that was proposed by Li Heng-Jian and Zhang J S (Li H J and Zhang J S 2010 Chin. Phys. B 19 050508). Although this method compresses better than its original version, we find problems with its security and decryption processes, and we show how to obtain a great deal of plaintext from the ciphertext without prior knowledge of the secret key. After discussing the security and decryption problems of the Li Heng-Jian et al. algorithm, we propose an improved, more secure chaos-based cryptosystem with an embedded adaptive arithmetic coder.
Funding: Supported by the National Natural Science Foundation of China (No. 61202439), the Scientific Research Foundation of the Hunan Provincial Education Department of China (No. 16A008), and the Hunan Key Laboratory of Smart Roadway and Cooperative Vehicle-Infrastructure Systems (No. 2017TP1016).
Abstract: For protecting the copyright of a text and recovering its original content harmlessly, this paper proposes a novel reversible natural-language watermarking method that combines arithmetic coding and synonym substitution. By analyzing the relative frequencies of synonymous words, the synonyms employed for carrying the payload are quantized into an unbalanced, redundant binary sequence. This sequence is compressed losslessly by adaptive binary arithmetic coding to create spare room for additional data. The compressed data, with the watermark appended, are then embedded into the cover text via synonym substitutions in an invertible manner. On the receiver side, the watermark and compressed data can be extracted by decoding the values of the synonyms in the watermarked text, after which the original content is perfectly recovered by decompressing the extracted data and substituting the replaced synonyms with their originals. Experimental results demonstrate that the proposed method extracts the watermark successfully, achieves lossless recovery of the original text, and offers a high embedding capacity.
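The reversible core of such a scheme fits in a few lines. In the toy sketch below the synonym pairs and their frequency ranks are invented, and zlib stands in for the paper's adaptive binary arithmetic coder; it shows only the round trip of quantizing synonyms to bits, compressing them, and restoring the exact original words. On realistic texts the compression is what frees room for the watermark; this toy input is too short for that.

```python
import zlib

# Synonym pairs ordered (more frequent, rarer); pairs and ranks are invented.
SYN = {"big": ("big", "large"), "large": ("big", "large"),
       "fast": ("fast", "quick"), "quick": ("fast", "quick")}

def quantize(words):
    """One bit per synonym occurrence: 0 = the frequent word, 1 = the rare one."""
    return bytes(SYN[w].index(w) for w in words if w in SYN)

def rewrite(words, bits):
    """Set every synonym slot in the text according to a bit sequence."""
    it = iter(bits)
    return [SYN[w][next(it)] if w in SYN else w for w in words]

cover = "the big dog made a quick turn".split()
original_bits = quantize(cover)             # unbalanced binary sequence
compressed = zlib.compress(original_bits)   # on real text this frees capacity

# Receiver side: decompressing recovers the bits, hence the exact words:
assert rewrite(cover, zlib.decompress(compressed)) == cover
print(rewrite(cover, [1, 0]))               # e.g. slots carrying payload bits 1, 0
```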
Funding: The National Natural Science Foundation of China (No. 60332030).
Abstract: A new arithmetic coding system combining source-channel coding and maximum a posteriori (MAP) decoding is proposed. It merges source coding and error correction into one unified process by introducing an adaptive forbidden symbol, and achieves fixed-length codewords by adaptively adjusting the probability of the forbidden symbol and adding tail digits of variable length. The corresponding improved MAP decoding metric is derived. Simulations were performed on AWGN channels at various noise levels, using both hard and soft decisions with BPSK modulation. The results show that its performance is slightly better than that of our earlier adaptive arithmetic error-correcting coding system using a forbidden symbol.
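How an adaptive forbidden symbol can steer the output toward a fixed length is worth one concrete rule. The controller below is an assumed proportional rule of our own, not the paper's: project the final codeword length from the rate observed so far, then spend more redundancy (a larger forbidden-symbol probability eps) when under budget and less when over.

```python
# Hypothetical rate controller for a forbidden-symbol arithmetic coder. eps is
# the probability mass reserved for the FS, costing -log2(1 - eps) extra bits
# per coded symbol; the function name, arguments, and step size are assumptions.

def adapt_eps(bits_so_far, symbols_coded, total_symbols, target_bits,
              eps, step=0.02):
    """Project the final length from the rate so far; widen eps when the
    codeword is running short of the target, narrow it when running long."""
    projected = bits_so_far / max(symbols_coded, 1) * total_symbols
    if projected < target_bits:
        return min(eps + step, 0.5)     # under budget: buy more protection
    return max(eps - step, 1e-3)        # over budget: shed redundancy

# e.g. halfway through, 400 bits of a 1000-bit budget used:
print(adapt_eps(400, 50, 100, 1000, eps=0.1))   # projected 800 < 1000 -> 0.12
```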
Funding: Supported by the National Natural Science Foundation of China (Grant No. 60971104), the Basic Research Foundation of Sichuan Province, China (Grant No. 2006J013-011), and the Outstanding Young Researchers Foundation of Sichuan Province, China (Grant No. 09ZQ026-091).
Abstract: In this study an adaptive arithmetic coder is embedded in a Baptista-type chaotic cryptosystem to implement secure data compression. To build the multiple lookup tables for secure compression, the phase space of the chaotic map, uniformly distributed in the search mode, is divided non-uniformly according to a dynamic probability estimate of the plaintext symbols. Sites are thus assigned according to the local statistics of the plaintext, and the required number of iterations stays small, since the more probable symbols have a higher chance of being visited by the chaotic search trajectory. By exploiting the non-uniform probabilities with which the iteration count to be coded takes its possible values, compression is achieved with an adaptive arithmetic code; the system therefore offers both compression and security. Compared with original arithmetic coding, simulation results on the Calgary Corpus files show that the proposed scheme loses less than 12% in compression performance and is not susceptible to previously published attacks on arithmetic coding algorithms.
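The Baptista-style search step the abstract builds on can be sketched directly: partition the logistic map's phase space into one site per symbol and transmit, for each symbol, the number of iterations until the orbit visits that symbol's site. The partition, map parameter, and key below are illustrative assumptions, not the paper's values.

```python
def logistic(x, r=3.99):
    return r * x * (1.0 - x)

def encrypt_symbol(x, symbol, partition):
    """Iterate until the orbit lands in the symbol's site; the count is the cipher unit."""
    lo, hi = partition[symbol]
    n = 0
    while True:
        x = logistic(x)
        n += 1
        if lo <= x < hi:
            return n, x

def decrypt_symbol(x, n, partition):
    for _ in range(n):
        x = logistic(x)
    for s, (lo, hi) in partition.items():
        if lo <= x < hi:
            return s

# Non-uniform partition standing in for the dynamic probability estimate:
partition = {"a": (0.20, 0.55), "b": (0.55, 0.75), "c": (0.75, 0.90)}
key = 0.3141592                      # the secret key: the initial condition
n, _ = encrypt_symbol(key, "b", partition)
print(n, decrypt_symbol(key, n, partition))   # the count decrypts back to 'b'
```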
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 60573172 and 60973152), the Superior University Doctor Subject Special Scientific Research Foundation of China (Grant No. 20070141014), and the Natural Science Foundation of Liaoning Province of China (Grant No. 20082165).
Abstract: This paper proposes an efficient lossless compression scheme for still images based on adaptive arithmetic coding. The algorithm combines an adaptive probability model with predictive coding, increasing the compression rate while ensuring the quality of the decoded image. An adaptive model for each encoded image block dynamically estimates that block's symbol probabilities, and the decoder accurately recovers each block from the codebook information. The results show that the scheme greatly improves the image compression rate and is an effective compression technique.
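The interplay of the two ingredients, a predictor that decorrelates pixels and an adaptive model that learns the residual statistics on the fly, can be shown in a few lines. The sketch below is our minimal reading of such a scheme: a simple left-neighbor predictor (assumed; the abstract does not specify one) and a Laplace-smoothed adaptive frequency model whose ideal arithmetic-code length is tallied instead of running a full coder.

```python
import numpy as np

def residuals(row):
    """Left-neighbor prediction; residuals wrapped onto the 8-bit alphabet."""
    pred = np.concatenate(([0], row[:-1]))
    return (row.astype(np.int16) - pred) % 256

class AdaptiveModel:
    """Laplace-smoothed symbol counts, updated after every coded symbol,
    exactly the bookkeeping an adaptive arithmetic coder performs."""
    def __init__(self, alphabet=256):
        self.freq = [1] * alphabet
        self.total = alphabet
    def prob(self, s):
        return self.freq[s] / self.total
    def update(self, s):
        self.freq[s] += 1
        self.total += 1

row = np.array([100, 101, 103, 103, 104], dtype=np.uint8)
model, bits = AdaptiveModel(), 0.0
for r in residuals(row):
    bits += -np.log2(model.prob(int(r)))   # ideal arithmetic-code length
    model.update(int(r))
print(f"{bits:.1f} bits for {row.size} pixels")
```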
Abstract: Arithmetic coding is among the most powerful techniques for statistical lossless encoding and has attracted much attention in recent years. In this paper, we present a new implementation of bit-level arithmetic coding that uses only integer additions and shifts. The new algorithm has lower computational complexity and is more flexible to use, making it well suited to software and hardware designs. We also discuss the application of the algorithm to data encryption.
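One standard way to get a multiplication-free, bit-level coder is to approximate the less probable symbol's probability by a power of two, 2^-k, so the interval split is a single shift and renormalization uses only additions and shifts. The sketch below is our simplification in that spirit, following the classic Witten-Neal-Cleary integer construction rather than the paper's exact design, and it shows the encoder side only.

```python
BITS = 16
TOP, HALF, QTR = 1 << BITS, 1 << (BITS - 1), 1 << (BITS - 2)

class BitEncoder:
    """Binary arithmetic encoder using only integer adds, subtracts, and
    shifts. P(1) for each coded bit is approximated as 2**-k, so the
    interval split 'rng >> k' needs no multiplication."""

    def __init__(self):
        self.low, self.rng, self.pending, self.out = 0, TOP, 0, []

    def _bit(self, b):
        self.out.append(b)
        self.out.extend([b ^ 1] * self.pending)  # release deferred underflow bits
        self.pending = 0

    def encode(self, bit, k):
        lps = self.rng >> k                 # interval split: a single shift
        if bit:                             # '1' taken as the less probable symbol
            self.low += self.rng - lps
            self.rng = lps
        else:
            self.rng -= lps
        while self.rng <= QTR:              # renormalize with adds and shifts
            if self.low + self.rng <= HALF:
                self._bit(0)                # interval in the lower half
            elif self.low >= HALF:
                self._bit(1)                # interval in the upper half
                self.low -= HALF
            else:
                self.pending += 1           # straddles the middle: defer the bit
                self.low -= QTR
            self.low <<= 1
            self.rng <<= 1

    def flush(self):
        self.pending += 1                   # standard termination
        self._bit(0 if self.low < QTR else 1)
        return self.out

enc = BitEncoder()
for b in [0, 0, 1, 0, 0, 0, 1, 0]:
    enc.encode(b, k=2)                      # model: P(1) ~ 1/4
print(enc.flush())
```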
Funding: The National Natural Science Foundation of China (No. 60202006).
Abstract: A novel joint source-channel distortion model is proposed that can accurately estimate the average distortion in progressive image transmission. To improve the precision of the model, the redundancy generated by a forbidden symbol in the arithmetic code is used to distinguish quantization distortion from channel distortion, and all coefficients from the first erroneous one to the end of the sequence are set to a value within the variance range of the coefficients rather than to zero. In this way the error propagation arising from entropy coding, which most conventional joint source-channel coding (JSCC) systems disregard, can be estimated. The precision of the model, in terms of average peak signal-to-noise ratio, is improved by about 0.5 dB over classical work. An efficient unequal error protection system based on the model is developed, which can be used in wireless communication systems.
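A toy version of the distortion split makes the idea concrete: distortion before the first channel error is plain quantization error, while everything after it is counted against a fill value drawn from within the coefficients' variance range instead of zero. The Laplacian stand-in coefficients, unit quantizer, and choice of fill below are our assumptions.

```python
import numpy as np

def model_distortion(coeffs, quantized, first_err):
    """MSE model: quantization distortion before the first channel error,
    channel distortion after it, with the lost tail replaced by a fill value
    inside the coefficients' variance range rather than by zero."""
    n = coeffs.size
    fill = np.std(coeffs)                       # assumed within-variance fill
    d_quant = float(np.mean((coeffs[:first_err] - quantized[:first_err]) ** 2)) if first_err else 0.0
    d_chan = float(np.mean((coeffs[first_err:] - fill) ** 2)) if first_err < n else 0.0
    return (first_err * d_quant + (n - first_err) * d_chan) / n

rng = np.random.default_rng(1)
c = rng.laplace(scale=4.0, size=1024)           # stand-in wavelet coefficients
q = np.round(c)                                 # unit-step quantizer
print(model_distortion(c, q, first_err=300))    # grows as the error moves earlier
```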
Abstract: This paper presents a new method of lossless image compression. An image is characterized by homogeneous parts: the high-weight bit planes, which consist of long runs of 0s and 1s, are encoded with run-length encoding (RLE), whereas the other bit planes are encoded with arithmetic coding (AC), under either a static or an adaptive model. By combining AC (adaptive or static) with RLE, a high degree of adaptation and compression efficiency is achieved. The proposed method is compared with both static and adaptive arithmetic coding. Experimental results on a set of 12 gray-level images demonstrate that the proposed scheme gives mean compression ratios higher than those of conventional arithmetic encoders.
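The plane split can be illustrated quickly: peel each bit plane off the image, run-length encode the smooth high-order planes, and leave the noisy ones to an arithmetic coder. In the sketch below the gradient test image and the run-count threshold for choosing a coder are our assumptions, and no actual AC is invoked.

```python
import numpy as np

def bit_plane(img, k):
    return (img >> k) & 1

def rle(bits):
    """Run lengths of alternating values, starting from bits[0]."""
    flat = bits.ravel()
    idx = np.flatnonzero(np.diff(flat)) + 1
    runs = np.diff(np.concatenate(([0], idx, [flat.size])))
    return int(flat[0]), runs

img = (np.arange(64).reshape(8, 8) * 4).astype(np.uint8)   # smooth gradient
for k in range(7, -1, -1):
    first, runs = rle(bit_plane(img, k))
    coder = "RLE" if runs.size < img.size // 4 else "AC"    # assumed threshold
    print(f"plane {k}: starts with {first}, {runs.size:2d} runs -> {coder}")
```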
Abstract: The development of CT (Computed Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), EBCT (Electron Beam Computed Tomography), SMRI (Stereotactic Magnetic Resonance Imaging), and related modalities has enhanced the resolution and scanning rate of imaging equipment, and modern radiology delivers crucial medical information for diagnosing diseases and planning their treatment. Reducing the amount of data in medical images to improve compression performance is therefore a critical task, and several lossy and lossless compression techniques have been developed. Separable extensions of the 1-D wavelet transform are limited in capturing image edges, because the wavelet transform cannot represent straight-line discontinuities effectively, and curved lines in natural images cannot be reconstructed properly. The curvelet transform, in contrast, codes differently oriented image textures well, which makes it suitable for medical images with their many curved structures. This paper describes a method for compressing various medical images using the Fast Discrete Curvelet Transform based on the wrapping technique. After transformation, the coefficients are quantized using vector quantization and coded with arithmetic encoding. The proposed method is tested on various medical images, and the results demonstrate significant improvement in performance parameters such as Peak Signal-to-Noise Ratio (PSNR) and Compression Ratio (CR).
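Of the pipeline's three stages, the vector-quantization step is the easiest to sketch without a curvelet library (the transform itself needs dedicated code such as CurveLab, so random Gaussian coefficients stand in below). A tiny k-means builds the codebook, after which each coefficient vector reduces to a codebook index for the arithmetic coder; every parameter shown is an assumption.

```python
import numpy as np

def kmeans_codebook(vectors, k=16, iters=20, seed=0):
    """Tiny k-means VQ: returns the codebook and each vector's index."""
    rng = np.random.default_rng(seed)
    book = vectors[rng.choice(len(vectors), size=k, replace=False)].copy()
    for _ in range(iters):
        dist = ((vectors[:, None, :] - book[None, :, :]) ** 2).sum(-1)
        idx = dist.argmin(axis=1)                  # nearest codeword per vector
        for j in range(k):
            members = vectors[idx == j]
            if members.size:
                book[j] = members.mean(axis=0)     # recenter the codeword
    return book, idx

coeffs = np.random.default_rng(1).normal(size=(512, 4))   # stand-in coefficients
book, idx = kmeans_codebook(coeffs)
print(book.shape, np.bincount(idx, minlength=16))  # indices feed the arithmetic coder
```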
Funding: Supported by the National Natural Science Foundation of China (No. 61100074) and the Fundamental Research Funds for the Central Universities, China (No. 2013QNA5008).
Abstract: Context-based adaptive binary arithmetic coding (CABAC) is the major entropy-coding algorithm employed in H.264/AVC. In this paper, we present a new VLSI architecture for an H.264/AVC CABAC decoder, which optimizes both the decode-decision and decode-bypass engines for high throughput and improves context-model allocation for efficient external memory access. Based on the fact that the most probable symbol (MPS) branch is much simpler than the least probable symbol (LPS) branch, a newly organized decode-decision engine consisting of two serially concatenated MPS branches and one LPS branch is proposed to achieve better parallelism at lower timing-path cost. A look-ahead context-index (ctxIdx) calculation mechanism provides the context model for the second MPS branch, and a head-zero detector improves the performance of the decode-bypass engine according to UEGk encoding features. In addition, to lower the frequency of memory access, we reorganize the context models in external memory and use three circular buffers to cache the context models, neighboring information, and bit stream, respectively; a pre-fetching mechanism with a prediction scheme loads the corresponding content into a circular buffer to hide external memory latency. Experimental results show that our design can operate at 250 MHz with a 20.71k gate count in SMIC18 silicon technology, and that it achieves an average decoding rate of 1.5 bins/cycle.
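The MPS/LPS asymmetry the architecture exploits is visible even in a behavioral model: the MPS path is a subtraction plus a state bump, while the LPS path must swap the interval, possibly flip the MPS, and renormalize more often. The sketch below is emphatically not the VLSI design: the probability-state table is a four-entry toy instead of H.264's 64-state rangeTabLPS, and no fresh bitstream bits are refilled during renormalization, so it only illustrates the branch asymmetry.

```python
# Toy probability-state table: pState -> LPS share of the range (assumed values).
LPS_TABLE = {0: 0.5, 1: 0.4, 2: 0.3, 3: 0.2}

def decode_decision(rng, offset, state, mps):
    r_lps = int(rng * LPS_TABLE[state])
    r_mps = rng - r_lps
    if offset < r_mps:                    # MPS branch: subtract, bump state
        binval = mps
        rng = r_mps
        state = min(state + 1, 3)
    else:                                 # LPS branch: swap interval, maybe
        binval = 1 - mps                  # flip the MPS, demote the state
        offset -= r_mps
        rng = r_lps
        if state == 0:
            mps = 1 - mps                 # MPS flips at the weakest state
        else:
            state -= 1
    while rng < 128:                      # renormalize the toy register;
        rng <<= 1                         # a real decoder refills offset
        offset <<= 1                      # with fresh bitstream bits here
    return binval, rng, offset, state, mps

rng, offset, state, mps = 256, 100, 2, 0  # assumed register/context values
for _ in range(4):
    binval, rng, offset, state, mps = decode_decision(rng, offset, state, mps)
    print(binval, rng, offset, state, mps)
```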
Funding: Supported by the National Natural Science Foundation of China under Grant No. 60333020 and the Natural Science Foundation of Beijing under Grant No. 4041003.
Abstract: In this paper, a Context-based 2D Variable Length Coding (C2DVLC) method for coding the transformed residuals in the AVS video coding standard is presented. Its two main features are the use of multiple 2D-VLC tables and the use of simple Exponential-Golomb codes. C2DVLC employs context-based adaptive multiple-table coding to exploit the statistical correlation between the DCT coefficients of a block for higher coding efficiency, while Exp-Golomb codes are applied to the pairs of zero-coefficient run length and nonzero coefficient value to lower the storage requirement. C2DVLC is a low-complexity coder in terms of both computation time and memory. Experimental results show that C2DVLC gains 0.34 dB on average for the tested videos over a traditional 2D-VLC coding method like that used in MPEG-2, and shows coding efficiency similar to that of CAVLC in H.264/AVC.
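Both ingredients are compact enough to show directly: a zero-order Exp-Golomb codeword generator and the (run, level) pairing of a scanned residual block. The coefficient values below are illustrative, the signed-to-unsigned mapping follows the common H.264-style convention (AVS details may differ), and the context-based table switching is omitted.

```python
def exp_golomb(n):
    """Zero-order Exp-Golomb codeword for an unsigned integer n."""
    m = n + 1
    return "0" * (m.bit_length() - 1) + format(m, "b")

def run_level_pairs(coeffs):
    """Pair each nonzero coefficient with the run of zeros preceding it."""
    run = 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            yield run, c
            run = 0

coeffs = [7, 0, 0, -2, 1, 0, 0, 0, 1]        # assumed zig-zag-scanned residuals
for run, level in run_level_pairs(coeffs):
    unsigned = 2 * abs(level) - (1 if level > 0 else 0)   # signed -> unsigned map
    print(f"run={run} level={level:2d}  codes: {exp_golomb(run)} {exp_golomb(unsigned)}")
```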
Funding: Supported by the National Natural Science Foundation of China (No. 61076021), the National Basic Research Program of China (No. 2009CB320903), and the China Postdoctoral Science Foundation (No. 2012M511364).
Abstract: An adaptive pipelining scheme for an H.264/AVC context-based adaptive binary arithmetic coding (CABAC) decoder for high-definition (HD) applications is proposed to solve the data-hazard problems that arise from data dependencies in the CABAC decoding process. An efficiency model of the CABAC decoding pipeline is derived from the analysis of a common pipeline, and several adaptive strategies are provided on that basis. A pipeline using these strategies adapts to different types of syntax elements (SEs) and does not stall during decoding. In addition, the proposed decoder fully supports the H.264/AVC High 4:2:2 profile, and experimental results show that its efficiency is much higher than that of other single-engine architectures. Taking both performance and cost into consideration, our design makes a good tradeoff compared with other work and is sufficient for HD real-time decoding.
Abstract: We improve the iterative decoding algorithm for noisy channels by utilizing the "leaked" residual redundancy at the output of the source encoder, without changing the encoder structure. Experimental results show that exploiting the residual redundancy of the compressed source during channel decoding is an effective way to improve error-correction performance.
Abstract: In this paper, we present a method that uses video codec technology to compress ECG signals. The method exploits both the intra-beat and inter-beat correlations of ECG signals to achieve high compression ratios (CR) and a low percent root-mean-square difference (PRD). Since ECG signals have intra-beat and inter-beat redundancies analogous to the intra-frame and inter-frame correlations of video signals, video codec technology can be applied to ECG compression after some pre-processing: the ECG signal is first segmented and normalized into a sequence of beat cycles of equal length, and these beat cycles are then treated as picture frames and compressed with the video codec. We used records from the MIT-BIH arrhythmia database to evaluate our algorithm. Results show that, besides compressing efficiently, the algorithm offers adjustable resolution, random access, and flexibility for irregular beat periods and false QRS detections.
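The pre-processing step is the part that can be sketched without any codec: cut the signal at already-detected beat fiducials and resample every beat to a common length, so the beats stack into "frames". The synthetic spiky signal and crude peak picking below are stand-ins for a real ECG record and QRS detector.

```python
import numpy as np

def beats_to_frames(sig, peaks, length=256):
    """Resample each beat (peak-to-peak segment) to a common length so the
    beats stack into rows, the 'frames' handed to the video codec."""
    frames = []
    for a, b in zip(peaks[:-1], peaks[1:]):
        beat = sig[a:b]
        x_old = np.linspace(0.0, 1.0, beat.size)
        x_new = np.linspace(0.0, 1.0, length)
        frames.append(np.interp(x_new, x_old, beat))
    return np.vstack(frames)

# Crude spiky stand-in for an ECG, plus naive peak picking as the "QRS detector":
t = np.linspace(0.0, 4.0, 1440)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63
peaks = np.flatnonzero((ecg[1:-1] > ecg[:-2]) & (ecg[1:-1] > ecg[2:])) + 1
peaks = peaks[ecg[peaks] > 0.9]
print(beats_to_frames(ecg, peaks).shape)   # (number_of_beats, 256) frame matrix
```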
Abstract: Data compression plays a key role in optimizing the use of memory storage space and in reducing latency in data transmission. In this paper we are interested in lossless compression techniques, because their performance is also exploited within lossy compression of images and video, which generally uses a mixed approach. To study the performance of lossless compression methods, we first carried out a literature review, from which we selected the most relevant techniques: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding, and Shannon-Fano. Secondly, we designed a purpose-built text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected techniques. Thirdly, we implemented the compression algorithms as Matlab programs (scripts) to test their performance. Finally, on the data constructed according to this deliberate model, the results show that the methods perform very satisfactorily, ranked in order of performance: LZW, arithmetic coding, the Tunstall algorithm, and BWT + RLE. Likewise, it appears that, on the one hand, the performance of certain techniques relative to others is strongly linked to the sequencing and/or recurrence of the symbols that make up the message, and on the other hand, to the cumulative time of encoding and decoding.
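A minimal harness in the spirit of this study is easy to reproduce with standard-library codecs (zlib for the LZ family, bz2 for a BWT-based coder; both are stand-ins for the paper's own Matlab implementations of LZW, AC, Tunstall, and BWT+RLE). It measures exactly the two quantities the conclusions hinge on, compression ratio and cumulative encode+decode time, on a deliberately repetitive text versus incompressible bytes.

```python
import bz2, os, time, zlib

def bench(name, compress, decompress, data):
    t0 = time.perf_counter()
    blob = compress(data)
    assert decompress(blob) == data            # losslessness check
    dt = time.perf_counter() - t0              # cumulative encode+decode time
    print(f"  {name:4s} ratio={len(data) / len(blob):7.1f}  time={dt * 1e3:6.1f} ms")

repetitive = b"ABRACADABRA " * 8000            # deliberate repeating pattern
random_ish = os.urandom(len(repetitive))       # incompressible control
for label, data in [("repetitive", repetitive), ("random", random_ish)]:
    print(label)
    bench("zlib", zlib.compress, zlib.decompress, data)
    bench("bz2", bz2.compress, bz2.decompress, data)
```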