Journal Articles
15 articles found
1. Joint Distortion Model for Progressive Image Transmission Using Error Correcting Arithmetic Codes
Authors: 刘军清, 孙军, 龙沪强. 《Journal of Shanghai Jiaotong University (Science)》, EI, 2008, No. 1, pp. 16-20.
A novel joint source-channel distortion model is proposed that estimates the average distortion in progressive image transmission. To improve the precision of the model, the redundancy generated by a forbidden symbol in the arithmetic codes is used to distinguish quantization distortion from channel distortion. All the coefficients from the first erroneous one to the end of the sequence are set to a value within the variance range of the coefficients instead of zero, so that the error propagation originating in the entropy coding, which most conventional joint source-channel coding (JSCC) systems disregard, can be estimated. The precision of the model in terms of average peak signal-to-noise ratio is improved by about 0.5 dB compared with classical work. An efficient unequal error protection system based on the model is developed and can be used in wireless communication systems.
Keywords: joint source-channel coding (JSCC), distortion model, arithmetic codes, forbidden symbol, unequal error protection
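The forbidden-symbol idea this abstract relies on reserves a slice ε of the arithmetic coder's interval for a symbol that is never encoded; a channel error eventually drives the decoder into that slice, flagging corruption. A minimal sketch of the resulting cost/benefit trade-off (the value of ε is an illustrative assumption, not taken from the paper):

```python
import math

def forbidden_symbol_redundancy(eps):
    """Redundancy in bits per encoded symbol added by reserving
    probability mass eps for a forbidden symbol: every real symbol's
    interval shrinks by a factor (1 - eps)."""
    return -math.log2(1.0 - eps)

def expected_detection_delay(eps):
    """Mean number of symbols decoded after an error before the decoder
    falls into the forbidden region (geometric with success rate eps)."""
    return 1.0 / eps

eps = 0.05  # illustrative forbidden-symbol probability
print(f"redundancy: {forbidden_symbol_redundancy(eps):.4f} bits/symbol")
print(f"mean detection delay: {expected_detection_delay(eps):.1f} symbols")
```

Larger ε detects errors sooner but spends more rate, which is exactly the redundancy the paper exploits to separate channel distortion from quantization distortion.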
2. Cryptanalysis of a chaos-based cryptosystem with an embedded adaptive arithmetic coder (cited: 2)
Authors: 王兴元, 谢旖欣. 《Chinese Physics B》, SCIE EI CAS CSCD, 2011, No. 8, pp. 97-105.
In this paper, we analyse a chaos-based cryptosystem with an embedded adaptive arithmetic coder, which was proposed by Li Heng-Jian and Zhang J S (Li H J and Zhang J S 2010 Chin. Phys. B 19 050508). Although this new method has better compression performance than its original version, there are problems with its security and decryption processes. We show how to obtain a great deal of plaintext from the ciphertext without prior knowledge of the secret key. After discussing these security and decryption problems, we propose an improved, more secure chaos-based cryptosystem with an embedded adaptive arithmetic coder.
Keywords: chaos, cryptography, compression, arithmetic coding
3. Embedding adaptive arithmetic coder in chaos-based cryptography
Authors: 李恒建, 张家树. 《Chinese Physics B》, SCIE EI CAS CSCD, 2010, No. 5, pp. 121-129.
In this study an adaptive arithmetic coder is embedded in the Baptista-type chaotic cryptosystem to implement secure data compression. To build the multiple lookup tables for secure data compression, the phase space of the chaotic map, uniformly distributed in search mode, is divided non-uniformly according to dynamic probability estimates of the plaintext symbols. More probable symbols are thus selected according to the local statistics of the plaintext, and the required number of iterations is small, since the more probable symbols have a higher chance of being visited by the chaotic search trajectory. By exploiting the non-uniform probabilities of the iteration counts to be coded, compression is achieved with an adaptive arithmetic code, so the system offers both compression and security. Simulation results on Calgary Corpus files show that, compared with original arithmetic coding, the proposed scheme loses less than 12% in compression performance and is not susceptible to previously published attacks on arithmetic coding algorithms.
Keywords: chaos, cryptography, compression, arithmetic coding
4. Quantitative Comparative Study of the Performance of Lossless Compression Methods Based on a Text Data Model
Authors: Namogo Silué, Sié Ouattara, Mouhamadou Dosso, Alain Clément. 《Open Journal of Applied Sciences》, 2024, No. 7, pp. 1944-1962.
Data compression plays a key role in optimizing the use of memory storage space and reducing latency in data transmission. This paper focuses on lossless compression techniques, whose performance is also exploited within lossy image and video compression through mixed approaches. To study the performance of lossless compression methods, we first carried out a literature review and selected the most relevant methods: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behaviour and effectiveness of the selected techniques. Thirdly, we implemented the compression algorithms as Matlab scripts and measured their performance. On this dataset, the methods rank, in decreasing order of performance: LZW, arithmetic coding, Tunstall's algorithm, and BWT + RLE. The performance of one technique relative to another is strongly linked both to the sequencing and recurrence of the symbols that make up the message and to the cumulative encoding and decoding time.
Keywords: arithmetic coding, BWT, compression ratio, comparative study, compression techniques, Shannon-Fano, Huffman, lossless compression, LZW, performance, redundancy, RLE, text data, Tunstall
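LZW, the best performer in this study on repetitive text, builds its phrase dictionary on the fly as it reads the input. A minimal sketch of the algorithm (not the authors' Matlab scripts):

```python
def lzw_encode(text):
    """LZW: greedily extend the current phrase while it is in the
    dictionary, then emit its index and register the extended phrase."""
    table = {chr(i): i for i in range(256)}
    phrase, codes = "", []
    for ch in text:
        if phrase + ch in table:
            phrase += ch
        else:
            codes.append(table[phrase])
            table[phrase + ch] = len(table)
            phrase = ch
    if phrase:
        codes.append(table[phrase])
    return codes

def lzw_decode(codes):
    """Rebuild the same dictionary symmetrically while decoding."""
    table = {i: chr(i) for i in range(256)}
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        # The only code that can be unknown is the KwKwK special case.
        entry = table[code] if code in table else prev + prev[0]
        out.append(entry)
        table[len(table)] = prev + entry[0]
        prev = entry
    return "".join(out)

sample = "ABABABAB" * 16  # deliberately repetitive, like the paper's dataset
codes = lzw_encode(sample)
assert lzw_decode(codes) == sample
print(len(sample), "chars ->", len(codes), "codes")
```

On such a repeating pattern the dictionary quickly accumulates long phrases, which is why LZW tops this particular benchmark.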
5. Reversible Natural Language Watermarking Using Synonym Substitution and Arithmetic Coding (cited: 6)
Authors: Lingyun Xiang, Yan Li, Wei Hao, Peng Yang, Xiaobo Shen. 《Computers, Materials & Continua》, SCIE EI, 2018, No. 6, pp. 541-559.
To protect the copyright of a text and recover its original content harmlessly, this paper proposes a novel reversible natural language watermarking method that combines arithmetic coding with synonym substitution. By analyzing the relative frequencies of synonymous words, the synonyms employed for carrying the payload are quantized into an unbalanced, redundant binary sequence, which is compressed losslessly by adaptive binary arithmetic coding to make room for additional data. The compressed data, appended with the watermark, are then embedded into the cover text via synonym substitutions in an invertible manner. On the receiver side, the watermark and compressed data are extracted by decoding the values of the synonyms in the watermarked text, after which the original text is perfectly recovered by decompressing the extracted data and restoring the replaced synonyms. Experimental results demonstrate that the proposed method extracts the watermark successfully, achieves lossless recovery of the original text, and offers a high embedding capacity.
Keywords: arithmetic coding, synonym substitution, lossless compression, reversible watermarking
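The core mechanism, letting the choice among synonyms carry bits, can be shown with a toy embed/extract pair. The synonym table and cover text below are invented for illustration, and the paper's frequency-based quantization and arithmetic-coding compression stage is deliberately omitted:

```python
# Toy synonym-substitution watermark: each synonym pair carries one bit
# (0 -> first word of the pair, 1 -> second). Word list is hypothetical.
PAIRS = [("big", "large"), ("quick", "fast"), ("begin", "start")]
LOOKUP = {w: (i, b) for i, pair in enumerate(PAIRS) for b, w in enumerate(pair)}

def embed(words, bits):
    """Replace each synonym-table word with the variant encoding the next bit."""
    out, it = [], iter(bits)
    for w in words:
        if w in LOOKUP:
            i, _ = LOOKUP[w]
            b = next(it, None)
            out.append(PAIRS[i][b] if b is not None else w)
        else:
            out.append(w)
    return out

def extract(words):
    """Read the payload back from which synonym variant appears."""
    return [LOOKUP[w][1] for w in words if w in LOOKUP]

cover = "the big dog made a quick move to begin".split()
marked = embed(cover, [1, 0, 1])
assert extract(marked) == [1, 0, 1]
print(" ".join(marked))
```

Reversibility in the actual paper comes from also embedding the compressed record of which substitutions were made, so the receiver can undo them; this sketch only shows the carrier channel itself.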
6. An adaptive pipelining scheme for H.264/AVC CABAC decoder (cited: 1)
Authors: 陈杰, Ding Dandan, Yu Lu. 《High Technology Letters》, EI CAS, 2013, No. 4, pp. 391-397.
An adaptive pipelining scheme for an H.264/AVC context-based adaptive binary arithmetic coding (CABAC) decoder for high-definition (HD) applications is proposed to solve the data hazards caused by data dependencies in the CABAC decoding process. An efficiency model of the CABAC decoding pipeline is derived from the analysis of a common pipeline, and several adaptive strategies are built on it. With these strategies, the pipeline adapts to different types of syntax elements (SEs) and does not stall during decoding. The proposed decoder fully supports the H.264/AVC High 4:2:2 profile, and experimental results show that its efficiency is much higher than that of other single-engine architectures. Taking both performance and cost into consideration, the design makes a good trade-off compared with other work and is sufficient for HD real-time decoding.
Keywords: H.264/AVC, context-based adaptive binary arithmetic coding (CABAC), adaptive, pipeline, data dependency, data hazard
7. An efficient adaptive arithmetic coding image compression technology
Authors: 王兴元, 云娇娇, 张永雷. 《Chinese Physics B》, SCIE EI CAS CSCD, 2011, No. 10, pp. 239-245.
This paper proposes an efficient lossless compression scheme for still images based on adaptive arithmetic coding. The algorithm increases the coding compression rate and preserves the quality of the decoded image by combining an adaptive probability model with predictive coding: an adaptive model for each encoded image block dynamically estimates that block's symbol probabilities, and the decoded block accurately recovers the encoded image from the codebook information. The results show that the scheme greatly improves the image compression rate and is an effective compression technology.
Keywords: arithmetic coding, adaptive, image compression
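The adaptive probability model described in this abstract reduces, in its simplest form, to a frequency table that is updated after every symbol, so encoder and decoder stay in sync without transmitting statistics. A minimal sketch (the paper's per-block prediction step is not reproduced):

```python
class AdaptiveModel:
    """Order-0 adaptive model: probabilities are running frequency
    estimates, updated after each symbol as an adaptive coder would."""

    def __init__(self, alphabet):
        # Start every count at 1 (Laplace smoothing) so no symbol
        # ever has probability zero.
        self.counts = {s: 1 for s in alphabet}

    def prob(self, symbol):
        return self.counts[symbol] / sum(self.counts.values())

    def update(self, symbol):
        self.counts[symbol] += 1

model = AdaptiveModel("01")
for bit in "0001000100":  # skewed source: mostly zeros
    model.update(bit)
print(f"P(0) after training: {model.prob('0'):.2f}")
```

As the skewed input accumulates, P(0) climbs toward the true source probability, and an arithmetic coder driven by this model spends correspondingly fewer bits on the frequent symbol.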
8. An Improved Arithmetic Coding Algorithm
Authors: 海梅, 张建军, 倪兴芳. 《Journal of Shanghai University (English Edition)》, CAS, 2004, No. 4, pp. 455-458.
Arithmetic coding is the most powerful technique for statistical lossless encoding and has attracted much attention in recent years. In this paper, we present a new implementation of bit-level arithmetic coding that uses only integer additions and shifts. The new algorithm has lower computational complexity and is more flexible to use, and is thus well suited to both software and hardware designs. We also discuss the application of the algorithm to data encryption.
Keywords: arithmetic coding, data encryption
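The paper's integer-addition-and-shift implementation is not reproduced here; the sketch below shows only the underlying interval-narrowing idea of arithmetic coding, in floating point, which is reliable just for short messages:

```python
def build_intervals(probs):
    """Map each symbol to its slice [low, high) of [0, 1)."""
    intervals, low = {}, 0.0
    for s, p in probs.items():
        intervals[s] = (low, low + p)
        low += p
    return intervals

def encode(msg, probs):
    """Narrow [0, 1) once per symbol; any number in the final interval
    identifies the whole message."""
    iv, low, high = build_intervals(probs), 0.0, 1.0
    for s in msg:
        lo, hi = iv[s]
        low, high = low + (high - low) * lo, low + (high - low) * hi
    return (low + high) / 2

def decode(code, n, probs):
    """Invert the narrowing: find the slice containing the code,
    emit its symbol, and rescale."""
    iv, out = build_intervals(probs), []
    for _ in range(n):
        for s, (lo, hi) in iv.items():
            if lo <= code < hi:
                out.append(s)
                code = (code - lo) / (hi - lo)
                break
    return "".join(out)

probs = {"a": 0.6, "b": 0.3, "c": 0.1}
msg = "abacab"
code = encode(msg, probs)
assert decode(code, len(msg), probs) == msg
print(f"'{msg}' -> {code:.10f}")
```

Practical coders, like the one in this paper, replace the floats with fixed-width integers and renormalize with shifts so that precision never runs out, which is what makes the bit-level integer formulation attractive for hardware.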
9. Medical Image Compression Using Wrapping Based Fast Discrete Curvelet Transform and Arithmetic Coding
Authors: P. Anandan, R. S. Sabeenian. 《Circuits and Systems》, 2016, No. 8, pp. 2059-2069.
The development of CT (Computed Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), EBCT (Electron Beam Computed Tomography), SMRI (Stereotactic Magnetic Resonance Imaging), etc. has enhanced the distinguishing rate and scanning rate of imaging equipment. Diagnoses and other useful information are obtained by processing medical images with wavelet techniques, and the wavelet transform has increased compression rates; still, minimizing the amount of data in medical images to improve compression performance remains a critical task, and several techniques have been developed for both lossy and lossless image compression. Extensions of the 1-D wavelet transform have limitations in capturing image edges: the wavelet transform cannot effectively represent straight-line discontinuities, and geographic lines in natural images cannot be reconstructed properly with a 1-D transform. Differently oriented image textures are coded well by the Curvelet Transform, which makes it suitable for compressing medical images with many curved features. This paper describes a method for compressing various medical images using the wrapping-based Fast Discrete Curvelet Transform. After transformation, the coefficients are quantized using vector quantization and coded with arithmetic encoding. The proposed method is tested on various medical images, and the results demonstrate significant improvement in performance parameters such as Peak Signal-to-Noise Ratio (PSNR) and Compression Ratio (CR).
Keywords: medical image compression, Discrete Curvelet Transform, Fast Discrete Curvelet Transform, arithmetic coding, Peak Signal-to-Noise Ratio, compression ratio
10. Spatio-Temporal Context-Guided Algorithm for Lossless Point Cloud Geometry Compression
Authors: ZHANG Huiran, DONG Zhen, WANG Mingsheng. 《ZTE Communications》, 2023, No. 4, pp. 17-28.
Point cloud compression is critical to deploying 3D representations of the physical world, such as 3D immersive telepresence, autonomous driving, and cultural heritage preservation. However, point cloud data are distributed irregularly and discontinuously in the spatial and temporal domains, where redundant unoccupied voxels and weak correlations in 3D space make efficient compression challenging. In this paper, we propose a spatio-temporal context-guided algorithm for lossless point cloud geometry compression. The proposed scheme starts by dividing the point cloud into sliced layers of unit thickness along the longest axis. It then introduces a prediction method, applicable where both intra-frame and inter-frame point clouds are available, that determines correspondences between adjacent layers and estimates the shortest path using a travelling salesman algorithm. Finally, the small prediction residual is compressed efficiently with optimal context-guided, adaptive fast-mode arithmetic coding. Experiments show that the proposed method achieves low-bit-rate lossless compression of point cloud geometry and is suitable for various types of scenes.
Keywords: point cloud geometry compression, single-frame point clouds, multi-frame point clouds, predictive coding, arithmetic coding
11. An Optimal Lempel Ziv Markov Based Microarray Image Compression Algorithm (cited: 1)
Authors: R. Sowmyalakshmi, Mohamed Ibrahim Waly, Mohamed Yacin Sikkandar, T. Jayasankar, Sayed Sayeed Ahmad, Rashmi Rani, Suresh Chavhan. 《Computers, Materials & Continua》, SCIE EI, 2021, No. 11, pp. 2245-2260.
In recent years, microarray technology has gained attention for the concurrent monitoring of numerous microarray images, and processing, storing and transmitting such huge volumes of images remains a major challenge. Image compression techniques reduce the number of bits so that the images can be stored and shared easily. This paper presents a novel lossless image compression technique, optimized Linde-Buzo-Gray (OLBG) with Lempel-Ziv-Markov-chain Algorithm (LZMA) coding, called OLBG-LZMA, for compressing microarray images without loss of quality. The LBG model is generally used to design a locally optimal codebook for image compression; here, codebook construction is treated as an optimization problem and solved with the Grey Wolf Optimization (GWO) algorithm. Once the codebook is constructed by the LBG-GWO algorithm, LZMA is employed to compress the index table and raise compression efficiency further. Experiments were performed on a high-resolution Tissue Microarray (TMA) image dataset of 50 prostate tissue samples collected from prostate cancer patients. The compression performance of the proposed coding was compared with recently proposed techniques, and the simulation results show that OLBG-LZMA achieves significantly better compression than the others.
Keywords: arithmetic coding, dictionary-based coding, Lempel-Ziv-Markov chain algorithm, Lempel-Ziv-Welch coding, tissue microarray
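The final LZMA stage can be exercised directly with Python's standard library, which wraps the same Lempel-Ziv-Markov chain algorithm. The index table below is synthetic, standing in for the quantized codebook indices the paper compresses:

```python
import lzma

# Synthetic codebook index table: quantized image blocks tend to reuse
# a small set of indices, which LZMA's match-finding exploits well.
index_table = bytes([i % 16 for i in range(4096)])

compressed = lzma.compress(index_table, preset=9)
assert lzma.decompress(compressed) == index_table
ratio = len(index_table) / len(compressed)
print(f"{len(index_table)} B -> {len(compressed)} B (ratio {ratio:.1f}:1)")
```

The more clustered the codebook indices are after vector quantization, the better this dictionary stage performs, which is why the paper optimizes the codebook before handing the index table to LZMA.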
12. Joint Source-Channel Decoding Scheme for Image Transmission over Wireless Channel
Authors: Xiao Dong-liang, Sun Hong. 《Wuhan University Journal of Natural Sciences》, CAS, 2002, No. 3, pp. 307-312.
We improve the iterative decoding algorithm by utilizing the "leaked" residual redundancy at the output of the source encoder, without changing the encoder structure for the noisy channel. The experimental results show that using the residual redundancy of the compressed source in channel decoding is an effective way to improve error-correction performance.
Keywords: turbo code, joint source-channel decoding, residual redundancy, arithmetic coding
13. Contribution to S-EMG Signal Compression in 1D by the Combination of the Modified Discrete Wavelet Packet Transform (MDWPT) and the Discrete Cosine Transform (DCT)
Authors: Colince Welba, Aimé Joseph Oyobé Okassa, Pascal Ntsama Eloundou, Pierre Ele. 《Journal of Signal and Information Processing》, 2020, No. 3, pp. 35-57.
A new method based on the Modified Discrete Wavelet Packet Transform (MDWPT) is presented for compressing surface EMG (s-EMG) signal data. The MDWPT is applied to the digitized s-EMG signal, and a Discrete Cosine Transform (DCT) is applied to the MDWPT detail coefficients only. The MDWPT + DCT coefficients are quantized with a Uniform Scalar Dead-Zone Quantizer (USDZQ), and an arithmetic coder performs the entropy coding of the symbol streams. The proposed approach was tested on more than 35 actual s-EMG signals divided into three categories and evaluated with the following parameters: Compression Factor (CF), Signal-to-Noise Ratio (SNR), Percent Root-mean-square Difference (PRD), Mean Frequency Distortion (MFD) and Mean Square Error (MSE). Simulation results show that the proposed coding algorithm outperforms some recently developed s-EMG compression algorithms.
Keywords: s-EMG compression, MDWPT, DCT, arithmetic coding, Uniform Scalar Dead-Zone Quantizer (USDZQ)
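The USDZQ step named in this abstract maps small transform coefficients to zero (the dead zone) and the rest to uniform bins, which concentrates the symbol distribution and helps the arithmetic coder. A minimal sketch (the step size is an illustrative assumption, not a value from the paper):

```python
import math

def usdzq_quantize(x, step):
    """Uniform scalar dead-zone quantizer: |x| < step maps to 0,
    everything else to sign(x) * floor(|x| / step)."""
    q = int(abs(x) // step)
    return q if x >= 0 else -q

def usdzq_dequantize(q, step):
    """Reconstruct at the centre of the selected bin (0 stays 0)."""
    if q == 0:
        return 0.0
    return math.copysign((abs(q) + 0.5) * step, q)

step = 4.0  # illustrative step size
coeffs = [0.3, -1.9, 5.2, -13.7, 40.0]
quantized = [usdzq_quantize(c, step) for c in coeffs]
print(quantized)  # small coefficients collapse into the dead zone
print([usdzq_dequantize(q, step) for q in quantized])
```

The widened zero bin is what trades a little reconstruction error for many zero symbols, and runs of zeros are exactly what the downstream entropy coder compresses best.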
14. High throughput VLSI architecture for H.264/AVC context-based adaptive binary arithmetic coding (CABAC) decoding (cited: 1)
Authors: Kai HUANG, De MA, Rong-jie YAN, Hai-tong GE, Xiao-lang YAN. 《Journal of Zhejiang University-Science C (Computers and Electronics)》, SCIE EI, 2013, No. 6, pp. 449-463.
Context-based adaptive binary arithmetic coding (CABAC) is the major entropy-coding algorithm employed in H.264/AVC. In this paper, we present a new VLSI architecture for an H.264/AVC CABAC decoder that optimizes both the decode-decision and decode-bypass engines for high throughput and improves context-model allocation for efficient external memory access. Based on the fact that the most probable symbol (MPS) branch is much simpler than the least probable symbol (LPS) branch, a newly organized decode-decision engine consisting of two serially concatenated MPS branches and one LPS branch achieves better parallelism at lower timing-path cost. A look-ahead context index (ctxIdx) calculation mechanism provides the context model for the second MPS branch, and a head-zero detector improves the performance of the decode-bypass engine according to the UEGk encoding features. In addition, to lower the frequency of memory access, we reorganize the context models in external memory and use three circular buffers to cache the context models, neighbouring information, and bit stream, respectively; a pre-fetching mechanism with a prediction scheme loads the corresponding content into a circular buffer to hide external memory latency. Experimental results show that the design operates at 250 MHz with a 20.71k gate count in SMIC18 silicon technology and achieves an average decoding rate of 1.5 bins/cycle.
Keywords: H.264/AVC, context-based adaptive binary arithmetic coding (CABAC), decoder, VLSI
15. Design of new format for mass data compression (cited: 2)
Authors: QIN Jian-cheng, BAI Zhong-ying. 《The Journal of China Universities of Posts and Telecommunications》, EI CSCD, 2011, No. 1, pp. 121-128.
In the field of lossless compression, most traditional software falls short when facing mass data: its compression ability is limited by the data window size and by the design of the compression format. This paper presents a new compression format named 'CZ format', which supports a data window of up to 4 GB and has advantages for mass data compression. Using this format, a compression shareware named 'ComZip' was designed. The experimental results show that ComZip achieves a better compression ratio than WinZip, Bzip2 and WinRAR in most cases, and that ComZip has the potential to beat 7-zip in the future as the data window size exceeds 128 MB, especially when GBs or TBs of mass data are compressed.
Keywords: mass data coding, lossless compression, LZ77/LZSS algorithm, arithmetic coding
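The window-size effect this abstract turns on can be seen in miniature with zlib, whose DEFLATE window is capped at 32 KB (far below the CZ format's 4 GB, so this only illustrates the principle): a repeated block is found only if it lies within the window.

```python
import os
import zlib

def deflate_size(data, wbits):
    """Compress with a DEFLATE sliding window of 2**wbits bytes."""
    co = zlib.compressobj(9, zlib.DEFLATED, wbits)
    return len(co.compress(data) + co.flush())

# A 4 KB random block repeated after a 16 KB gap of other random data:
# the match lies outside a 512 B window but inside a 32 KB one.
block = os.urandom(4096)
data = block + os.urandom(16384) + block

small = deflate_size(data, 9)   # 512 B window: cannot see the repeat
large = deflate_size(data, 15)  # 32 KB window: the repeat is found
print(f"512 B window: {small} B, 32 KB window: {large} B")
assert large < small
```

Scaling this idea up is exactly the paper's argument: with a multi-GB window, long-range duplicates in mass data become reachable matches instead of incompressible noise.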