Data compression plays a key role in optimizing the use of memory storage space and in reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is also exploited by lossy compression techniques for images and videos, which generally use a mixed approach. To achieve our objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant techniques, namely: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on this deliberately constructed data, the results show that these methods, listed in order of performance, are very satisfactory: LZW, arithmetic coding, the Tunstall algorithm, and BWT + RLE. Likewise, it appears that the performance of certain techniques relative to others is strongly linked, on the one hand, to the sequencing and/or recurrence of symbols that make up the message and, on the other hand, to the cumulative time of encoding and decoding.
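For a concrete sense of why LZW does so well on repeating patterns, here is a minimal Python sketch of an LZW encoder (an editorial illustration, not the authors' MATLAB scripts; the function name and test string are ours):

```python
# Minimal LZW encoder sketch: emits a list of integer codes; repeating
# patterns collapse into single dictionary hits.
def lzw_encode(text: str) -> list[int]:
    table = {chr(i): i for i in range(256)}   # single-character codes 0..255
    next_code = 256
    w, out = "", []
    for c in text:
        wc = w + c
        if wc in table:
            w = wc                      # grow the current match
        else:
            out.append(table[w])        # emit code for the longest match
            table[wc] = next_code       # learn the new pattern
            next_code += 1
            w = c
    if w:
        out.append(table[w])
    return out

data = "ababab" * 10
codes = lzw_encode(data)
print(len(codes), "codes for", len(data), "chars")  # far fewer codes than chars
```

On the repeating input, most emitted codes stand for multi-character dictionary entries, which is exactly the behavior the paper's repeating-pattern dataset is designed to expose.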
Test data compression and test resource partitioning (TRP) are essential to reduce the amount of test data in system-on-chip testing. A novel variable-to-variable-length compression code is designed, called advanced frequency-directed run-length (AFDR) coding. Unlike frequency-directed run-length (FDR) codes, AFDR encodes both 0-runs and 1-runs and assigns the same codewords to runs of equal length. It also modifies the codewords for 00 and 11 to improve compression performance. Experimental results on the ISCAS 89 benchmark circuits show that AFDR codes achieve a higher compression ratio than FDR and other compression codes.
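As background, FDR-style codes operate on runs of identical bits; below is a hedged Python sketch of the run extraction step that both FDR and AFDR build on (the actual AFDR codeword tables are in the paper and are not reproduced here):

```python
# Run extraction for variable-to-variable run-length codes: AFDR-style
# codes assign codewords to these (bit, length) pairs, covering 0-runs
# and 1-runs alike.
from itertools import groupby

def runs(bits: str) -> list[tuple[str, int]]:
    # "0001100" -> [('0', 3), ('1', 2), ('0', 2)]
    return [(b, len(list(g))) for b, g in groupby(bits)]

print(runs("000000110000001"))  # [('0', 6), ('1', 2), ('0', 6), ('1', 1)]
```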
This paper presents a new test data compression/decompression method for SoC testing, called hybrid run-length codes. The method makes a full analysis of the factors that influence the test parameters: compression ratio, test application time, and area overhead. To improve the compression ratio, the new method is based on variable-to-variable run-length codes, and a novel algorithm is proposed to reorder the test vectors and fill the unspecified bits in the pre-processing step. With a novel on-chip decoder, low test application time and low area overhead are obtained by the hybrid run-length codes. Finally, an experimental comparison on the ISCAS 89 benchmark circuits validates the proposed method.
An edge-oriented image sequence coding scheme is presented. On the basis of edge detection, an image can be divided into a sensitized region and a smooth region. In this scheme, the architecture of the sensitized region is approximated with linear segments; a rectangle belt is then constructed for each segment, and the gray-value distribution in the region is finally fitted by normal-form polynomials. Model matching and motion analysis are also based on the architecture of the sensitized region. For the smooth region, run-length scanning and linear approximation are used. By means of normal-form polynomial fitting and motion prediction by matching, the images are compressed. Simulations show that the subjective quality of the reconstructed picture is excellent at 0.0075 bit per pel.
Point cloud compression is critical to deploying 3D representations of the physical world such as 3D immersive telepresence, autonomous driving, and cultural heritage preservation. However, point cloud data are distributed irregularly and discontinuously in the spatial and temporal domains, where redundant unoccupied voxels and weak correlations in 3D space make efficient compression a challenging problem. In this paper, we propose a spatio-temporal context-guided algorithm for lossless point cloud geometry compression. The proposed scheme starts by dividing the point cloud into sliced layers of unit thickness along the longest axis. Then, it introduces a prediction method in which both intra-frame and inter-frame point clouds are available, by determining correspondences between adjacent layers and estimating the shortest path using the travelling salesman algorithm. Finally, the small prediction residuals are efficiently compressed with optimal context-guided and adaptive fast-mode arithmetic coding techniques. Experiments show that the proposed method effectively achieves low-bit-rate lossless compression of point cloud geometry and is suitable for 3D point cloud compression in various types of scenes.
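The travelling-salesman step can be approximated cheaply; below is a greedy nearest-neighbour sketch of such an ordering over the points of a slice (illustrative only: the abstract does not specify which TSP heuristic the authors use, and the 2D points here are made up):

```python
# Greedy nearest-neighbour tour: starting from point 0, always visit the
# closest unvisited point next, yielding a short path through the layer.
import math

def nn_tour(points: list[tuple[float, float]]) -> list[int]:
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(nn_tour([(0, 0), (5, 0), (1, 0), (2, 1)]))  # -> [0, 2, 3, 1]
```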
In this paper, the second-generation wavelet transform is applied to lossless image coding, exploiting its property of being a reversible integer wavelet transform. Compared with the first-generation wavelet transform, the second-generation wavelet transform can provide a higher compression ratio than Huffman coding while reconstructing the image without loss. The experimental results show that the second-generation wavelet transform achieves excellent performance in medical image compression coding.
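The key property the paper relies on is that a lifting-based (second-generation) wavelet maps integers to integers reversibly. A minimal one-level integer Haar/S-transform sketch in Python shows this (our own illustration; the paper's actual filter is not specified in the abstract):

```python
# Integer lifting (S-transform): forward then inverse reproduces the input
# exactly, which is what makes lossless coding possible.
def forward(x):                        # len(x) must be even
    d = [x[2*i+1] - x[2*i] for i in range(len(x) // 2)]      # detail
    s = [x[2*i] + (d[i] >> 1) for i in range(len(x) // 2)]   # approximation
    return s, d

def inverse(s, d):
    x = []
    for si, di in zip(s, d):
        even = si - (di >> 1)          # undo the update step
        x += [even, even + di]         # undo the predict step
    return x

sig = [5, 7, 3, 2, 10, 10, 1, 0]
s, d = forward(sig)
assert inverse(s, d) == sig            # perfectly reversible in integers
```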
Blockchain technology has witnessed burgeoning integration into diverse realms of economic and societal development. Nevertheless, scalability challenges, characterized by diminished broadcast efficiency, heightened communication overhead, and escalated storage costs, have significantly constrained the broad-scale application of blockchain. This paper introduces a novel Encode-and-CRT-based Scalability Scheme (ECSS), meticulously refined to enhance both block broadcasting and storage. First, ECSS categorizes nodes into distinct domains, thereby reducing the network diameter and augmenting transmission efficiency. Second, ECSS streamlines block transmission through a compact block protocol and robust RS coding, which not only reduces the size of broadcast blocks but also ensures transmission reliability. Finally, ECSS utilizes the Chinese remainder theorem, designating the block body as the compression target and mapping it to multiple moduli to achieve efficient storage, thereby alleviating the storage burden on nodes. To evaluate ECSS's performance, we established an experimental platform and conducted comprehensive assessments. Empirical results demonstrate that ECSS attains superior network scalability and stability, reducing communication overhead by an impressive 72% and total storage costs by a substantial 63.6%.
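To make the storage idea concrete, here is a toy Python sketch of CRT mapping and reconstruction (the moduli and the single-integer block encoding are illustrative assumptions, not ECSS's actual parameters):

```python
# Chinese remainder theorem storage sketch: the block body (here one
# integer) is mapped to residues under pairwise-coprime moduli, each node
# stores one small residue, and CRT reconstructs the original.
from math import prod

def crt(residues, moduli):
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse via pow (Py >= 3.8)
    return x % M

moduli = [251, 253, 255]               # pairwise coprime, product > block
block = 9_999_999
shares = [block % m for m in moduli]   # what each node would store
assert crt(shares, moduli) == block
```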
This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. The algorithm increases the image coding compression rate and ensures the quality of the decoded image by combining an adaptive probability model with predictive coding. Using an adaptive model for each encoded image block dynamically estimates the probability distribution of that block, and the decoded image block accurately recovers the encoded image according to the codebook information. Adopting adaptive arithmetic coding for image compression greatly improves the compression rate. The results show that it is an effective compression technique.
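A minimal sketch of the adaptive probability model that drives such a coder follows (our illustration: per-symbol counts updated after each symbol; the paper's block-wise predictor and the interval-coding loop itself are omitted):

```python
# Adaptive model for arithmetic coding: probabilities track the local
# statistics of the data as counts are updated symbol by symbol.
class AdaptiveModel:
    def __init__(self, nsymbols: int):
        self.counts = [1] * nsymbols    # Laplace smoothing: never zero

    def prob(self, s: int) -> float:
        return self.counts[s] / sum(self.counts)

    def update(self, s: int) -> None:
        self.counts[s] += 1             # adapt after (de)coding symbol s

m = AdaptiveModel(256)
for s in [7, 7, 7, 200]:
    p = m.prob(s)                       # encoder narrows its interval by p
    m.update(s)
print(round(m.prob(7), 3))              # symbol 7 is now much more probable
```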
An efficient compression coding system for motion images is presented. A macro-block matching technique based on the correlation between motion vectors is applied in the system. This technique effectively improves the accuracy and speed of motion estimation as well as the compression ratio. A compression ratio of 29 is attainable without visible image degradation.
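For context, the core operation any macro-block matcher accelerates is the sum-of-absolute-differences (SAD) search; a brute-force reference sketch in Python/NumPy is shown below (illustrative only, not the paper's correlation-guided fast search):

```python
# Exhaustive SAD block search in a (2*search+1)^2 window around (y0, x0);
# fast matchers like the paper's prune this search using motion-vector
# correlation, but the cost function is the same.
import numpy as np

def best_match(block, ref, y0, x0, search=4):
    h, w = block.shape
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = ref[y0 + dy:y0 + dy + h, x0 + dx:x0 + dx + w]
            sad = int(np.abs(block.astype(int) - cand.astype(int)).sum())
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
block = ref[10:18, 9:17]                # true displacement (2, 1) from (8, 8)
print(best_match(block, ref, 8, 8))     # -> ((2, 1), 0)
```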
First, a simple and practical rectangular transform is given, and then the rapidly developing vector quantization technique is introduced. We combine the rectangular transform with vector quantization for image data compression. The combination cuts down the dimensions of vector coding, so the size of the codebook can reasonably be reduced. This method reduces the computational complexity and speeds up the vector coding process. Experiments on an image processing system show that this method is very effective in the field of image data compression.
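A minimal sketch of the vector-quantization encoding step follows (the random codebook and the dimensions are illustrative assumptions; in the paper the codebook is built on rectangular-transform coefficients of reduced dimension):

```python
# Vector quantization: each vector is replaced by the index of its nearest
# codeword, so only indices are transmitted; the decoder is a table lookup.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.integers(0, 256, (16, 4)).astype(float)  # 16 codewords, dim 4

def vq_encode(vectors: np.ndarray) -> np.ndarray:
    # squared distances: (n_vectors, n_codewords) -> nearest-codeword index
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

vecs = rng.integers(0, 256, (8, 4)).astype(float)
indices = vq_encode(vecs)              # 8 indices, 4 bits each
reconstructed = codebook[indices]      # decoder side: table lookup
print(indices)
```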
To compress screen image sequences in real-time remote and interactive applications, a novel compression method named CABHG is proposed. CABHG employs hybrid coding schemes that consist of intra-frame and inter-frame coding modes. The intra-frame coding uses a rate-distortion-optimized adaptive block size and can also be used for the compression of a single screen image. The inter-frame coding utilizes a hierarchical group-of-pictures (GOP) structure to improve system performance during random accesses and fast-backward scans. Experimental results demonstrate that the proposed CABHG method has an approximately 47%-48% higher compression ratio and 46%-53% lower CPU utilization than professional screen image sequence codecs such as the TechSmith Ensharpen codec and the Sorenson 3 codec. Compared with general video codecs such as the H.264 codec, the XviD MPEG-4 codec and Apple's Animation codec, CABHG also shows an 87%-88% higher compression ratio and 64%-81% lower CPU utilization.
In this paper, we present a method that uses video codec technology to compress ECG signals. The method exploits both intra-beat and inter-beat correlations of the ECG signals to achieve high compression ratios (CR) and a low percent root mean square difference (PRD). Since ECG signals have both intra-beat and inter-beat redundancies, much like video signals have both intra-frame and inter-frame correlation, video codec technology can be used for ECG compression. Some pre-processing is needed: the ECG signals are first segmented and normalized to a sequence of beat cycles of the same length, and these beat cycles are then treated as picture frames and compressed with video codec technology. We used records from the MIT-BIH arrhythmia database to evaluate our algorithm. Results show that, besides compressing efficiently, this algorithm has the advantages of adjustable resolution, random access, and flexibility with respect to irregular beat periods and false QRS detection.
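The pre-processing the authors describe amounts to resampling each beat to a fixed length; a short Python sketch of that normalization is given below (beat/R-peak detection is assumed already done, and the target length of 256 is our choice, not necessarily the paper's):

```python
# Normalize variable-length beat cycles to a fixed length so they can be
# stacked like video frames; np.interp does the resampling.
import numpy as np

def normalize_beat(beat: np.ndarray, target_len: int = 256) -> np.ndarray:
    src = np.linspace(0.0, 1.0, num=len(beat))
    dst = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(dst, src, beat)

beats = [np.sin(np.linspace(0, np.pi, n)) for n in (180, 201, 230)]
frames = np.stack([normalize_beat(b) for b in beats])
print(frames.shape)                    # (3, 256): three same-length "frames"
```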
In this paper, a 3-D wavelet-fractal coder, a combination of 3-D improved set partitioning in hierarchical trees (SPIHT) coding and 3-D fractal coding, is used to compress hyperspectral remote sensing images. The hyperspectral image data cube is first transformed by a 3-D wavelet, and 3-D fractal compression coding is applied to the lowest-frequency subband. The remaining coefficients of the higher-frequency subbands are encoded by 3-D improved SPIHT. We use block sets instead of hierarchical trees to enhance SPIHT's flexibility. The classical eight affine transformations of 2-D fractal image compression are generalized to nineteen for 3-D fractal image compression. The new compression method has been tested in MATLAB. The experimental results indicate that high compression ratios can be obtained with acceptable information loss.
To improve the performance of video compression for machine vision analysis tasks, a video coding for machines (VCM) standard working group was established to promote standardization procedures. In this paper, recent advances in video coding for machines standards are presented, and comprehensive introductions to the use cases, requirements, evaluation frameworks and corresponding metrics of the VCM standard are given. The existing methods are then presented, introducing the existing proposals by category together with the research progress of the latest VCM conference. Finally, conclusions are given.
In this paper, a CMOS image sensor (CIS) is proposed that can accomplish both the decorrelation and the entropy coding of image compression directly on the focal plane. The design is based on predictive coding for image decorrelation. The predictions are performed in the analog domain by 2×2 pixel units. Both the prediction residuals and the original pixel values are quantized and encoded in parallel. Since the residuals have a distribution peaked around zero, the output codewords can be replaced by the valid part of the residuals' binary representation. The compressed bit stream is accessible directly at the output of the CIS without extra processing. Simulation results show that the proposed approach achieves a compression rate of 2.2 and a PSNR of 51 dB on different test images.
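A software sketch of 2×2-unit predictive decorrelation is given below to illustrate why the residuals peak near zero (the exact on-sensor predictor is not specified in the abstract, so the keep-one-pixel-and-difference-the-rest predictor here is an assumption):

```python
# Within each 2x2 unit, keep the top-left pixel verbatim and replace the
# other three with differences against it (assumed predictor); on smooth
# images the differences cluster near zero.
import numpy as np

def residuals_2x2(img: np.ndarray) -> np.ndarray:
    out = img.astype(int)              # astype returns a fresh array
    for y in range(0, img.shape[0], 2):
        for x in range(0, img.shape[1], 2):
            anchor = int(img[y, x])
            out[y, x + 1] -= anchor
            out[y + 1, x] -= anchor
            out[y + 1, x + 1] -= anchor
    return out

img = np.tile(np.arange(8, dtype=np.uint8) * 10, (8, 1))   # smooth gradient
res = residuals_2x2(img)
print(int(np.abs(res[0::2, 1::2]).max()))   # residuals stay small (10 here)
```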
A novel compression method for video teleconference applications is presented. Semantic-based coding based on human image features is realized, where human features are adopted as parameters. Model-based coding and the concept of vector coding are combined with work on image feature extraction to obtain the result.
This paper presents a new method of lossless image compression. An image is characterized by homogeneous parts. The high-weight bit planes, which consist of long successive runs of 0s and 1s, are encoded with RLE, whereas the other bit planes are encoded by arithmetic coding (AC) with a static or adaptive model. By combining an AC (adaptive or static) with RLE, a high degree of adaptation and compression efficiency is achieved. The proposed method is compared to both the static and the adaptive model. Experimental results, based on a set of 12 gray-level images, demonstrate that the proposed scheme gives mean compression ratios higher than those of conventional arithmetic encoders.
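A short Python sketch of the bit-plane split underlying this hybrid scheme follows (the smooth-ramp test image is ours; it makes the contrast between the run-friendly high planes and the remaining planes easy to see):

```python
# Bit-plane decomposition: plane k holds bit k of every pixel. High-weight
# planes of a smooth image are long runs of identical bits (good for RLE);
# the remaining planes are better left to arithmetic coding.
import numpy as np

def bit_planes(img: np.ndarray) -> list[np.ndarray]:
    return [(img >> k) & 1 for k in range(8)]   # planes[0] is the LSB

img = np.tile(np.arange(16, dtype=np.uint8) * 16, (16, 1))  # smooth ramp
planes = bit_planes(img)
for k in (7, 0):
    breaks = int(np.count_nonzero(np.diff(planes[k].ravel().astype(int))))
    print(f"plane {k}: {breaks} run breaks")    # high plane: few, long runs
```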
An irregular segmented region coding algorithm based on a pulse-coupled neural network (PCNN) is presented. A PCNN has the properties of pulse coupling and a changeable threshold, through which adjacent pixels with approximately equal gray values can be activated simultaneously. One can conclude that the PCNN has the advantage of realizing regional segmentation; the details of the original image can be recovered by parameter adjustment of the segmented images, while trivial segmented regions are avoided. For a better approximation of the irregular segmented regions, the Gram-Schmidt method, by which a group of orthonormal basis functions is constructed from a group of linearly independent initial base functions, is adopted. Because of the orthonormal reconstruction method, the quality of the reconstructed image can be greatly improved, and progressive image transmission also becomes possible.
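Since the orthonormalization step is standard, a compact Python sketch of Gram-Schmidt over sampled base functions may help (the monomial initial functions are an illustrative choice, not necessarily the paper's):

```python
# Gram-Schmidt: turns linearly independent base functions (sampled as
# vectors) into an orthonormal basis, which is what makes the region
# reconstruction numerically well-behaved.
import numpy as np

def gram_schmidt(vectors: list[np.ndarray]) -> list[np.ndarray]:
    basis: list[np.ndarray] = []
    for v in vectors:
        w = v.astype(float)
        for q in basis:
            w -= (w @ q) * q           # remove components along the basis
        w /= np.linalg.norm(w)         # normalize (assumes independence)
        basis.append(w)
    return basis

# Monomials 1, t, t^2 sampled on [0, 1] as initial base functions.
t = np.linspace(0, 1, 100)
q = gram_schmidt([np.ones_like(t), t, t**2])
print(round(q[0] @ q[1], 12), round(q[1] @ q[1], 12))  # ~0 and 1
```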
In this paper, we propose a three-dimensional Set Partitioned Embedded ZeroBlock Coding (3D SPEZBC) lossy-to-lossless compression algorithm for hyperspectral images, which improves on the three-dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm. The algorithm adopts the 3D integer wavelet packet transform proposed by Xiong et al. for decorrelation, set-based partitioning zeroblock coding for bit-plane coding, and context-based adaptive arithmetic coding for further entropy coding. Theoretical analysis and experimental results demonstrate that 3D SPEZBC not only provides the same excellent compression performance as 3D EZBC, but also reduces the memory requirement compared with 3D EZBC. To achieve good coding performance, diverse wavelet filters and unitary scaling factors are compared and evaluated, and the best choices are given. In comparison with several state-of-the-art wavelet coding algorithms, the proposed algorithm provides better compression performance and unsupervised classification accuracy.
Test vector compression is a key technique to reduce IC test time and cost, given the explosion of test data for systems on chip (SoC) in recent years. To effectively reduce the bandwidth requirement between the automatic test equipment (ATE) and the circuit under test (CUT), a novel VSPTIDR (variable shifting prefix-tail identifier reverse) code for test stimulus data compression is designed. The encoding scheme is defined and analyzed in detail, and the decoder is presented and discussed. When the probability of 0 bits in the test set is greater than 0.92, the compression ratio of the VSPTIDR code is better than that of the frequency-directed run-length (FDR) code, as shown by theoretical analysis and experiments. The on-chip area overhead of the VSPTIDR decoder is about 15.75% less than that of the FDR decoder.