Data compression plays a key role in optimizing the use of memory storage space and in reducing latency in data transmission. In this paper, we are interested in lossless compression techniques, because their performance is also exploited by lossy compression techniques for images and videos, which generally use a mixed approach. To achieve our objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant techniques, namely: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding, and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, the tests conducted on this deliberately constructed dataset show that the following methods, listed in order of performance, are very satisfactory: LZW, arithmetic coding, Tunstall's algorithm, and BWT + RLE. Likewise, it appears that the performance of certain techniques relative to others is strongly linked, on the one hand, to the sequencing and/or recurrence of the symbols that make up the message and, on the other hand, to the cumulative encoding and decoding time.
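As an illustration of why LZW performs well on text with a repeating pattern (it folds recurring symbol sequences into single dictionary codes), here is a minimal LZW codec sketch. This is a generic textbook implementation in Python, not the Matlab scripts used in the study; all names are my own.

```python
def lzw_compress(data: str) -> list:
    # Dictionary starts with all single-byte symbols.
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    w, out = "", []
    for c in data:
        wc = w + c
        if wc in dictionary:
            w = wc                      # extend the current phrase
        else:
            out.append(dictionary[w])   # emit code for longest known phrase
            dictionary[wc] = next_code  # learn the new phrase
            next_code += 1
            w = c
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes: list) -> str:
    dictionary = {i: chr(i) for i in range(256)}
    next_code = 256
    w = chr(codes[0])
    out = [w]
    for k in codes[1:]:
        if k in dictionary:
            entry = dictionary[k]
        elif k == next_code:            # the classic cScSc corner case
            entry = w + w[0]
        else:
            raise ValueError("invalid LZW code")
        out.append(entry)
        dictionary[next_code] = w + entry[0]
        next_code += 1
        w = entry
    return "".join(out)
```

On a string with a repeated motif such as "TOBEORNOTTOBEORTOBEORNOT", the code list is already shorter than the input, and the gain grows with each repetition of the pattern.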
In Ethernet lossless Data Center Networks (DCNs) deployed with Priority-based Flow Control (PFC), the head-of-line blocking problem is still difficult to prevent, because PFC is triggered under burst traffic scenarios even with the existing congestion control solutions. To address the head-of-line blocking problem of PFC, we propose a new congestion control mechanism. The key point of Congestion Control Using In-Network Telemetry for Lossless Datacenters (ICC) is to use In-Network Telemetry (INT) technology to obtain comprehensive congestion information, which is then fed back to the sender to adjust the sending rate timely and accurately. With ICC it is possible to control congestion in time, converge to the target rate quickly, and maintain a near-zero queue length at the switch. We conducted Network Simulator-3 (NS-3) simulation experiments to test ICC's performance. Compared to Congestion Control for Large-Scale RDMA Deployments (DCQCN), TIMELY: RTT-based Congestion Control for the Datacenter (TIMELY), and Re-architecting Congestion Management in Lossless Ethernet (PCN), ICC effectively reduces PFC pause messages by 47%, 56%, and 34%, and Flow Completion Time (FCT) by 15.3×, 14.8×, and 11.2×, respectively.
This paper proposes a lossless, high-payload data hiding scheme for JPEG images based on histogram modification. Most of a JPEG bitstream consists of a sequence of VLCs (variable length codes) and their appended bits. Each VLC has a corresponding RLV (run/length value) to record the AC/DC coefficients. To achieve lossless data hiding with high payload, we shift the histogram of VLCs and modify the DHT segment to embed data. Since we sort the histogram of VLCs in descending order, the filesize expansion is limited. The paper's key contributions are lossless data hiding, less filesize expansion at identical payload, and higher embedding efficiency.
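The paper shifts a histogram of VLCs, but the underlying histogram-shifting idea is easiest to see on an ordinary value sequence. The sketch below is a generic, illustrative histogram-shifting embedder and extractor (peak bin p, first empty bin z to its right), not the authors' VLC/DHT-based scheme; the function names are hypothetical.

```python
import numpy as np

def hs_embed(vals: np.ndarray, bits: list):
    # Generic histogram shifting: p = peak bin, z = first empty bin
    # to the right of p. Values in (p, z) shift up by one to free
    # bin p+1, then each peak-valued sample carries one payload bit.
    hist = np.bincount(vals, minlength=256)
    p = int(hist.argmax())
    zeros = np.flatnonzero(hist == 0)
    z = int(zeros[zeros > p][0])
    out = vals.copy()
    out[(out > p) & (out < z)] += 1     # open a gap next to the peak
    it = iter(bits)
    for i, v in enumerate(out):
        if v == p:
            try:
                out[i] = p + next(it)   # bit 0 keeps p, bit 1 becomes p+1
            except StopIteration:
                break
    return out, p, z

def hs_extract(marked: np.ndarray, p: int, z: int):
    bits = [int(v == p + 1) for v in marked if v in (p, p + 1)]
    restored = marked.copy()
    restored[restored == p + 1] = p                      # undo embedding
    restored[(restored > p + 1) & (restored <= z)] -= 1  # undo the shift
    return bits, restored
```

Real schemes also transmit the payload length and the (p, z) pair; here the example simply embeds exactly as many bits as there are peak-valued samples, and recovery of the original values is exact, which is what makes the hiding lossless.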
The qualitative solutions of a dynamical system expressed by a nonlinear differential equation can be divided into two categories. In the first, the motion of the phase point eventually approaches infinity or a stable equilibrium point. Neither a periodic excitation source nor self-excited oscillation exists in such nonlinear dynamic circuits, so the solution cannot be treated as a synthesis of multiple harmonics. In the second, the endless motion of the phase point is confined within a certain range and possesses the character of sustained oscillation, namely bounded nonlinear oscillation. The system vibrates persistently and repeatedly after the dynamic variables enter the steady state; the motion of the phase point never approaches infinity, and the system has no stable equilibrium point. The motional trajectory can be described by a bounded space curve. So far, this curve cannot be represented by a concrete, explicit parametric form; it cannot be expressed analytically. Chaos is the most common form of bounded nonlinear oscillation. Chaotic systems such as the Lorenz equations, Chua's circuit, and modern lossless systems are a few examples among thousands of chaotic equations. In this work, the basic properties of the bounded space curve are comprehensively summarized by analyzing these examples.
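As a concrete illustration of a bounded trajectory that never settles onto an equilibrium, here is a short numerical integration of the Lorenz equations. Forward Euler is a crude but adequate scheme for illustration; the step size, step count, and initial condition are arbitrary choices of mine, not taken from the paper.

```python
import numpy as np

def lorenz_trajectory(n=5000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Forward-Euler integration of the Lorenz system from (1, 1, 1).
    # The trajectory stays bounded but keeps oscillating indefinitely.
    xyz = np.empty((n, 3))
    x, y, z = 1.0, 1.0, 1.0
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[i] = (x, y, z)
    return xyz
```

Plotting the three columns against each other reproduces the familiar butterfly-shaped attractor: the curve is bounded in space yet never closes into a periodic orbit, which is exactly the behavior the abstract describes.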
In this paper, we propose a novel image recompression framework and image quality assessment (IQA) method to efficiently recompress Internet images. With this framework, image size is significantly reduced without affecting the spatial resolution or perceptible quality of the image. With the help of IQA, the relationship between image quality and image evaluation scores can be quickly established, and the optimal quality factor can be obtained quickly and accurately within a pre-determined perceptual quality range. This process, applied to each input image, ensures the image's perceptual quality. The test results show that, using the proposed method, the file size of images can be reduced by about 45%-60% without affecting their visual quality. Moreover, our new image-recompression framework can be applied to many different application scenarios.
With the size of astronomical data archives continuing to increase at an enormous rate, the providers and end users of astronomical data sets will benefit from effective data compression techniques. This paper explores different lossless data compression techniques and aims to find an optimal compression algorithm to compress astronomical data obtained by the Square Kilometre Array (SKA), which are new and unique in the field of radio astronomy. It was required that the compressed data sets should be lossless and that they should be compressed while the data are being read. The project was carried out in conjunction with the SKA South Africa office. Data compression reduces the time taken and the bandwidth used when transferring files, and it can also reduce the costs involved with data storage. The SKA uses the Hierarchical Data Format (HDF5) to store the data collected from the radio telescopes, with the data used in this study ranging from 29 MB to 9 GB in size. The compression techniques investigated in this study include SZIP, GZIP, the LZF filter, LZ4 and the Fully Adaptive Prediction Error Coder (FAPEC). The algorithms and methods used to perform the compression tests are discussed and the results from the three phases of testing are presented, followed by a brief discussion on those results.
In this paper, the second generation wavelet transform is applied to lossless image coding, exploiting its property of reversible integer wavelet transformation. The second generation wavelet transform can provide a higher compression ratio than Huffman coding while, unlike the first generation wavelet transform, it reconstructs the image without loss. The experimental results show that the second generation wavelet transform achieves excellent performance in medical image compression coding.
Discrete (J,J′) lossless factorization is established by using conjugation. For the stable case, the existence of such a factorization is equivalent to the existence of a positive solution of a Riccati equation. For the unstable case, the existence conditions can be reduced to the existence of positive solutions of two Riccati equations.
The technique of lossless image compression plays an important role in high-quality image transmission and storage. At present, both the compression ratio and the processing speed must be considered in a real-time multimedia system. A novel lossless compression algorithm is presented. A low-complexity predictive model is proposed that exploits the correlation between pixels and between color components; in addition, a perceptron, as used in neural networks, rectifies the prediction values adaptively. This makes the prediction residuals smaller and confines them to a small dynamic range. A color space transform is also used, yielding good decorrelation in our algorithm. Comparative experimental results show that our algorithm performs noticeably better than traditional algorithms. Compared to the new JPEG-LS standard, this predictive model has lower computational complexity, and it is faster than JPEG-LS with negligible performance sacrifice.
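The abstract does not spell out its predictive model, but the JPEG-LS baseline it is compared against uses the median edge detector (MED) predictor from LOCO-I, which can be sketched in a few lines:

```python
def med_predict(a: int, b: int, c: int) -> int:
    # MED (median edge detector) predictor from JPEG-LS (LOCO-I):
    # a = left neighbour, b = neighbour above, c = upper-left neighbour.
    if c >= max(a, b):
        return min(a, b)   # vertical/horizontal edge: pick the smaller side
    if c <= min(a, b):
        return max(a, b)   # edge the other way: pick the larger side
    return a + b - c       # smooth region: planar prediction
```

The residual `x - med_predict(a, b, c)` is what the entropy coder actually sees; the better the predictor fits the local structure, the more the residuals concentrate near zero and the fewer bits they cost.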
Lossless data hiding can restore the original status of the cover media after the embedded secret data are extracted. In 2010, Wang et al. proposed a lossless data hiding scheme that hides secret data in vector quantization (VQ) indices, but the encoding strategies adopted by their scheme expand the final codestream. This paper designs four embedding and encoding strategies to improve Wang et al.'s scheme. Compared with Wang et al.'s scheme, the proposed scheme reduces the bit rate of the final codestream by 4.6% and raises the payload by 1.09% on average.
Mammography is a specific type of imaging that uses a low-dose x-ray system to examine breasts, and it is an efficient means of early detection of breast cancer. Archiving and retaining these data for at least three years is expensive and difficult, and it requires sophisticated data compression techniques. We propose a lossless compression method that exploits the smoothness property of the images. In the first step, the given image is de-correlated using two efficient predictors. The two residue images are partitioned into non-overlapping 4x4 sub-images. At every instant, one of the sub-images is selected and sent for coding. Sub-images whose pixels are all zero are identified with a one-bit code; the remaining sub-images are coded by the base switching method. Special techniques are used to save the overhead information. Experimental results indicate an average compression ratio of 6.44 for the selected database.
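The one-bit flag for all-zero sub-images can be sketched as follows. The block size and flag convention mirror the description above; the base-switching coder that would handle the nonzero blocks is omitted, and the function name is my own.

```python
import numpy as np

def classify_blocks(residue: np.ndarray, bs: int = 4):
    # Partition the residue image into bs x bs sub-images. An all-zero
    # block costs only a single flag bit; a nonzero block is passed on
    # to the (omitted) base-switching coder.
    h, w = residue.shape
    flags, nonzero = [], []
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            blk = residue[r:r + bs, c:c + bs]
            if np.any(blk):
                flags.append(1)
                nonzero.append(blk)
            else:
                flags.append(0)
    return flags, nonzero
```

On smooth mammograms a good predictor drives most residue blocks to all zeros, so the vast majority of 4x4 blocks collapse to one bit each, which is where the reported compression comes from.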
This paper presents a new method of lossless image compression. An image is characterized by homogeneous parts. The high-weight bit planes, which are characterized by long successive runs of 0s and 1s, are encoded with RLE, whereas the other bit planes are encoded by arithmetic coding (AC), with either a static or an adaptive model. By combining AC (adaptive or static) with RLE, a high degree of adaptation and compression efficiency is achieved. The proposed method is compared to both the static and the adaptive model. Experimental results, based on a set of 12 gray-level images, demonstrate that the proposed scheme gives mean compression ratios higher than those of conventional arithmetic encoders.
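Why high-weight bit planes suit RLE: in homogeneous images the most significant bits change rarely, so each such plane consists of long runs of identical bits. A minimal sketch of the plane-splitting and run-length steps (my own illustration, not the paper's coder):

```python
import numpy as np

def bit_planes(img: np.ndarray) -> list:
    # Split an 8-bit image into binary planes, most significant first.
    return [(img >> b) & 1 for b in range(7, -1, -1)]

def rle_encode(plane: np.ndarray) -> list:
    # Run-length encode a flattened binary plane as (value, run) pairs.
    flat = plane.ravel()
    runs = []
    cur, run = int(flat[0]), 1
    for v in flat[1:]:
        if int(v) == cur:
            run += 1
        else:
            runs.append((cur, run))
            cur, run = int(v), 1
    runs.append((cur, run))
    return runs
```

A constant or smooth region reduces each high plane to a handful of (value, run) pairs, while the noisy low planes, which RLE would expand, are better left to the arithmetic coder, as the paper does.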
Hyperspectral images (HSI) have hundreds of bands, which impose a heavy burden on data storage and transmission bandwidth. Quite a few compression techniques have been explored for HSI in the past decades. One high-performing technique is the combination of principal component analysis (PCA) and JPEG-2000 (J2K). However, since several new compression codecs have been developed after J2K in the past 15 years, it is worthwhile to revisit this research area and investigate whether there are better techniques for HSI compression. In this paper, we present some new results in HSI compression. We aim at perceptually lossless compression of HSI. Perceptually lossless means that the decompressed HSI data cube has a performance metric near 40 dB in terms of peak signal-to-noise ratio (PSNR) or human visual system (HVS) based metrics. The key idea is to compare several combinations of PCA and video/image codecs. Three representative HSI data cubes were used in our studies. Four video/image codecs, including J2K, X264, X265, and Daala, have been investigated, and four performance metrics were used in our comparative studies. Moreover, some alternative techniques, such as the video, split band, and PCA-only approaches, were also compared. It was observed that the combination of PCA and X264 yielded the best performance in terms of compression performance and computational complexity. In some cases, the PCA + X264 combination achieved a gain of more than 3 dB over the PCA + J2K combination.
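The PCA spectral-reduction step shared by these pipelines can be sketched with plain NumPy. This is a generic PCA on the band dimension under my own function names, not the authors' exact pipeline (which pairs the reduced bands with codecs such as X264):

```python
import numpy as np

def pca_reduce(cube: np.ndarray, k: int):
    # cube: (rows, cols, bands). Centre each band, then project onto
    # the top-k eigenvectors of the band-by-band covariance matrix.
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    comps = vecs[:, np.argsort(vals)[::-1][:k]]  # keep top-k components
    scores = Xc @ comps
    return scores.reshape(h, w, k), comps, mean

def pca_restore(scores: np.ndarray, comps: np.ndarray,
                mean: np.ndarray, shape):
    # Invert the projection: scores @ comps.T, then add the mean back.
    h, w, b = shape
    X = scores.reshape(-1, scores.shape[-1]) @ comps.T + mean
    return X.reshape(h, w, b)
```

When the spectral correlation is strong, a handful of components captures nearly all the variance, so the downstream codec only has to compress k bands instead of hundreds; the restoration step is exact whenever the data truly lie in a k-dimensional spectral subspace.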
The two mast cameras, Mastcams, onboard the Mars rover Curiosity are multispectral imagers with nine bands each. Currently, the images are compressed losslessly using JPEG, which can achieve only two to three times compression. We present a comparative study of four approaches to compressing multispectral Mastcam images. The first approach is to divide the nine bands into three groups of three bands each. Since the multispectral bands have strong correlation, we treat the three groups of images as video frames; we call this the Video approach. The second approach is to compress each group separately, which we call the split band (SB) approach. The third is a two-step approach in which the first step uses principal component analysis (PCA) to compress a nine-band image cube to six bands, and the second step compresses the six PCA bands using conventional codecs. The fourth is to apply PCA only. In addition, we also present subjective and objective assessment results for compressing RGB images, because RGB images have been used for stereo and disparity map generation. Five well-known compression codecs, JPEG, JPEG-2000 (J2K), X264, X265, and Daala, have been applied and compared in each approach. The performance of the different algorithms was assessed using four well-known performance metrics: two are conventional, and the other two are known to correlate well with human perception. Extensive experiments using actual Mastcam images have been performed to demonstrate the various approaches. We observed that perceptually lossless compression can be achieved at a 10:1 compression ratio. In particular, the performance gain of the SB approach with Daala is at least 5 dB in terms of peak signal-to-noise ratio (PSNR) at a 10:1 compression ratio over that of JPEG. Subjective comparisons also corroborated the objective metrics in that perceptually lossless compression can be achieved even at 20:1 compression.
We propose a novel lossless compression algorithm, based on the 2D Discrete Fast Fourier Transform, to approximate the Algorithmic (Kolmogorov) Complexity of Elementary Cellular Automata. Fast Fourier transforms are widely used in image compression, but their lossy nature excludes them as viable candidates for Kolmogorov complexity approximations. For the first time, we present a way to adapt Fourier transforms for lossless image compression. The proposed method has a very strong Pearson correlation to existing complexity metrics, and we further establish its consistency as a complexity metric by confirming that its measurements never exceed the complexity of nothingness and randomness (representing the lower and upper limits of complexity). Surprisingly, many of the other methods tested fail this simple sanity check. A final symmetry-based test also demonstrates our method's superiority over existing lossless compression metrics. All complexity metrics tested, as well as the code used to generate and augment the original dataset, can be found in our GitHub repository: ECA complexity metrics.
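The general compression-as-complexity idea can be illustrated with any lossless coder; the sketch below uses zlib (DEFLATE) purely as a stand-in for the paper's FFT-based coder, and the variable names are my own:

```python
import random
import zlib

def compression_complexity(cells: str) -> int:
    # Approximate Kolmogorov complexity by the byte size of a lossless
    # compression of the cell pattern (smaller = simpler).
    return len(zlib.compress(cells.encode("ascii"), 9))

# Sanity check in the spirit of the paper: "nothingness" must never
# measure as more complex than randomness.
nothingness = "0" * 1024
random.seed(42)
randomness = "".join(random.choice("01") for _ in range(1024))
```

Any metric that rates the all-zero pattern above the random one fails this sanity check; the paper applies the same bounds test to rule out inconsistent complexity metrics.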
Filtering capacitors with a compact configuration and a wide range of operating voltages have been attracting increasing attention for the smooth conversion of electric signals in modern circuits. Lossless integration of capacitor units can be regarded as one of the most efficient ways to achieve a wider voltage range, but it has not yet been fully conquered due to the lack of rational designs for the electrode structure and integration technology. This study presents an alternately stacked assembly technology to conveniently fabricate compact aqueous hybrid integrated filtering capacitors on a large scale, in which a unit consists of an rGO/MXene composite film as the negative electrode and a PEDOT:PSS-based film as the positive electrode. Benefiting from the synergistic effect of the rGO and MXene components and the morphological characteristics of PEDOT:PSS, the capacitor unit exhibits outstanding AC line filtering with a large areal specific energy density of 1,015 μF V^2 cm^-2 (0.28 μW h cm^-2) at 120 Hz. After rational integration, the assembled capacitors present a compact, lightweight configuration and lossless frequency response, as reflected by an almost constant resistor-capacitor time constant of 0.2 ms and a dissipation factor of 15% at 120 Hz, identical to those of a single capacitor unit. Apart from standing alone steadily on a flower, a small-volume (only 8.1 cm^3) integrated capacitor with 70 units connected in series achieves hundred-volt alternating-current line filtering, which is superior to most reported filtering capacitors with a sandwich configuration. This study provides insight into the fabrication and application of compact, ultralight filtering capacitors with lossless frequency response and a wide range of operating voltages.
Funding (ICC congestion control): supported by the National Natural Science Foundation of China (Nos. 62102046, 62072249, 62072056); Jin Wang, Yongjun Ren, and Jinbin Hu receive the grant, and the sponsor's website is https://www.nsfc.gov.cn/. This work is also funded by the Natural Science Foundation of Hunan Province (Nos. 2022JJ30618, 2020JJ2029).
Funding (JPEG data hiding): this research work is partly supported by the National Natural Science Foundation of China (61502009, 61525203, 61472235, U1636206, 61572308), a CSC Postdoctoral Project (201706505004), the Anhui Provincial Natural Science Foundation (1508085SQF216), the Key Program for Excellent Young Talents in Colleges and Universities of Anhui Province (gxyqZD2016011), and the Anhui University research and innovation training project for undergraduate students.
Funding (image recompression): supported in part by the China "973" Program under Grant No. 2014CB340303.
Funding (second generation wavelet coding): supported by the National Natural Science Foundation of China (No. 69875009).
Funding (lossless predictive compression): this project was supported by the National Natural Science Foundation of China (60172045).
Funding (VQ-index data hiding): supported by the National Science Council, Taiwan, under Grant No. NSC 99-2221-E-324-040-MY2.
文摘Lossless data hiding can restore the original status of cover media after embedded secret data are extracted. In 2010, Wang et al. proposed a lossless data hiding scheme which hides secret data in vector quantization (VQ) indices, but the encoding strategies adopted by their scheme expand the final codestream. This paper designs four embedding and encoding strategies to improve Wang et aL's scheme. The experiment result of the proposed scheme compared with that of the Wang et aL's scheme reduces the bit rates of the final codestream by 4.6% and raises the payload by 1.09% on average.
文摘Mammography is a specific type of imaging that uses low-dose x-ray system to examine breasts. This is an efficient means of early detection of breast cancer. Archiving and retaining these data for at least three years is expensive, diffi-cult and requires sophisticated data compres-sion techniques. We propose a lossless com-pression method that makes use of the smoothness property of the images. In the first step, de-correlation of the given image is done using two efficient predictors. The two residue images are partitioned into non overlapping sub-images of size 4x4. At every instant one of the sub-images is selected and sent for coding. The sub-images with all zero pixels are identi-fied using one bit code. The remaining sub- images are coded by using base switching method. Special techniques are used to save the overhead information. Experimental results indicate an average compression ratio of 6.44 for the selected database.
文摘 (Abstract): This paper presents a new method of lossless image compression. An image is characterized by homogeneous parts. The high-weight bit planes, which are characterized by long sequences of 0s and 1s, are encoded with RLE, whereas the other bit planes are encoded with arithmetic coding (AC), using either a static or an adaptive model. By combining AC (adaptive or static) with RLE, a high degree of adaptation and compression efficiency is achieved. The proposed method is compared to both the static and the adaptive model. Experimental results, based on a set of 12 gray-level images, demonstrate that the proposed scheme gives higher mean compression ratios than conventional arithmetic encoders.
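A minimal sketch of the bit-plane split and the RLE stage described above (the arithmetic coder used for the low-weight planes is not reproduced; function names are my own):

```python
import numpy as np

def bit_planes(img, nbits=8):
    """Split an 8-bit image into nbits binary planes, MSB first."""
    return [((img >> b) & 1) for b in range(nbits - 1, -1, -1)]

def rle(bits):
    """Run-length encode a flat binary sequence as (value, run_length) pairs."""
    out, prev, run = [], bits[0], 1
    for v in bits[1:]:
        if v == prev:
            run += 1
        else:
            out.append((prev, run))
            prev, run = v, 1
    out.append((prev, run))
    return out

img = np.full((4, 4), 200, dtype=np.uint8)   # a homogeneous region
msb = bit_planes(img)[0].ravel().tolist()    # MSB plane is constant
print(rle(msb))                              # [(1, 16)] -- one short run
```

On homogeneous regions the high-weight planes collapse to a handful of runs, which is why RLE is reserved for them while the noisier low-weight planes go to the arithmetic coder.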
文摘 (Abstract): Hyperspectral images (HSI) have hundreds of bands, which impose a heavy burden on data storage and transmission bandwidth. Quite a few compression techniques have been explored for HSI in the past decades. One high-performing technique is the combination of principal component analysis (PCA) and JPEG-2000 (J2K). However, since several new compression codecs have been developed after J2K in the past 15 years, it is worthwhile to revisit this research area and investigate whether there are better techniques for HSI compression. In this paper, we present some new results in HSI compression. We aim at perceptually lossless compression of HSI, meaning that the decompressed HSI data cube has a performance metric near 40 dB in terms of peak signal-to-noise ratio (PSNR) or human visual system (HVS) based metrics. The key idea is to compare several combinations of PCA and video/image codecs. Three representative HSI data cubes were used in our studies. Four video/image codecs, including J2K, X264, X265, and Daala, have been investigated, and four performance metrics were used in our comparative studies. Moreover, some alternative techniques such as the video, split band, and PCA-only approaches were also compared. It was observed that the combination of PCA and X264 yielded the best performance in terms of compression performance and computational complexity. In some cases, the PCA + X264 combination achieved gains of more than 3 dB over the PCA + J2K combination.
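The spectral PCA step shared by the PCA + J2K and PCA + X264 pipelines above can be sketched as follows. This is only an illustration of the decorrelation stage under my own assumptions (the paper's codec settings and band counts are not reproduced); the subsequent codec stage is omitted:

```python
import numpy as np

def pca_reduce(cube, k):
    """Project an (H, W, B) hyperspectral cube onto its top-k spectral
    principal components. Returns (reduced cube, basis, mean) so the
    spectral step is invertible up to the discarded components."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)   # one spectrum per row
    mu = X.mean(axis=0)
    Xc = X - mu
    # Eigen-decomposition of the small B x B spectral covariance matrix.
    cov = Xc.T @ Xc / Xc.shape[0]
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    basis = vecs[:, ::-1][:, :k]                 # keep the top-k components
    reduced = (Xc @ basis).reshape(h, w, k)
    return reduced, basis, mu

# Synthetic 3-band cube whose bands are perfectly correlated.
rng = np.random.default_rng(0)
base = rng.random((16, 16))
cube = np.stack([base, 2 * base, 3 * base], axis=-1)
red, basis, mu = pca_reduce(cube, k=1)
recon = (red.reshape(-1, 1) @ basis.T + mu).reshape(cube.shape)
print(np.abs(recon - cube).max())                # ~0: one PC captures everything
```

Because neighboring spectral bands are strongly correlated, a few principal components retain nearly all of the energy, and only those few bands need to be handed to the image/video codec.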
文摘 (Abstract): The two mast cameras (Mastcams) onboard the Mars rover Curiosity are multispectral imagers with nine bands each. Currently, the images are compressed losslessly using JPEG, which can achieve only two to three times compression. We present a comparative study of four approaches to compressing multispectral Mastcam images. The first approach divides the nine bands into three groups of three bands each; since the multispectral bands are strongly correlated, the three groups of images are treated as video frames. We call this the Video approach. The second approach compresses each group separately; we call it the split band (SB) approach. The third applies a two-step procedure in which principal component analysis (PCA) first compresses a nine-band image cube to six bands and a second step compresses the six PCA bands using conventional codecs. The fourth applies PCA only. In addition, we present subjective and objective assessment results for compressing RGB images, because RGB images have been used for stereo and disparity map generation. Five well-known compression codecs from the literature, including JPEG, JPEG-2000 (J2K), X264, X265, and Daala, have been applied and compared in each approach. The performance of the different algorithms was assessed using four well-known performance metrics: two conventional, and two known to correlate well with human perception. Extensive experiments using actual Mastcam images have been performed to demonstrate the various approaches. We observed that perceptually lossless compression can be achieved at a 10:1 compression ratio. In particular, the performance gain of the SB approach with Daala is at least 5 dB in terms of peak signal-to-noise ratio (PSNR) at a 10:1 compression ratio over that of JPEG. Subjective comparisons corroborated the objective metrics in that perceptually lossless compression can be achieved even at 20:1 compression.
文摘 (Abstract): We propose a novel lossless compression algorithm, based on the 2D discrete fast Fourier transform, to approximate the algorithmic (Kolmogorov) complexity of elementary cellular automata. Fast Fourier transforms are widely used in image compression, but their lossy nature excludes them as viable candidates for Kolmogorov complexity approximations. For the first time, we present a way to adapt Fourier transforms for lossless image compression. The proposed method has a very strong Pearson correlation with existing complexity metrics, and we further establish its consistency as a complexity metric by confirming that its measurements never fall outside the bounds set by nothingness and randomness (representing the lower and upper limits of complexity). Surprisingly, many of the other methods tested fail this simple sanity check. A final symmetry-based test also demonstrates our method's superiority over existing lossless compression metrics. All complexity metrics tested, as well as the code used to generate and augment the original dataset, can be found in our GitHub repository: ECA complexity metrics<sup>1</sup>.
基金 (Funding): Supported by the NSFC (21805072, 22075019, 22035005) and the National Key R&D Program of China (2017YFB1104300).
文摘 (Abstract): Filtering capacitors with a compact configuration and a wide operating-voltage range have been attracting increasing attention for the smooth conversion of electric signals in modern circuits. Lossless integration of capacitor units can be regarded as one of the efficient ways to achieve a wider voltage range, but it has not yet been fully conquered due to the lack of rational designs for the electrode structure and integration technology. This study presents an alternately stacked assembly technology to conveniently fabricate compact aqueous hybrid integrated filtering capacitors on a large scale, in which a unit consists of an rGO/MXene composite film as the negative electrode and a PEDOT:PSS-based film as the positive electrode. Benefiting from the synergistic effect of the rGO and MXene components and the morphological characteristics of PEDOT:PSS, the capacitor unit exhibits outstanding AC line filtering with a large areal specific energy density of 1,015 μF V^(2) cm^(-2) (0.28 μW h cm^(-2)) at 120 Hz. After rational integration, the assembled capacitors present a compact, lightweight configuration and lossless frequency response, as reflected by an almost constant resistor-capacitor time constant of 0.2 ms and a dissipation factor of 15% at 120 Hz, identical to those of a single capacitor unit. Apart from standing alone steadily on a flower, a small-volume (only 8.1 cm^(3)) integrated capacitor with 70 units connected in series achieves hundred-volt alternating-current line filtering, which is superior to most reported filtering capacitors with a sandwich configuration. This study provides insight into the fabrication and application of compact, ultralight filtering capacitors with lossless frequency response and a wide operating-voltage range.