Journal Articles
635 articles found; the first 20 are listed below.
1. Quantitative Comparative Study of the Performance of Lossless Compression Methods Based on a Text Data Model
Authors: Namogo Silué, Sié Ouattara, Mouhamadou Dosso, Alain Clément. Open Journal of Applied Sciences, 2024, No. 7, pp. 1944-1962 (19 pages).
Data compression plays a key role in optimizing the use of memory storage space and also reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is exploited with lossy compression techniques for images and videos, generally using a mixed approach. To achieve our intended objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant methods, namely: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on relevant data that we constructed according to a deliberate model, the results show that these methods, presented in order of performance, are very satisfactory: LZW, arithmetic coding, Tunstall algorithm, BWT + RLE. Likewise, it appears that, on the one hand, the performance of certain techniques relative to others is strongly linked to the sequencing and/or recurrence of symbols that make up the message, and, on the other hand, to the cumulative time of encoding and decoding.
Keywords: arithmetic coding; BWT; compression ratio; comparative study; compression techniques; Shannon-Fano; Huffman; lossless compression; LZW; performance; redundancy; RLE; text data; Tunstall
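As a quick illustration of how such a comparison can be run, the sketch below (Python rather than the authors' Matlab scripts; the repeating-pattern message is invented for the example) measures the compression ratio of RLE, one of the techniques compared, on a deliberately repetitive text.

```python
# Minimal sketch, not the authors' Matlab code: run-length encoding of a
# repeating-pattern message and the resulting compression ratio.
def rle_encode(text):
    """Collapse runs of identical symbols into (symbol, count) pairs."""
    runs = []
    for ch in text:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs):
    return "".join(ch * n for ch, n in runs)

message = "AAAABBBCCD" * 100                  # deliberately repetitive pattern
runs = rle_encode(message)
assert rle_decode(runs) == message             # lossless round trip
compressed_size = 2 * len(runs)                # assume 1 byte per symbol + 1 per count
print(f"compression ratio = {len(message) / compressed_size:.2f}")
```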
2. A LOSSLESS COMPRESSION ALGORITHM OF REMOTE SENSING IMAGE FOR SPACE APPLICATIONS (Cited: 3)
Authors: Sui Yuping, Yang Chengyu, Liu Yanjun, Wang Jun, Wei Zhonghui, He Xin. Journal of Electronics (China), 2008, No. 5, pp. 647-651 (5 pages).
A simple and adaptive lossless compression algorithm is proposed for remote sensing image compression, which includes integer wavelet transform and the Rice entropy coder. By analyzing the probability distribution of integer wavelet transform coefficients and the characteristics of the Rice entropy coder, the divide-and-rule method is used for the high-frequency sub-bands and the low-frequency one. High-frequency sub-bands are coded by the Rice entropy coder, and low-frequency coefficients are predicted before coding. The role of the predictor is to map the low-frequency coefficients into symbols suitable for the entropy coding. Experimental results show that the average Compression Ratio (CR) of our approach is about two, which is close to that of JPEG 2000. The algorithm is simple and easy to implement in hardware. Moreover, it has the merits of adaptability and independent data packets, so the algorithm is suitable for space lossless compression applications.
Keywords: remote sensing image; lossless compression; Rice entropy coder; Integer Discrete Wavelet Transform (DWT)
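For readers unfamiliar with the Rice entropy coder mentioned above, the following sketch shows the basic Golomb-Rice code for non-negative integers; the parameter k is fixed here for illustration, whereas a practical coder (as in the paper) would choose it adaptively per block of wavelet coefficients.

```python
# Illustrative Golomb-Rice coder for non-negative integers (k >= 1 assumed here).
def rice_encode(value, k):
    """Unary-coded quotient, then the k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    q = bits.index("0")                  # length of the unary prefix
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r

for v in (0, 3, 10, 37):
    code = rice_encode(v, k=2)
    assert rice_decode(code, k=2) == v
    print(v, "->", code)
```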
3. Second Generation Wavelet Applied to Lossless Compression Coding of Image (Cited: 1)
Authors: Yan Tang, Yu-long Mo. Advances in Manufacturing, 2000, No. 3, pp. 225-229 (5 pages).
In this paper, the second generation wavelet transform is applied to image lossless coding, according to its characteristic of reversible integer wavelet transform. The second generation wavelet transform can provide a higher compression ratio than Huffman coding while, unlike the first generation wavelet transform, it reconstructs the image without loss. The experimental results show that the second generation wavelet transform can obtain excellent performance in medical image compression coding.
Keywords: wavelet transform; integer wavelet; image compression; lossless coding
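The reversibility referred to above comes from building the transform out of lifting steps with integer rounding. A minimal sketch of one such step (a 5/3-style integer lifting pair of the kind also used in lossless JPEG 2000; the signal and the periodic boundary handling are illustrative choices, not the paper's exact transform):

```python
# Minimal reversible integer wavelet step via lifting; periodic boundaries (np.roll)
# are used only to keep the example short.
import numpy as np

def forward_lifting(x):
    s, d = x[0::2].astype(int), x[1::2].astype(int)
    d = d - (s + np.roll(s, -1)) // 2        # predict odd samples from even neighbours
    s = s + (d + np.roll(d, 1) + 2) // 4     # update even samples with the details
    return s, d                              # integer low-pass and high-pass sub-bands

def inverse_lifting(s, d):
    s = s - (d + np.roll(d, 1) + 2) // 4     # undo the update step
    d = d + (s + np.roll(s, -1)) // 2        # undo the predict step
    x = np.empty(s.size + d.size, dtype=int)
    x[0::2], x[1::2] = s, d
    return x

x = np.random.randint(0, 256, size=16)        # stand-in for one image row
s, d = forward_lifting(x)
assert np.array_equal(inverse_lifting(s, d), x)   # exact (lossless) reconstruction
```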
4. Lossless Compression of SKA Data Sets (Cited: 1)
Authors: Karthik Rajeswaran, Simon Winberg. Communications and Network, 2013, No. 4, pp. 369-378 (10 pages).
With the size of astronomical data archives continuing to increase at an enormous rate, the providers and end users of astronomical data sets will benefit from effective data compression techniques. This paper explores different lossless data compression techniques and aims to find an optimal compression algorithm to compress astronomical data obtained by the Square Kilometre Array (SKA), which are new and unique in the field of radio astronomy. It was required that the compressed data sets should be lossless and that they should be compressed while the data are being read. The project was carried out in conjunction with the SKA South Africa office. Data compression reduces the time taken and the bandwidth used when transferring files, and it can also reduce the costs involved with data storage. The SKA uses the Hierarchical Data Format (HDF5) to store the data collected from the radio telescopes, with the data used in this study ranging from 29 MB to 9 GB in size. The compression techniques investigated in this study include SZIP, GZIP, the LZF filter, LZ4 and the Fully Adaptive Prediction Error Coder (FAPEC). The algorithms and methods used to perform the compression tests are discussed and the results from the three phases of testing are presented, followed by a brief discussion on those results.
Keywords: Square Kilometre Array; lossless compression; HDF5
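Several of the filters compared in the study (GZIP, LZF, SZIP) are applied at the HDF5 dataset level. A small hedged example with h5py showing how GZIP and LZF filters are attached to a dataset; the file name and synthetic array are placeholders, not SKA data, and SZIP is omitted because its availability depends on the HDF5 build.

```python
# Hedged example: attaching HDF5 compression filters with h5py.
import os
import numpy as np
import h5py

data = np.tile(np.arange(512, dtype=np.float32), (1024, 1))   # low-entropy stand-in array

with h5py.File("demo_visibilities.h5", "w") as f:
    f.create_dataset("raw", data=data)                                     # no filter
    f.create_dataset("gzip4", data=data, compression="gzip",
                     compression_opts=4, chunks=True)                      # DEFLATE, level 4
    f.create_dataset("lzf", data=data, compression="lzf", chunks=True)     # fast, lower ratio

print(os.path.getsize("demo_visibilities.h5"), "bytes on disk")
```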
5. A Global-Scale Image Lossless Compression Method Based on QTM Pixels (Cited: 1)
Authors: SUN Wen-bin, ZHAO Xue-sheng. Journal of China University of Mining and Technology, 2006, No. 4, pp. 466-469 (4 pages).
In this paper, a new predictive model, adapted to QTM (Quaternary Triangular Mesh) pixel compression, is introduced. Our approach starts with the principles of the proposed predictive models based on available QTM neighbor pixels. An algorithm for ascertaining available QTM neighbors is also proposed. Then, the method for reducing space complexity in the procedure of predicting QTM pixel values is presented. Next, the structure for storing compressed QTM pixels is proposed. Finally, an experiment comparing the compression ratio of this method with other methods is carried out using three wave bands of 1 km resolution NOAA images of China. The results indicate that: 1) the compression method performs better than the others, such as Run Length Coding, Arithmetic Coding, Huffman Coding, etc.; 2) the average size of the compressed three-wave-band data based on the neighbor QTM pixel predictive model is 31.58% of the original space requirement and 67.5% of that of Arithmetic Coding without a predictive model.
Keywords: Quaternary Triangular Mesh; lossless compression; predictive model; image entropy
6. Fast lossless color image compression method using perceptron
Authors: Jia Kebin, Zhang Yanhua, Zhuang Xinyue. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2004, No. 2, pp. 190-196 (7 pages).
The technique of lossless image compression plays an important role in image transmission and storage for high quality. At present, both the compression ratio and the processing speed should be considered in a real-time multimedia system. A novel lossless compression algorithm is investigated. A low-complexity predictive model is proposed using the correlation of pixels and color components. In the meantime, a perceptron in a neural network is used to rectify the prediction values adaptively. This makes the prediction residuals smaller and confined to a small dynamic range. A color space transform is also used, and good decorrelation is obtained in our algorithm. Comparative experimental results show that our algorithm has a noticeably better performance than traditional algorithms. Compared with the new standard JPEG-LS, this predictive model reduces the computational complexity, and its speed is faster than that of JPEG-LS with negligible performance sacrifice.
Keywords: lossless compression; perceptron; prediction model; correlation
7. Lossless compression of digital mammography using base switching method
Authors: Ravi Kumar Mulemajalu, Shivaprakash Koliwad. Journal of Biomedical Science and Engineering, 2009, No. 5, pp. 336-344 (9 pages).
Mammography is a specific type of imaging that uses a low-dose x-ray system to examine breasts. It is an efficient means of early detection of breast cancer. Archiving and retaining these data for at least three years is expensive, difficult and requires sophisticated data compression techniques. We propose a lossless compression method that makes use of the smoothness property of the images. In the first step, de-correlation of the given image is done using two efficient predictors. The two residue images are partitioned into non-overlapping sub-images of size 4x4. At every instant one of the sub-images is selected and sent for coding. The sub-images with all zero pixels are identified using a one-bit code. The remaining sub-images are coded using the base switching method. Special techniques are used to save the overhead information. Experimental results indicate an average compression ratio of 6.44 for the selected database.
Keywords: lossless compression; mammography; image; prediction; storage space
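The zero-block signalling described above can be illustrated with a short sketch: the prediction-residual image is split into 4x4 sub-images, all-zero blocks cost only a flag bit, and only the remaining blocks would be passed to the base-switching coder (which is not reproduced here).

```python
# Sketch of the 4x4 zero-block flagging idea; the base switching coder itself is omitted.
import numpy as np

def blocks(residual, size=4):
    h, w = residual.shape
    for i in range(0, h, size):
        for j in range(0, w, size):
            yield residual[i:i + size, j:j + size]

def count_block_types(residual):
    """Return (#all-zero blocks, #non-zero blocks) for a 1-bit-per-block flag map."""
    zero = nonzero = 0
    for block in blocks(residual):
        if np.count_nonzero(block) == 0:
            zero += 1
        else:
            nonzero += 1
    return zero, nonzero

residual = np.zeros((64, 64), dtype=np.int16)     # smooth region: prediction errors are zero
residual[12:16, 20:24] = 3                        # a single block of non-zero errors
print(count_block_types(residual))                # -> (255, 1): most blocks need only the flag bit
```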
8. Perceptually Lossless Compression for Mastcam Multispectral Images: A Comparative Study
Authors: Chiman Kwan, Jude Larkin. Journal of Signal and Information Processing, 2019, No. 4, pp. 139-166 (28 pages).
The two mast cameras, Mastcams, onboard the Mars rover Curiosity are multispectral imagers with nine bands in each. Currently, the images are compressed losslessly using JPEG, which can achieve only two to three times compression. We present a comparative study of four approaches to compressing multispectral Mastcam images. The first approach is to divide the nine bands into three groups with each group having three bands. Since the multispectral bands have strong correlation, we treat the three groups of images as video frames. We call this the Video approach. The second approach is to compress each group separately, and we call it the split band (SB) approach. The third is a two-step approach in which the first step uses principal component analysis (PCA) to compress a nine-band image cube to six bands and the second step compresses the six PCA bands using conventional codecs. The fourth is to apply PCA only. In addition, we also present subjective and objective assessment results for compressing RGB images, because RGB images have been used for stereo and disparity map generation. Five well-known compression codecs from the literature, including JPEG, JPEG-2000 (J2K), X264, X265, and Daala, have been applied and compared in each approach. The performance of the different algorithms was assessed using four well-known performance metrics. Two are conventional and the other two are known to have good correlation with human perception. Extensive experiments using actual Mastcam images have been performed to demonstrate the various approaches. We observed that perceptually lossless compression can be achieved at a 10:1 compression ratio. In particular, the performance gain of the SB approach with Daala is at least 5 dB in terms of peak signal-to-noise ratio (PSNR) at a 10:1 compression ratio over that of JPEG. Subjective comparisons also corroborated the objective metrics in that perceptually lossless compression can be achieved even at 20:1 compression.
Keywords: perceptually lossless compression; Mastcam images; JPEG; J2K; X264; X265; Daala; multispectral; PCA; video compression
9. A New Method Which Combines Arithmetic Coding with RLE for Lossless Image Compression
Authors: Med Karim Abdmouleh, Atef Masmoudi, Med Salim Bouhlel. Journal of Software Engineering and Applications, 2012, No. 1, pp. 41-44 (4 pages).
This paper presents a new method of lossless image compression. An image is characterized by homogeneous parts. The high-weight bit planes, which are characterized by successive sequences of 0s and 1s, are encoded with RLE, whereas the other bit planes are encoded by arithmetic coding (AC) (static or adaptive model). By combining an AC (adaptive or static) with RLE, a high degree of adaptation and compression efficiency is achieved. The proposed method is compared to both the static and adaptive models. Experimental results, based on a set of 12 gray-level images, demonstrate that the proposed scheme gives mean compression ratios that are higher than those of conventional arithmetic encoders.
Keywords: adaptive arithmetic coding; static arithmetic coding; arithmetic coding; lossless compression; image; run-length encoding
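A short sketch of the bit-plane behaviour this method exploits: high-order planes of a smooth grey-level image contain long runs suited to RLE, while low-order planes are closer to noise and are better left to the arithmetic coder (the coders themselves are not reproduced here; the gradient image is synthetic).

```python
# Sketch: mean run length per bit plane of a synthetic smooth image.
import numpy as np

def bit_plane(img, k):
    """Bit plane k of an 8-bit image (k = 7 is the most significant plane)."""
    return (img >> k) & 1

def mean_run_length(plane):
    flat = plane.flatten().astype(int)
    transitions = np.count_nonzero(np.diff(flat)) + 1
    return flat.size / transitions

img = np.repeat(np.arange(256, dtype=np.uint8)[None, :], 256, axis=0)  # smooth gradient
for k in (7, 4, 0):
    print(f"plane {k}: mean run length = {mean_run_length(bit_plane(img, k)):.1f}")
```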
10. New Results in Perceptually Lossless Compression of Hyperspectral Images
Authors: Chiman Kwan, Jude Larkin. Journal of Signal and Information Processing, 2019, No. 3, pp. 96-124 (29 pages).
Hyperspectral images (HSI) have hundreds of bands, which impose a heavy burden on data storage and transmission bandwidth. Quite a few compression techniques have been explored for HSI in the past decades. One high-performing technique is the combination of principal component analysis (PCA) and JPEG-2000 (J2K). However, since several new compression codecs have been developed after J2K in the past 15 years, it is worthwhile to revisit this research area and investigate whether there are better techniques for HSI compression. In this paper, we present some new results in HSI compression. We aim at perceptually lossless compression of HSI. Perceptually lossless means that the decompressed HSI data cube has a performance metric near 40 dB in terms of peak-signal-to-noise ratio (PSNR) or human visual system (HVS) based metrics. The key idea is to compare several combinations of PCA and video/image codecs. Three representative HSI data cubes were used in our studies. Four video/image codecs, including J2K, X264, X265, and Daala, have been investigated and four performance metrics were used in our comparative studies. Moreover, some alternative techniques such as the video, split band, and PCA-only approaches were also compared. It was observed that the combination of PCA and X264 yielded the best performance in terms of compression performance and computational complexity. In some cases, the PCA + X264 combination achieved more than 3 dB over the PCA + J2K combination.
Keywords: hyperspectral images (HSI); compression; perceptually lossless; principal component analysis (PCA); human visual system (HVS); PSNR; SSIM; JPEG-2000; X264; X265; Daala
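The PCA front end used here (and in the Mastcam study above) can be sketched in a few lines: the spectral bands are projected onto a handful of principal components, which is what the conventional codec then compresses. The cube below is synthetic and the component count arbitrary; this is only the spectral-decorrelation step, not the authors' full pipeline.

```python
# Sketch of PCA-based spectral decorrelation for a (rows, cols, bands) cube.
import numpy as np

def pca_reduce(cube, n_components):
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    comps = vt[:n_components]                       # (n_components, bands)
    scores = (X - mean) @ comps.T                   # these are what a codec would compress
    return scores.reshape(rows, cols, n_components), mean, comps

def pca_reconstruct(scores, mean, comps):
    rows, cols, k = scores.shape
    X = scores.reshape(-1, k) @ comps + mean
    return X.reshape(rows, cols, comps.shape[1])

# Synthetic, spectrally correlated cube standing in for real HSI data.
base = np.random.rand(32, 32)
cube = np.stack([base * (1.0 + 0.02 * b) + 0.01 * np.random.rand(32, 32)
                 for b in range(100)], axis=-1)

scores, mean, comps = pca_reduce(cube, n_components=5)
approx = pca_reconstruct(scores, mean, comps)
print("reconstruction RMSE:", np.sqrt(np.mean((cube - approx) ** 2)))
```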
11. THE TECHNIQUE OF QUASI-LOSSLESS COMPRESSION OF THE REMOTE SENSING IMAGE
Authors: Hu Qingwu. Geo-Spatial Information Science, 2001, No. 1, pp. 50-55 (6 pages).
In this paper, a technique of quasi-lossless compression based on image restoration is presented. The compression technique described in the paper includes three steps, namely bit compression, correlation removal, and image restoration based on the theory of the modulation transfer function (MTF). The quasi-lossless compression achieves a high speed. The quality of the reconstructed image after restoration reaches the quasi-lossless standard at a higher compression ratio. Experiments on TM and SPOT images show that the technique is reasonable and applicable.
Keywords: quasi-lossless compression; image restoration; fidelity; peak-value signal-to-noise ratio; edge curve; line spread function
12. Novel Lossless Compression Method Based on the Fourier Transform to Approximate the Kolmogorov Complexity of Elementary Cellular Automata
Authors: Mohammed Terry-Jack. Journal of Software Engineering and Applications, 2022, No. 10, pp. 359-383 (25 pages).
We propose a novel, lossless compression algorithm, based on the 2D Discrete Fast Fourier Transform, to approximate the Algorithmic (Kolmogorov) Complexity of Elementary Cellular Automata. Fast Fourier transforms are widely used in image compression, but their lossy nature excludes them as viable candidates for Kolmogorov Complexity approximations. For the first time, we present a way to adapt Fourier transforms for lossless image compression. The proposed method has a very strong Pearson correlation with existing complexity metrics, and we further establish its consistency as a complexity metric by confirming that its measurements never exceed the complexity of nothingness and randomness (representing the lower and upper limits of complexity). Surprisingly, many of the other methods tested fail this simple sanity check. A final symmetry-based test also demonstrates our method's superiority over existing lossless compression metrics. All complexity metrics tested, as well as the code used to generate and augment the original dataset, can be found in our GitHub repository: ECA complexity metrics.
Keywords: fast Fourier transform; lossless compression; elementary cellular automata; algorithmic information theory; Kolmogorov complexity
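The underlying idea, approximating Kolmogorov complexity by the size a lossless compressor achieves on the automaton's space-time diagram, can be illustrated with a generic compressor. The sketch below uses zlib purely as a stand-in (the paper's contribution is an FFT-based lossless scheme, which is not reproduced here) and compares a chaotic rule against the all-zero "nothingness" case.

```python
# Compression-based complexity estimate for elementary cellular automata,
# with zlib standing in for the paper's FFT-based lossless compressor.
import zlib
import numpy as np

def eca_evolution(rule, width=128, steps=128):
    """Space-time diagram of an ECA rule from a single-cell seed."""
    table = [(rule >> i) & 1 for i in range(8)]
    row = np.zeros(width, dtype=np.uint8)
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps - 1):
        idx = 4 * np.roll(row, 1) + 2 * row + np.roll(row, -1)   # (left, centre, right) index
        row = np.array([table[i] for i in idx], dtype=np.uint8)
        rows.append(row)
    return np.array(rows)

def complexity_estimate(grid):
    """Compressed size of the bit-packed diagram, used as a complexity proxy."""
    return len(zlib.compress(np.packbits(grid).tobytes(), 9))

print("rule 30:", complexity_estimate(eca_evolution(30)))   # chaotic -> large
print("rule 0 :", complexity_estimate(eca_evolution(0)))    # "nothingness" -> small
```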
13. Seismic data compression based on integer wavelet transform (Cited: 1)
Authors: WANG Xi-zhen (王喜珍), TENG Yun-tian (滕云田), GAO Meng-tan (高孟潭), JIANG Hui (姜慧). Acta Seismologica Sinica (English Edition) (CSCD), 2004, Supplement 1, pp. 123-128 (6 pages).
Due to the particularity of seismic data, they must in some cases be treated by a lossless compression algorithm. In this paper, a lossless compression algorithm based on the integer wavelet transform is studied. Compared with the traditional algorithm, it can better improve the compression ratio. The CDF(2, n) biorthogonal wavelet family can lead to a better compression ratio than other CDF families, SWE and CRF, owing to its capability of canceling data redundancies and focusing on data characteristics. The CDF(2, n) family is therefore suitable as the wavelet function for lossless compression of seismic data.
Keywords: lossless compression; integer wavelet transform; lifting scheme; biorthogonal wavelet
14. Formal Photograph Compression Algorithm Based on Object Segmentation (Cited: 1)
Authors: Li Zhu, Guo-You Wang, Chen Wang. International Journal of Automation and Computing (EI), 2008, No. 3, pp. 276-283 (8 pages).
Small storage space for photographs in formal documents is increasingly necessary given today's needs for huge amounts of data communication and storage. Traditional compression algorithms do not sufficiently exploit the distinctness of formal photographs: the object is an image of a human head, and the background is in unicolor. Therefore, the compression is of low efficiency and the image after compression is still space-consuming. This paper presents an image compression algorithm based on object segmentation for practical high-efficiency applications. To achieve high coding efficiency, shape-adaptive discrete wavelet transforms are used to transform arbitrarily shaped objects. The areas of the human head and its background are compressed separately to reduce the coding redundancy of the background. Two methods, lossless image contour coding based on a differential chain, and a modified set partitioning in hierarchical trees (SPIHT) algorithm for arbitrary shapes, are discussed in detail. The results of experiments show that when bits per pixel (bpp) equals 0.078, the peak signal-to-noise ratio (PSNR) of the reconstructed photograph exceeds that of standard SPIHT by nearly 4 dB.
Keywords: image compression; object segmentation; lossless image contour coding; differential chain; set partitioning in hierarchical trees (SPIHT); coding of arbitrarily shaped objects
15. An adaptive hybrid DPCM/DCT coding approach for high quality image compression
Authors: Zhao Debin (赵德斌), Tang Jianqi (唐剑琪), Sun Xiaoyan (孙晓艳). Journal of Harbin Institute of Technology (New Series) (EI, CAS), 1999, No. 3, pp. 77-81 (5 pages).
The discrete cosine transform (DCT) is the key technique in JPEG and MPEG, which process the image block by block. However, this method is not suitable for blocks containing many edges, in particular for high quality image reconstruction. An adaptive hybrid DPCM/DCT coding method is proposed to solve this problem. For each block, the encoder switches automatically to the DPCM or the DCT coder depending upon the quality requirement. The edge blocks are coded by a DPCM coder that adaptively selects, from a given set, the predictor that results in the minimum prediction error, and the values obtained are then coded. For non-edge blocks, DCT, run-length and variable length coding (VLC) are applied. Experimental results showed that the proposed algorithm outperforms baseline JPEG and the JPEG lossless mode both in compression ratio and decoding run-time at bit rates from approximately 1 to 4.
Keywords: edge direction; lossless image compression; lossy image compression
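The per-block predictor selection described for edge blocks can be sketched as follows; the predictor set, block size, and test block are illustrative choices rather than the paper's exact design, and the subsequent coding of the residues is omitted.

```python
# Sketch: pick, per block, the DPCM predictor with the smallest absolute prediction error.
import numpy as np

PREDICTORS = {
    "west":  lambda b: np.pad(b, ((0, 0), (1, 0)), mode="edge")[:, :-1],   # left neighbour
    "north": lambda b: np.pad(b, ((1, 0), (0, 0)), mode="edge")[:-1, :],   # upper neighbour
}

def best_predictor(block):
    block = block.astype(int)
    errors = {name: int(np.abs(block - p(block)).sum()) for name, p in PREDICTORS.items()}
    name = min(errors, key=errors.get)
    return name, errors[name]

# A block whose rows are constant: the west (left-neighbour) predictor is exact.
block = np.tile(np.arange(8, dtype=np.uint8)[:, None] * 10, (1, 8))
print(best_predictor(block))      # -> ('west', 0)
```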
16. An Approach to Integer Wavelet Transform for Medical Image Compression in PACS
Authors: YANG Yan, ZHANG Dong. Wuhan University Journal of Natural Sciences (CAS), 2000, No. 2, pp. 204-206 (3 pages).
We study an approach to the integer wavelet transform for lossless compression of medical images in a medical picture archiving and communication system (PACS). By the lifting scheme, a reversible integer wavelet transform is generated which has similar features to the corresponding biorthogonal wavelet transform. Experimental results of the method based on the integer wavelet transform are given to show its better performance and great application potential in medical image compression.
Keywords: integer wavelet transform; lifting scheme; lossless compression; PACS
17. Improving Compression of Short Messages
Authors: Paul Gardner-Stephen, Andrew Bettison, Romana Challans, Jennifer Hampton, Jeremy Lakeman, Corey Wallis. International Journal of Communications, Network and System Sciences, 2013, No. 12, pp. 497-504 (8 pages).
Compression of short text strings, such as GSM Short Message Service (SMS) and Twitter messages, has received relatively little attention compared to the compression of longer texts. This is not surprising given that for typical cellular and internet-based networks, the cost of compression probably outweighs the cost of delivering uncompressed messages. However, this is not necessarily true where the cost of data transport is high, for example, where satellite back-haul is involved, or on bandwidth-starved mobile mesh networks, such as the mesh networks for disaster relief, rural, remote and developing contexts envisaged by the Serval Project [1-4]. This motivated the development of a state-of-the-art text compression algorithm that could be used to compress mesh-based short-message traffic, culminating in the stats3 SMS compression scheme described in this paper. Stats3 uses word frequency and 3rd-order letter statistics embodied in a pre-constructed dictionary to effect lossless compression of short text messages. Our scheme typically reduces text messages to less than half of their original size, and in so doing substantially outperforms all public SMS compression systems, while also matching or exceeding the marketing claims of the commercial options known to the authors. We also outline approaches for future work that have the potential to further improve the performance and practical utility of stats3.
Keywords: lossless text compression; SMS; Twitter; arithmetic coding; mobile cellular mesh network
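The shared-dictionary principle behind such schemes is easy to illustrate: both ends hold the same pre-built table of frequent words, so known words travel as small indices. The toy below shows only that principle; it is not the stats3 algorithm, which additionally uses 3rd-order letter statistics and entropy coding.

```python
# Toy shared-dictionary codec for short messages (illustration only, not stats3).
SHARED_DICTIONARY = ["the", "you", "are", "meeting", "tomorrow", "at", "ok", "see"]
WORD_TO_CODE = {w: i for i, w in enumerate(SHARED_DICTIONARY)}

def encode(message):
    # Known words become small integers; unknown words are kept as literal strings.
    return [WORD_TO_CODE.get(word, word) for word in message.split()]

def decode(tokens):
    return " ".join(SHARED_DICTIONARY[t] if isinstance(t, int) else t for t in tokens)

msg = "see you at the meeting tomorrow"
tokens = encode(msg)
assert decode(tokens) == msg                  # lossless round trip
print(tokens)                                 # mostly small indices instead of full words
```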
18. Compression of ECG Signals Based on DWT and Exploiting the Correlation between ECG Signal Samples
Authors: Mohammed M. Abo-Zahhad, Tarik K. Abdel-Hamid, Abdelfatah M. Mohamed. International Journal of Communications, Network and System Sciences, 2014, No. 1, pp. 53-70 (18 pages).
This paper presents a hybrid technique for the compression of ECG signals based on DWT and exploiting the correlation between signal samples. It incorporates Discrete Wavelet Transform (DWT), Differential Pulse Code Modulation (DPCM), and run-length coding techniques for the compression of different parts of the signal, where lossless compression is adopted in clinically relevant parts and lossy compression is used in those parts that are not clinically relevant. The proposed compression algorithm begins by segmenting the ECG signal into its main components (P-waves, QRS-complexes, T-waves, U-waves and the isoelectric waves). The resulting waves are grouped into Region of Interest (RoI) and Non Region of Interest (NonRoI) parts. Consequently, lossless and lossy compression schemes are applied to the RoI and NonRoI parts respectively. Ideally we would like to compress the signal losslessly, but in many applications this is not an option. Thus, given a fixed bit budget, it makes sense to spend more bits to represent those parts of the signal that belong to a specific RoI and, thus, reconstruct them with higher fidelity, while allowing other parts to suffer larger distortion. For this purpose, the correlation between the successive samples of the RoI part is utilized by adopting the DPCM approach, whereas the NonRoI part is compressed using DWT, thresholding and coding techniques. The wavelet transform is used for concentrating the signal energy into a small number of transform coefficients. Compression is then achieved by selecting a subset of the most relevant coefficients, which are afterwards efficiently coded. Illustrative examples are given to demonstrate thresholding based on an energy packing efficiency strategy, coding of DWT coefficients and data packetizing. The performance of the proposed algorithm is tested in terms of the compression ratio and the PRD distortion metric for the compression of 10 seconds of data extracted from records 100 and 117 of the MIT-BIH database. The obtained results revealed that the proposed technique possesses higher compression ratios and lower PRD compared to other wavelet transform techniques. The principal advantages of the proposed approach are: 1) the deployment of different compression schemes to compress different ECG parts to reduce the correlation between consecutive signal samples; and 2) achieving high compression ratios with acceptable reconstruction signal quality compared to recently published results.
Keywords: ECG signal segmentation; lossless and lossy compression techniques; discrete wavelet transform; energy packing efficiency; run-length coding
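The lossless treatment of the RoI part relies on the fact that first-order differences of adjacent ECG samples are small and highly compressible. A minimal DPCM round-trip sketch on a synthetic QRS-like spike (the real pipeline also applies DWT, thresholding and run-length coding to the NonRoI part, which is omitted here):

```python
# Minimal DPCM round trip for the region-of-interest samples (synthetic signal).
import numpy as np

def dpcm_encode(samples):
    """First-order differences; small residuals for slowly varying samples."""
    return np.diff(samples, prepend=0).astype(np.int32)

def dpcm_decode(residual):
    return np.cumsum(residual)

t = np.arange(200)
roi = (1000 * np.exp(-0.5 * ((t - 100) / 8.0) ** 2)).astype(np.int32)   # QRS-like spike

res = dpcm_encode(roi)
assert np.array_equal(dpcm_decode(res), roi)        # lossless reconstruction of the RoI
print("sample range:", int(roi.max() - roi.min()),
      "| residual range:", int(res.max() - res.min()))
```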
19. SAR Image Compression Using Integer to Integer Transformations, Dimensionality Reduction, and High Correlation Modeling
Authors: Sergey Voronin. Journal of Computer and Communications, 2022, No. 2, pp. 19-32 (14 pages).
In this document, we present new techniques for near-lossless and lossy compression of SAR imagery saved in PNG and binary formats of magnitude and phase data, based on the application of transforms, dimensionality reduction methods, and lossless compression. In particular, we discuss the use of blockwise integer-to-integer transforms, the subsequent application of a dimensionality reduction method, and Burrows-Wheeler based lossless compression for the PNG data, and the use of high-correlation-based modeling of sorted transform coefficients for the raw floating point magnitude and phase data. The gains exhibited are substantial over the application of different lossless methods directly on the data and competitive with existing lossy approaches. The methods presented are effective for large-scale processing of similar data formats as they are heavily based on techniques which scale well on parallel architectures.
Keywords: SAR imagery; integer-to-integer transforms; dimensionality reduction; high correlation modeling; lossy and lossless compression
20. Lossless data hiding using bit-depth embedding for JPEG2000 compressed bit-stream
Authors: Shogo Ohyama, Michiharu Niimi, Kazumi Yamawaki, Hideki Noda. 通讯和计算机 (Communications and Computer, Chinese-English Edition), 2009, No. 2, pp. 35-39 (5 pages).
Keywords: binary image; JPEG2000 compressed data; image processing; computer technology