Abstract: Image compression consists of two main parts: encoding and decoding. One of the important problems of fractal theory is the long encoding time, which has hindered the acceptance of fractal image compression as a practical method. The long encoding time results from the need to perform a large number of domain-range matches; the total encoding time is the product of the number of matches and the time required for each match. To improve encoding speed, a hybrid method combining feature extraction and a self-organizing network is presented: instead of comparing range and domain blocks pixel by pixel, the method matches them by their extracted features. The efficiency of the new method is demonstrated by examples.
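As a rough illustration of the feature-matching idea above, the sketch below compares range and domain blocks by simple block statistics (mean and standard deviation) instead of pixel-by-pixel differences. The actual features and the self-organizing network stage used in the paper are not described in the abstract, so these choices are assumptions.

```python
import numpy as np

def block_features(block):
    """Summarize a block by simple statistics (assumed features; the
    paper's exact feature set is not specified in the abstract)."""
    return np.array([block.mean(), block.std()])

def best_domain_for_range(range_block, domain_blocks):
    """Pick the domain block whose feature vector is closest to the
    range block's, avoiding a full pixel-by-pixel comparison."""
    rf = block_features(range_block)
    dists = [np.linalg.norm(rf - block_features(d)) for d in domain_blocks]
    return int(np.argmin(dists))

# Toy usage: one 8x8 range block and a few candidate 8x8 domain blocks.
rng = np.random.default_rng(0)
range_block = rng.random((8, 8))
domains = [rng.random((8, 8)) for _ in range(16)]
print("closest domain index:", best_domain_for_range(range_block, domains))
```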
Abstract: Based on a detailed analysis of the advantages and disadvantages of existing connected-component labeling (CCL) algorithms, a new algorithm for labeling connected components in binary images based on run-length encoding (RLE) and union-find sets is put forward. The new algorithm uses runs (RLE segments) as the basic processing unit, converts the merging of connected run labels into set grouping under an equivalence relation, and uses union-find sets, a standard realization of set grouping, to perform the label merging. The label-merging procedure is optimized: the union operation is modified with a "weighted rule" to avoid producing a degenerate tree, and "path compression" is adopted in the find operation, so the time complexity of label merging is O(nα(n)). Experiments show that the new algorithm labels connected components of any shape quickly and exactly, saves memory, and facilitates subsequent image analysis.
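The union-find structure with the "weighted rule" and path compression described above is standard; a minimal sketch follows, assuming runs have already been extracted and given provisional labels (the run-extraction step is omitted).

```python
class UnionFind:
    """Disjoint-set with union by size ("weighted rule") and path
    compression, giving near-constant amortized operations, O(alpha(n))."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Path compression: point every visited node directly at the root.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # Weighted rule: attach the smaller tree under the larger one.
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

# Merging provisional labels of runs found to be connected across rows.
uf = UnionFind(5)
uf.union(0, 1)   # runs 0 and 1 overlap vertically
uf.union(3, 4)
print(uf.find(1) == uf.find(0))  # True: same connected component
```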
Funding: the National Natural Science Foundation of China (60602057) and the Natural Science Foundation of Chongqing Science and Technology Commission (2006BB2373).
Abstract: A fast encoding algorithm based on the mean square error (MSE) distortion for vector quantization is introduced. The vectors, constructed efficiently from wavelet transform (WT) coefficients of images, simplify the realization of the non-linear interpolated vector quantization (NLIVQ) technique and make the partial distance search (PDS) algorithm more efficient. Using the relationship between a vector's L2-norm and its Euclidean distance, conditions for eliminating unnecessary codewords are obtained; further, an inequality constructed from subvector L2-norms eliminates still more unnecessary codewords. During the codeword search, most unlikely codewords can be rejected by the proposed algorithm combined with the NLIVQ and PDS techniques. Experimental results show an outstanding reduction in encoding time and computational complexity compared with the full search method.
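To illustrate the two rejection ideas mentioned above, here is a minimal sketch combining an L2-norm test with partial distance search under MSE distortion. The paper's subvector-norm inequality and the NLIVQ stage are not reproduced, so this is only a generic sketch of the techniques named.

```python
import numpy as np

def pds_encode(x, codebook):
    """Find the nearest codeword to x under squared-error distortion,
    using (i) an L2-norm rejection test and (ii) partial distance search.
    A sketch of the general techniques, not the paper's exact algorithm."""
    norms = np.linalg.norm(codebook, axis=1)
    xn = np.linalg.norm(x)
    best_idx, best_dist = 0, float("inf")
    for i, c in enumerate(codebook):
        # Norm test: ||x - c||^2 >= (||x|| - ||c||)^2, so reject early.
        if (xn - norms[i]) ** 2 >= best_dist:
            continue
        # Partial distance search: stop summing once the running sum
        # already exceeds the best distance found so far.
        dist = 0.0
        for xj, cj in zip(x, c):
            dist += (xj - cj) ** 2
            if dist >= best_dist:
                break
        else:
            best_idx, best_dist = i, dist
    return best_idx, best_dist

rng = np.random.default_rng(1)
codebook = rng.random((256, 16))
vector = rng.random(16)
print(pds_encode(vector, codebook))
```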
Funding: The National Natural Science Foundation of China (No. 11827801, 11902074).
Abstract: To solve the real-time transmission problem of displacement fields in digital image correlation, two compression coding algorithms based on the discrete cosine transform (DCT) and the discrete wavelet transform (DWT) are proposed. Based on the Joint Photographic Experts Group (JPEG) and JPEG 2000 standards, new non-integer and integer quantizations are proposed for the quantization stage of the compression algorithms. Displacement fields from real experiments were used to evaluate the compression ratio and computational time of the algorithms. The results show that the compression ratios of the DCT-based algorithm are mostly below 10%, much lower than those of the DWT-based algorithm, and its computational speed is also significantly higher. These findings prove the algorithm's effectiveness for real-time wireless transmission of displacement fields.
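A minimal sketch of the DCT branch follows: an 8x8 block of a displacement field is transformed and quantized, then reconstructed. The paper's JPEG/JPEG 2000-derived non-integer and integer quantization tables are not given in the abstract, so a single uniform quantization step (q_step) is assumed here.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q_step=0.05):
    """2-D DCT of an 8x8 block followed by uniform quantization.
    The paper uses JPEG/JPEG 2000-style quantization; a single step size
    is assumed here purely for illustration."""
    coeffs = dctn(block, norm="ortho")
    return np.round(coeffs / q_step).astype(np.int32)

def decompress_block(q_coeffs, q_step=0.05):
    """Dequantize and invert the DCT to recover the block approximately."""
    return idctn(q_coeffs * q_step, norm="ortho")

# Toy displacement-field block: a smooth gradient plus small noise.
rng = np.random.default_rng(2)
block = np.linspace(0, 1, 64).reshape(8, 8) + 0.01 * rng.standard_normal((8, 8))
q = compress_block(block)
rec = decompress_block(q)
print("nonzero quantized coefficients:", np.count_nonzero(q), "of 64")
print("max reconstruction error:", np.abs(rec - block).max())
```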
Abstract: This paper presents a new method of lossless image compression. An image is characterized by homogeneous parts. The high-weight bit planes, which consist of long sequences of 0s and 1s, are encoded with RLE, whereas the other bit planes are encoded by arithmetic coding (AC) with either a static or an adaptive model. By combining AC (adaptive or static) with RLE, a high degree of adaptation and compression efficiency is achieved. The proposed method is compared with both the static and the adaptive models. Experimental results, based on a set of 12 gray-level images, demonstrate that the proposed scheme gives higher mean compression ratios than conventional arithmetic encoders.
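A minimal sketch of the bit-plane idea follows: an 8-bit image is split into bit planes and each plane is run-length encoded, showing why the high-weight planes compress well; the arithmetic-coding stage for the low-weight planes is omitted.

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit image into 8 binary bit planes (plane 7 = MSB)."""
    return [(img >> b) & 1 for b in range(8)]

def rle(bits):
    """Run-length encode a flattened binary plane as (value, run) pairs."""
    flat = bits.ravel()
    runs, start = [], 0
    for i in range(1, len(flat) + 1):
        if i == len(flat) or flat[i] != flat[start]:
            runs.append((int(flat[start]), i - start))
            start = i
    return runs

# High-weight planes of a smooth image have long runs and compress well;
# low-weight planes are noisy and are better left to arithmetic coding.
img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
planes = bit_planes(img)
print("runs in MSB plane:", len(rle(planes[7])))
print("runs in LSB plane:", len(rle(planes[0])))
```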
Abstract: Based on the mechanisms underlying the performance of fractal coding and the Discrete Wavelet Transform (DWT), a method that uses fractal-based self-quantization coding to encode the different subband coefficients of the DWT is presented. In this method, finer coefficients are fractal encoded from the successive coarser ones. The self-similarity between parents and their children at the same spatial location in adjacent scales of similar orientation is exploited to predict the variation of information across wavelet scales. In addition, with respect to the Human Visual System (HVS) model, different error thresholds are assigned to different decomposition scales, and different shapes of range blocks to different orientations within the same scale, so that a perceptually lossless high compression ratio can be achieved and the matching process can be sped up dramatically.
Abstract: Objective: To investigate the value of an arterial spin labeling (ASL) technique based on echo planar imaging (EPI) combined with compressed sensing (CS), i.e., the EPICS-ASL sequence, in brain perfusion imaging, in comparison with the conventional EPI-ASL sequence. Methods: Thirty volunteers were prospectively enrolled and scanned with both the EPICS-ASL and conventional EPI-ASL sequences. Regions of interest (ROIs) were placed in the basal ganglia, gray matter, white matter, brainstem, and cerebellum on the raw ASL images, and the signal-to-noise ratio (SNR) of each ROI and the gray/white matter contrast-to-noise ratio (CNR) were measured and calculated. Two physicians independently and blindly rated image quality on a 4-point scale. Paired t-tests were used to compare the SNR and gray/white matter CNR between the two sequences, and rank-sum tests were used to compare the subjective scores. Results: In the objective evaluation, the SNR of each ROI and the gray/white matter CNR of the EPICS-ASL group were superior to those of the EPI-ASL group (all P < 0.001); in the subjective evaluation, the image quality scores of the EPICS-ASL group were also superior to those of the EPI-ASL group (all P < 0.001). Conclusion: The EPICS-ASL sequence provides significantly better image SNR and CNR than the conventional EPI-ASL sequence and higher image quality, and it has potential for further clinical application.
Abstract: Objective: To explore the value of compressed sensing combined with slice-encoding metal artifact correction (CS-SEMAC) for postoperative MRI of spinal metal implants. Materials and Methods: In 35 enrolled patients with spinal metal implants, sagittal 3.0 T MR images acquired with the CS-SEMAC, high bandwidth (HBW), and water-fat separation (Dixon) sequences were compared in terms of metal artifact area, vertebral signal-to-noise ratio (SNR), image quality, image sharpness, fat suppression, and the visibility of anatomical structures around the implants. Results: The metal artifact areas of CS-SEMAC on sagittal T1 and T2 images were (15.45±6.84) cm^2 and (22.23±9.76) cm^2, respectively, significantly smaller than those of the other two sequences (P<0.001). Pairwise comparison of the SNR on sagittal fat-suppressed T2 images showed that the vertebral SNR of the HBW sequence was significantly higher than that of the other two sequences, the vertebral SNR of the Dixon sequence was significantly lower than that of the other two sequences, and the vertebral SNR of the CS-SEMAC sequence was lower than that of HBW but higher than that of Dixon (all P<0.001). In image sharpness, the T2WI-tirm-CS-SEMAC sequence scored lower than the other two sequences (P<0.001); in image quality and fat suppression, the T2WI-tirm-CS-SEMAC sequence scored significantly better than the other two sequences (P<0.001); and the CS-SEMAC sequence showed the vertebral bodies, pedicles, intervertebral foramina, and nerve roots around the implants more clearly than the other two sequences (P<0.001). Conclusion: Compared with the HBW and Dixon sequences, the CS-SEMAC sequence effectively reduces metal artifacts around implants and significantly improves the image quality and fat suppression of fat-suppressed T2 sequences. Although the SNR of the vertebrae adjacent to the implants on fat-suppressed T2 is somewhat lower than with HBW and the images are slightly blurrier than HBW and Dixon images, the visibility of the key anatomical structures around the vertebrae is clearly improved, giving CS-SEMAC an advantage in depicting postoperative spinal anatomy.
Funding: Supported by the National High-Tech Research and Development (863) Program of China (No. 2006AA04Z211).
Abstract: A representation method using the non-symmetry and anti-packing model (NAM) for data compression of binary images is presented. The NAM representation algorithm is compared with the popular linear quadtree and run-length encoding algorithms. Theoretical and experimental results show that the algorithm achieves a higher compression ratio in both the lossy and lossless cases for binary images, and better reconstruction quality in the lossy case.
Abstract: Considering the characteristics of seismic exploration signals, this paper studies image coding technology, coding standards, and algorithms, and puts forward a new hybrid coding scheme for seismic data compression. Based on this scheme, a suite of seismic data compression software has been developed.