Abstract: This paper presents an efficient quadtree-based fractal image coding scheme in the wavelet transform domain, built on the wavelet-based theory of fractal image compression introduced by Davis. In the scheme, zerotrees of wavelet coefficients are used to reduce the number of domain blocks, which lowers the bit cost of representing the location information of the fractal code, and overall entropy-constrained optimization is performed for the decision trees as well as for the sets of scalar quantizers and self-quantizers of wavelet subtrees. Experimental results show that at low bit rates the proposed scheme gives about 1 dB improvement in PSNR over previously reported results.
Abstract: Traditionally, fractal image compression suffers from encoding times measured in hours. In this paper, a flexible classification technique that incorporates characteristics of the human visual system is proposed. It yields an adaptive algorithm that cuts the encoding time down to the order of seconds. Experimental results suggest that the algorithm balances overall encoding performance efficiently, achieving both higher speed and a better PSNR.
Funding: Supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (Grant No. CityU123009).
Abstract: A chaos-based cryptosystem for fractal image coding is proposed. The Renyi chaotic map is employed to determine the order in which the range blocks are processed and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fractal image coding followed by the Advanced Encryption Standard, our scheme offers higher sensitivity to both plaintext and ciphertext at comparable operating efficiency. The keystream generated by the Renyi chaotic map passes the randomness tests set by the United States National Institute of Standards and Technology, and the proposed scheme is therefore sensitive to the key.
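As a rough illustration of the masking idea described above, the sketch below iterates a Renyi-style map (taken here as x → βx mod 1; the map parameters, byte quantization, and XOR masking are all illustrative assumptions, not the paper's cipher) to derive a keystream that masks an encoded byte sequence:

```python
# Toy sketch of chaotic keystream masking (NOT the paper's cryptosystem).
# Key = (x0, beta); both values below are illustrative assumptions.

def renyi_keystream(x0, beta, n):
    """Generate n keystream bytes by iterating the map x -> beta*x mod 1."""
    x, out = x0, []
    for _ in range(n):
        x = (beta * x) % 1.0
        out.append(int(x * 256) & 0xFF)  # quantize the chaotic state to a byte
    return out

def mask(data, key_bytes):
    """XOR-mask a byte sequence with the keystream (the same call unmasks)."""
    return bytes(d ^ k for d, k in zip(data, key_bytes))

code = bytes(range(16))                 # stand-in for an encoded fractal sequence
ks = renyi_keystream(0.37, 3.7, len(code))
cipher = mask(code, ks)
plain = mask(cipher, ks)                # XOR masking is an involution
```

Because XOR masking is its own inverse, decryption reuses the same keystream; sensitivity to the key comes from the chaotic dependence of the stream on (x0, β).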
Abstract: Building on Jacquin's work, this paper presents an adaptive block-based fractal image coding scheme. Firstly, masking functions are used to classify range blocks and to weight the mean square error (MSE) of images. Secondly, an adaptive block partition scheme is introduced by extending the quadtree partition method. Thirdly, a piecewise uniform quantization strategy is applied to quantize the luminance shift. Finally, experimental results are reported and compared with those of Jacquin and Lu to verify the validity of the proposed methods.
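The quadtree partition mentioned above can be sketched in a few lines: a block is split into four quadrants while its activity (plain variance is used here as a stand-in for the paper's masking-function criterion; the threshold and minimum size are illustrative assumptions) exceeds a threshold:

```python
# Minimal adaptive quadtree partition sketch. The split criterion (variance)
# and the thresholds are illustrative assumptions, not the paper's method.

def variance(img, x, y, size):
    vals = [img[j][i] for j in range(y, y + size) for i in range(x, x + size)]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def quadtree(img, x, y, size, thresh, min_size, leaves):
    """Collect the leaf blocks (x, y, size) of the adaptive partition."""
    if size > min_size and variance(img, x, y, size) > thresh:
        h = size // 2
        for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
            quadtree(img, x + dx, y + dy, h, thresh, min_size, leaves)
    else:
        leaves.append((x, y, size))
    return leaves

# A flat image stays one leaf; detail in one quadrant drives further splits.
flat = [[10] * 8 for _ in range(8)]
busy = [row[:] for row in flat]
busy[0][0], busy[1][1] = 200, 0        # strong detail in the top-left quadrant
```

Smooth regions are coded with few large range blocks, while detailed regions get small blocks, which is the point of the adaptive partition.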
Abstract: In this paper, we propose a sparse overcomplete image approximation method based on the overcomplete log-Gabor wavelet, mean shift, and energy concentration. The method selects the necessary wavelet coefficients with a mean-shift-based algorithm and concentrates energy on the selected coefficients. It sparsely approximates the original image and converges faster than the existing local-competition-based method. We then propose a new compression scheme based on this approximation method. The scheme has compression performance similar to JPEG 2000, and images decoded with it appear more pleasing to the human eye than those decoded with JPEG 2000.
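The core mean-shift iteration behind such coefficient selection is simple: move a point repeatedly to the mean of the samples inside a window until it stops moving, which lands it on a local density mode. The 1-D sketch below (data, window width, and flat kernel are illustrative assumptions) shows that iteration:

```python
# 1-D mean shift with a flat kernel: a sketch of the mode-seeking step,
# not the paper's coefficient-selection algorithm.

def mean_shift(samples, start, window=1.0, tol=1e-6, max_iter=200):
    x = start
    for _ in range(max_iter):
        near = [s for s in samples if abs(s - x) <= window]
        m = sum(near) / len(near)      # mean of samples inside the window
        if abs(m - x) < tol:           # converged to a local mode
            break
        x = m
    return x

# A cluster around 5.0 plus an outlier; starting near the cluster finds 5.0.
data = [4.8, 4.9, 5.0, 5.1, 5.2, 9.0]
mode = mean_shift(data, start=4.6)
```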
Funding: The Ministerial Level Advanced Research Foundation.
Abstract: To eliminate floating-point operations in the fast wavelet transform, an integer D9/7 biorthogonal reversible wavelet transform was implemented with the lifting scheme. The lifting-based transform can be computed using only additions and shifts. It improves the quality of the reconstructed image and, thanks to integer arithmetic, greatly reduces the computational complexity, making it suitable for real-time image coding on hardware such as DSPs. Simulation results show that lifting-based SPIHT outperforms traditional wavelet-based SPIHT in both quality and complexity.
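A minimal sketch of one level of reversible integer 9/7 lifting on a 1-D signal follows. It uses the standard CDF 9/7 lifting constants with rounding to integers and periodic boundary extension (both assumptions; the paper's integer D9/7 variant and its shift-based approximations may differ, and the final scaling step K is omitted to keep everything integer). Because each lifting step adds a rounded function of the *other* band, the inverse can subtract exactly the same quantity, giving perfect reconstruction:

```python
import math

# Standard CDF 9/7 lifting constants (assumed; the paper may round/scale
# differently). Periodic extension is assumed at the boundaries.
A, B, G, D = -1.586134342, -0.052980118, 0.882911076, 0.443506852

def fwd97(x):
    """One reversible 9/7 lifting level on an even-length integer signal."""
    s, d = list(x[0::2]), list(x[1::2])
    n = len(s)
    for i in range(n):                       # predict 1
        d[i] += math.floor(A * (s[i] + s[(i + 1) % n]) + 0.5)
    for i in range(n):                       # update 1
        s[i] += math.floor(B * (d[i] + d[(i - 1) % n]) + 0.5)
    for i in range(n):                       # predict 2
        d[i] += math.floor(G * (s[i] + s[(i + 1) % n]) + 0.5)
    for i in range(n):                       # update 2
        s[i] += math.floor(D * (d[i] + d[(i - 1) % n]) + 0.5)
    return s, d                              # integer low-pass, high-pass bands

def inv97(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    s, d, n = s[:], d[:], len(s)
    for i in range(n):
        s[i] -= math.floor(D * (d[i] + d[(i - 1) % n]) + 0.5)
    for i in range(n):
        d[i] -= math.floor(G * (s[i] + s[(i + 1) % n]) + 0.5)
    for i in range(n):
        s[i] -= math.floor(B * (d[i] + d[(i - 1) % n]) + 0.5)
    for i in range(n):
        d[i] -= math.floor(A * (s[i] + s[(i + 1) % n]) + 0.5)
    x = [0] * (2 * n)
    x[0::2], x[1::2] = s, d
    return x
```

In a hardware implementation the multiplications by the constants would themselves be replaced by shift-and-add approximations, which is what makes the scheme DSP-friendly.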
Abstract: A high-performance scalable image coding algorithm is proposed. Its salient features are the ways significant clusters are formed and located. Thanks to the list structure, the new algorithm achieves fine fractional bit-plane coding with negligible additional complexity. Experiments show that it performs comparably to or better than state-of-the-art coders. Furthermore, the flexible codec supports both quality and resolution scalability, which is very attractive in many network applications.
Abstract: Since real-world communication channels are not error free, coded data transmitted over them may be corrupted, and block-based image coding systems are vulnerable to transmission impairment. A best-neighborhood-match method using a genetic algorithm is therefore used to conceal the erroneous blocks. Experimental results show that the search space can be greatly reduced by the genetic algorithm compared with exhaustive search, while good image quality is achieved; the peak signal-to-noise ratios (PSNRs) of the restored images are increased greatly.
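The idea can be sketched as follows: the fitness of a candidate replacement block is how well its surrounding rows match the intact neighborhood of the damaged block, and a small genetic algorithm evolves candidate positions instead of scanning the whole image. The image, block size, cost function, and GA settings below are illustrative assumptions, not the paper's parameters:

```python
import random

# Sketch of GA-based "best neighborhood match" error concealment.
random.seed(0)
N, BS = 32, 4                                    # image size and block size
img = [[y for _ in range(N)] for y in range(N)]  # smooth vertical gradient
Y0, X0 = 12, 12                                  # top-left of the lost block
SPAN = range(1, N - BS)                          # candidate top-left coords

def boundary_cost(y, x):
    """Mismatch between a candidate's border rows and the lost block's border."""
    top = sum(abs(img[y - 1][x + i] - img[Y0 - 1][X0 + i]) for i in range(BS))
    bot = sum(abs(img[y + BS][x + i] - img[Y0 + BS][X0 + i]) for i in range(BS))
    return top + bot

def evolve(gens=30):
    pop = [(y, random.choice(SPAN)) for y in SPAN]   # coarse initial coverage
    for _ in range(gens):
        pop.sort(key=lambda p: boundary_cost(*p))
        parents = pop[: len(pop) // 2]               # truncation selection
        children = []
        while len(parents) + len(children) < len(SPAN):
            (y1, _), (_, x2) = random.sample(parents, 2)
            y = y1                                   # crossover of the genes
            if random.random() < 0.3:                # occasional +/-1 mutation
                y = min(max(y + random.choice((-1, 1)), SPAN[0]), SPAN[-1])
            children.append((y, x2))
        pop = parents + children
    return min(pop, key=lambda p: boundary_cost(*p))

by, bx = evolve()
recovered = [img[by + j][bx:bx + BS] for j in range(BS)]
```

The GA evaluates only a population per generation rather than every position, which is the source of the reported search-space reduction.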
Abstract: A novel paradigm for fractal coding selectively corrects the fractal code for selected domain blocks with an image-adaptive VQ codebook. The codebook is generated from the initial, uncorrected fractal code and is therefore available at the decoder. An efficient trade-off is obtained between incremental performance and bit rate.
Abstract: A mean-match correlation vector quantizer (MMCVQ) is presented for fast image encoding. In this algorithm, a codebook sorted by the mean values of all codewords is generated. During encoding, the high correlation of adjacent image blocks is exploited, and a search range in the sorted codebook is determined from the mean value of the current input vector. To obtain good performance, suitable values of THd and NS are predefined on the basis of experimental experience and an additional distortion limit. Experimental results show that MMCVQ is much faster than full-search VQ, while its encoding quality degradation is only 0.3–0.4 dB relative to full-search VQ.
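The mean-match trick can be sketched directly: sort the codebook by codeword mean once, then for each input vector search only the codewords whose mean lies within a threshold (THd) of the input's mean. The tiny codebook and threshold below are illustrative assumptions:

```python
import bisect

# Sketch of mean-restricted partial-search VQ (the MMCVQ idea in miniature).

def mean(v):
    return sum(v) / len(v)

def build(codebook):
    """Sort the codebook by codeword mean; keep the sorted means for bisect."""
    order = sorted(range(len(codebook)), key=lambda i: mean(codebook[i]))
    sorted_cb = [codebook[i] for i in order]
    return sorted_cb, [mean(c) for c in sorted_cb]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def encode(v, sorted_cb, means, thd):
    """Full search restricted to codewords with |mean difference| <= thd."""
    m = mean(v)
    lo = bisect.bisect_left(means, m - thd)
    hi = bisect.bisect_right(means, m + thd)
    window = range(lo, hi) if lo < hi else range(len(sorted_cb))  # fallback
    return min(window, key=lambda i: dist2(v, sorted_cb[i]))

codebook = [[0, 0], [10, 12], [20, 20], [35, 30], [50, 55]]
cb, ms = build(codebook)
idx = encode([19, 22], cb, ms, thd=8)   # searches only nearby-mean codewords
```

With a well-chosen THd the window contains the true nearest codeword almost always, which is why the quality loss relative to full search stays small.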
Abstract: This paper presents a new VQ+DPCM+DCT image coding algorithm based on Self-Organizing Feature Maps (SOFM). In addition, a frequency-sensitive SOFM (FSOFM) has also been developed. Simulation results show that very good visual quality of the coded image is obtained at 0.252 bits/pixel.
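A minimal 1-D SOFM trained as a VQ codebook illustrates the mechanism: the winning unit (and, early in training, its map neighbors) moves toward each sample, so neighboring map units end up coding similar vectors. The data, map size, schedules, and evenly spaced initial codewords are illustrative assumptions, not the paper's FSOFM configuration:

```python
# Minimal 1-D self-organizing feature map used as a toy VQ codebook trainer.

def train_sofm(data, epochs=60):
    w = [[0.2, 0.2], [0.4, 0.4], [0.6, 0.6], [0.8, 0.8]]  # ordered init (assumed)
    units, dim = len(w), len(data[0])
    for e in range(epochs):
        a = 0.3 * (1 - e / epochs)            # decaying learning rate
        radius = 1 if e < epochs // 2 else 0  # shrinking neighborhood
        for x in data:
            win = min(range(units),
                      key=lambda u: sum((w[u][k] - x[k]) ** 2 for k in range(dim)))
            for u in range(max(0, win - radius), min(units, win + radius + 1)):
                h = 1.0 if u == win else 0.5  # simple neighborhood weighting
                for k in range(dim):
                    w[u][k] += a * h * (x[k] - w[u][k])
    return w

# Two well-separated clusters; after training, some unit lies near each one.
data = [[0.1, 0.1], [0.12, 0.08], [0.9, 0.9], [0.88, 0.92]] * 10
w = train_sofm(data)
```

The frequency-sensitive variant (FSOFM) additionally penalizes units that win too often, spreading codewords more evenly; that penalty is omitted from this sketch.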
Funding: Supported by the National Natural Science Foundation of China (No. 69972027).
Abstract: With advances in display technology, three-dimensional (3-D) imaging systems are becoming increasingly popular. One way of stimulating 3-D perception is to use stereo pairs: two images of the same scene acquired from different perspectives. Since there is inherent redundancy between the two images of a stereo pair, data compression algorithms should be employed to represent stereo pairs efficiently. Existing techniques generally use block-based disparity compensation. To obtain a higher compression ratio, this paper combines wavelet-based mixed-resolution coding with SPT-based disparity compensation to compress stereo image data. Mixed-resolution coding is a perceptually justified technique in which one eye is presented with a low-resolution image and the other with a high-resolution image; psychophysical experiments show that a stereo pair with one high-resolution and one low-resolution image provides almost the same stereo depth as a pair of two high-resolution images. The reference (left) high-resolution image is compressed by a hierarchical wavelet transform followed by vector quantization and Huffman coding. After a two-level wavelet decomposition, a subspace projection technique using fixed-block-size disparity compensation estimation is applied to the low-resolution right and left images. At the decoder, the low-resolution right subimage is estimated using the disparity from the low-resolution left subimage, and a full-size reconstruction is obtained by upsampling by a factor of 4 and filtering with the synthesis low-pass filter. Finally, experimental results show that our scheme achieves a PSNR gain of about 0.92 dB over current block-based disparity compensation coding techniques.
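Fixed-block-size disparity estimation, the building block of the compensation step above, reduces to a 1-D search: for each block of one view, find the horizontal offset into the other view that minimizes a matching cost. The synthetic images, SAD cost, and search range below are illustrative assumptions:

```python
# Sketch of fixed-block-size horizontal disparity estimation between a stereo
# pair, using the sum of absolute differences (SAD) as the matching cost.

def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def block_disparity(left, right, y, x, bs, max_d):
    """Best disparity d such that right[y][x:] matches left[y][x+d:]."""
    def cost(d):
        return sum(sad(left[y + j][x + d: x + d + bs],
                       right[y + j][x: x + bs]) for j in range(bs))
    return min(range(0, max_d + 1), key=cost)

# Synthetic pair: the right view is the left view shifted 3 pixels (disparity 3).
W = 24
left = [[(x * 7 + y * 3) % 13 for x in range(W)] for y in range(8)]
right = [[left[y][x + 3] if x + 3 < W else 0 for x in range(W)] for y in range(8)]
d = block_disparity(left, right, y=2, x=4, bs=4, max_d=6)
```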
Abstract: This paper presents a novel coding method, based on fuzzy vector quantization, for images corrupted by Gaussian white noise. By suppressing the high-frequency subbands of the wavelet-transformed image, the noise is significantly reduced, and the result is coded with fuzzy vector quantization. Experimental results show that the method can both achieve a high compression ratio and remove noise effectively.
Abstract: A new remote sensing image coding scheme based on the wavelet transform and classified vector quantization (CVQ) is proposed. The original image is first decomposed by the DWT into a three-level hierarchy of 10 subimages. The lowest-frequency subimage is compressed by scalar quantization and ADPCM. The high-frequency subimages are compressed by CVQ, which exploits the similarity among different resolutions while improving edge quality and reducing computational complexity. Experimental results show that the proposed scheme outperforms JPEG, with a reconstructed-image PSNR of 31–33 dB at a rate of 0.2 bpp.
Funding: Supported by the National Basic Research Program of China (Grant No. 2006CB303102) and the National Natural Science Foundation of China (Grant Nos. 60573114, 60533030 and 60573181).
Abstract: A new fast two-dimensional 8×8 discrete cosine transform (2-D 8×8 DCT) algorithm, based on the characteristics of the basis images of the 2-D DCT, is presented. The new algorithm computes each DCT coefficient independently in turn, which makes it well suited to pruned 2-D DCT computation that discards any number of high-frequency components. The proposed pruning algorithm is more efficient than existing pruned 2-D DCT algorithms in terms of the number of arithmetic operations, especially the number of multiplications.
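Why coefficient-by-coefficient computation enables pruning can be seen from the textbook definition: each 2-D DCT-II coefficient is the inner product of the block with one basis image, so a pruned transform simply skips the basis images it does not need. The sketch below is the direct definition, not the paper's fast algorithm:

```python
import math

# Direct 2-D DCT-II via basis images: only the requested (u, v) coefficients
# are computed, which is the essence of pruning. O(N^2) per coefficient;
# the paper's contribution is doing this with far fewer multiplications.

N = 8

def c(k):
    return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)

def basis(u, v):
    """The (u, v) DCT-II basis image for an N x N block."""
    return [[c(u) * c(v)
             * math.cos((2 * i + 1) * u * math.pi / (2 * N))
             * math.cos((2 * j + 1) * v * math.pi / (2 * N))
             for j in range(N)] for i in range(N)]

def pruned_dct(block, keep):
    """Compute only the coefficients listed in `keep` (e.g. a low-freq corner)."""
    out = {}
    for (u, v) in keep:
        b = basis(u, v)
        out[(u, v)] = sum(block[i][j] * b[i][j]
                          for i in range(N) for j in range(N))
    return out
```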
Abstract: A modular architecture for the two-dimensional (2-D) discrete wavelet transform (DWT) is designed. Image data can be wavelet-transformed in real time, and the structure scales easily to higher levels of DWT. A fast zerotree image coding (FZIC) algorithm is proposed, using a simple sequential scan order and two flag maps, and the VLSI structure for FZIC is presented. By combining the 2-D DWT and FZIC, a wavelet image coder is designed. The coder was programmed, simulated, synthesized, and successfully verified on an ALTERA CPLD.
Abstract: Drawing on recent research in wavelet-based image compression, we put forward an adaptive wavelet decomposition strategy: whether a sub-image is decomposed further is decided by its energy, defined by a chosen criterion. From this we derive the adaptive wavelet decomposition tree (AWDT) and a way to make the compression ratio adjustable. Based on the structure of the AWDT, the paper also describes the strategies used to handle the different sub-images during quantization and coding of the wavelet coefficients. Experiments show that the algorithm adapts to a wide variety of images and improves recovered image quality even though the compression ratio is higher and adjustable. At comparable compression ratios, both the subjective visual quality and the PSNR of the algorithm are better than those of JPEG.
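The energy-driven decision can be sketched with a one-level Haar transform standing in for the wavelet (the energy definition, threshold, and Haar stand-in are illustrative assumptions, not the paper's criterion): each subband is decomposed further only if its mean energy exceeds a threshold, and the pattern of decisions is the AWDT:

```python
# Sketch of the AWDT decision rule: recurse into a subband only when its
# energy is high enough. Haar is used as a simple stand-in wavelet.

def haar2(block):
    """One 2-D Haar level: returns the LL, LH, HL, HH quarters."""
    n = len(block) // 2
    ll = [[0] * n for _ in range(n)]; lh = [[0] * n for _ in range(n)]
    hl = [[0] * n for _ in range(n)]; hh = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            a, b = block[2 * i][2 * j], block[2 * i][2 * j + 1]
            c, d = block[2 * i + 1][2 * j], block[2 * i + 1][2 * j + 1]
            ll[i][j] = (a + b + c + d) / 4
            lh[i][j] = (a - b + c - d) / 4
            hl[i][j] = (a + b - c - d) / 4
            hh[i][j] = (a - b - c + d) / 4
    return {"LL": ll, "LH": lh, "HL": hl, "HH": hh}

def energy(band):
    return sum(v * v for row in band for v in row) / (len(band) ** 2)

def awdt(block, thresh, min_size=2):
    """Decompose recursively; the returned dict records which bands expanded."""
    tree = {}
    for name, band in haar2(block).items():
        if len(band) >= min_size and energy(band) > thresh:
            tree[name] = awdt(band, thresh, min_size)   # decompose further
        else:
            tree[name] = None                           # leaf: keep as-is
    return tree
```

For a flat image only the low-pass chain is expanded, matching the intuition that detail bands with little energy need no further splitting.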
Funding: Supported by the National Science Foundation of China (60872109) and the Program for New Century Excellent Talents in University (NCET-06-0900).
Abstract: To achieve a high compression ratio together with high-quality reconstructed images, an effective image compression scheme named irregular segmentation region coding based on spiking cortical model (ISRCS) is presented. The scheme is region-based and focuses on two issues. First, an appropriate segmentation algorithm partitions an image into irregular regions with tidy contours, retaining the crucial regions corresponding to objects while eliminating many tiny fragments; the irregular regions and the contours are then coded by different methods. The second issue is the coding of contours, for which an efficient novel chain code is employed. The scheme seeks a compromise between reconstructed image quality and compression ratio. Experiments show higher performance than other compression technologies in terms of reconstructed image quality, compression ratio, and time consumption.
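Chain coding stores a contour as a start point plus one small symbol per step instead of absolute coordinates. The sketch below is the plain 8-direction Freeman chain code, a baseline for such contour coders (the paper's "novel chain code" refines this idea and is not reproduced here):

```python
# Baseline 8-direction Freeman chain code for an 8-connected contour.
# Each step needs only 3 bits instead of two absolute coordinates.

# Neighbor offsets, indexed counterclockwise starting from "east".
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def chain_encode(points):
    """Encode a contour given as successive 8-connected (x, y) points."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRS.index((x1 - x0, y1 - y0)))
    return points[0], codes

def chain_decode(start, codes):
    pts = [start]
    for c in codes:
        dx, dy = DIRS[c]
        x, y = pts[-1]
        pts.append((x + dx, y + dy))
    return pts

# A small square contour, traversed with y growing downward.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]
start, codes = chain_encode(square)
```

Differential variants code the *turn* between successive directions, which entropy-codes even better on smooth contours.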
Funding: Funded by the National Natural Science Foundation of China (No. 60671037), the Ningbo Industry Foundation (No. 2007B10051), the Zhejiang Province Key Industry Foundation (No. 2006C11200), the Scientific Research Fund of Zhejiang Provincial Education Department (No. 20070956, No. 20070978, No. 20061661), and the Ningbo University Foundation (XK0610031).
Abstract: Multiple description coding has recently been proposed as a joint source and channel coding approach to robust image transmission over unreliable networks; it offers a range of tradeoffs between signal redundancy and transmission robustness. In this letter, a novel pre- and post-processing method with flexible redundancy insertion is presented for polyphase downsampling multiple description coding. The method can be implemented as pre- and post-processing around any standard image or video codec, which is an obvious advantage. Simulation results show that the approach reduces computational complexity while providing flexible redundancy insertion that makes the system robust to any packet-loss situation over different networks.
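Polyphase downsampling MDC can be sketched in a few lines: the image is split into descriptions by pixel parity, each description is sent (and coded) independently, and a lost description is estimated from the surviving one by interpolation. The column-parity split and the averaging interpolator below are illustrative assumptions:

```python
# Sketch of two-description polyphase downsampling MDC with simple
# neighbor-averaging concealment when one description is lost.

def split(img):
    """Two descriptions: even columns and odd columns of each row."""
    return [row[0::2] for row in img], [row[1::2] for row in img]

def merge(d0, d1):
    out = []
    for r0, r1 in zip(d0, d1):
        row = [0] * (len(r0) + len(r1))
        row[0::2], row[1::2] = r0, r1   # re-interleave the polyphase samples
        out.append(row)
    return out

def conceal_from_even(d0, width):
    """Estimate odd columns by averaging even neighbors (edge: replicate)."""
    d1 = []
    for r0 in d0:
        row = []
        for i in range(width // 2):
            left = r0[i]
            right = r0[i + 1] if i + 1 < len(r0) else r0[i]
            row.append((left + right) // 2)
        d1.append(row)
    return d1

img = [[x + 10 * y for x in range(8)] for y in range(4)]
d0, d1 = split(img)
lossless = merge(d0, d1)                        # both descriptions received
estimate = merge(d0, conceal_from_even(d0, 8))  # description 1 lost
```

Redundancy insertion in the full scheme amounts to controlling how much information the descriptions share beyond this bare polyphase split.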
Funding: Supported by the National Natural Science Foundation of China (No. 11675078), the Primary Research and Development Plan of Jiangsu Province (No. BE2017729), and the Foundation of Graduate Innovation Center in NUAA (No. kfjj20190614).
Abstract: With advancements in nuclear energy, methods that can accurately obtain the spatial distribution of radioactive sources have become essential for nuclear safety. Coded aperture imaging is widely used because it provides two-dimensional distribution information of radioactive sources. The coded array is a major component of a coded aperture gamma camera and affects the camera's key performance parameters. Commonly used coded arrays, such as uniformly redundant arrays (URAs) and modified uniformly redundant arrays (MURAs), have a prime number of rows or columns and may therefore waste detector pixels. A 16×16 coded array was designed on the basis of an existing 16×16 multi-pixel position-sensitive cadmium zinc telluride detector; the digital signal-to-noise ratio (SNR) of the point spread function at the center of the array is 25.67. Furthermore, Monte Carlo camera models and experimental devices based on the rank-13 MURA and the rank-16 URA were constructed. At the same angular resolution, the field of view of the rank-16 URA is 1.53 times that of the rank-13 MURA. Simulations (Am-241, Co-57, Ir-192, Cs-137) and experiments (Co-57) were conducted to compare the imaging performance of the rank-16 URA and the rank-13 MURA. The contrast-to-noise ratio of the image reconstructed with the rank-16 array is only slightly lower than that of the rank-13 MURA, and as the photon energy increases the gap becomes almost negligible.
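For background on the rank-13 MURA mentioned above, the standard quadratic-residue construction (Gottesman–Fenimore) for a prime rank p can be sketched directly; this is the textbook mask, not the paper's custom 16×16 design:

```python
# Standard MURA mask construction for prime rank p: first row closed, first
# column (below it) open, and interior cells open where the Legendre-symbol
# product of the row and column indices is +1.

def mura(p):
    """Return a p x p 0/1 MURA mask (1 = open element); p must be prime."""
    residues = {(k * k) % p for k in range(1, p)}        # quadratic residues mod p
    c = [1 if i in residues else -1 for i in range(p)]   # Legendre-like symbol
    a = [[0] * p for _ in range(p)]
    for i in range(p):
        for j in range(p):
            if i == 0:
                a[i][j] = 0
            elif j == 0:
                a[i][j] = 1
            elif c[i] * c[j] == 1:
                a[i][j] = 1
    return a

m = mura(13)
open_count = sum(map(sum, m))   # a rank-p MURA has (p*p - 1) / 2 open elements
```

The (p² − 1)/2 open fraction (84 of 169 for p = 13) is what gives MURAs their near-50% throughput together with a flat decoding sidelobe.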