Abstract: Image denoising has become one of the major image enhancement techniques that form the basis of image processing. Because of inconsistencies in the equipment that produces these signals, medical images in particular tend to require such techniques. In practice, images rarely contain a single type of noise; instead, they contain multiple noise distributions spread over several indistinct regions. This paper presents an image denoising method that uses metaheuristics to perform noise identification. Adaptive block selection is used to identify and correct the noise contained in each block. Although the system uses a block selection scheme, corrections are applied on a pixel-by-pixel basis rather than to entire blocks; hence image accuracy is preserved. Particle swarm optimization (PSO) is used to identify the noise distribution, and appropriate noise correction techniques are then applied to denoise the image. Experiments were conducted with salt-and-pepper noise, Gaussian noise, and a combination of both in the same image. The proposed method performed effectively at noise levels up to 0.5 and produced PSNR values ranging from 20 to 30 dB in most cases. Excellent reduction rates were observed for salt-and-pepper noise and moderate reduction rates for Gaussian noise. The experimental results show that the proposed system is applicable to a wide range of domain-specific image denoising scenarios, such as medical imaging and mammography.
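The abstract does not spell out the algorithm, but the block-wise correction idea can be sketched in Python. In the sketch below, a simple extreme-pixel heuristic stands in for the PSO-based noise classifier, and only the pixels flagged as corrupted are altered, mirroring the pixel-by-pixel correction described above; the thresholds, block size, and choice of filters are illustrative assumptions rather than the paper's actual parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def classify_block_noise(block, sp_fraction=0.05):
    """Crude stand-in for the paper's PSO-based classifier: a high fraction
    of extreme-valued pixels suggests salt-and-pepper noise, otherwise the
    block is treated as Gaussian-corrupted (thresholds assume uint8 data)."""
    extremes = np.mean((block <= 5) | (block >= 250))
    return "salt_pepper" if extremes > sp_fraction else "gaussian"

def denoise_block(block):
    """Correct noise pixel by pixel rather than replacing the whole block."""
    if classify_block_noise(block) == "salt_pepper":
        med = median_filter(block, size=3)
        mask = (block <= 5) | (block >= 250)   # only the corrupted pixels change
        out = block.copy()
        out[mask] = med[mask]
        return out
    return gaussian_filter(block, sigma=1.0)    # mild smoothing for Gaussian noise

def denoise_image(img, block_size=32):
    """Adaptive block selection: each block is classified and corrected separately."""
    out = img.copy()
    for y in range(0, img.shape[0], block_size):
        for x in range(0, img.shape[1], block_size):
            out[y:y+block_size, x:x+block_size] = denoise_block(
                img[y:y+block_size, x:x+block_size])
    return out
```

The same structure would accommodate a PSO-driven classifier: the per-block decision function is the only piece that would need to change.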
Abstract: Fractal image compression is a completely new method of compressing images by searching for and exploiting the self-similarity of the whole image. Fractal Block Coding (FBC) is a practical fractal coding scheme, but its encoding speed is frustratingly slow. In this paper, we classify the image blocks with a Classified Vector Quantization (CVQ) technique and present an Adaptive Block Truncation Coding (ABTC) scheme to process the mid-range blocks in the image. With this method, the encoding time is reduced to one forty-fifth of that of the ordinary FBC method, with little change in compression ratio and only a slight decrease in coded image quality.
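For readers unfamiliar with the baseline, plain Block Truncation Coding, which ABTC adapts for mid-range blocks, represents each block by its mean, standard deviation, and a one-bit-per-pixel map. The Python sketch below implements only this standard BTC step; the CVQ classification and the adaptive rules of the paper's ABTC scheme are not reproduced.

```python
import numpy as np

def btc_encode(block):
    """Standard Block Truncation Coding for one block: keep the block mean,
    standard deviation (via two output levels) and a 1-bit-per-pixel map."""
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q, n = int(bitmap.sum()), block.size
    if q in (0, n):                 # flat block: a single level suffices
        return m, m, bitmap
    lo = m - s * np.sqrt(q / (n - q))        # level for pixels below the mean
    hi = m + s * np.sqrt((n - q) / q)        # level for pixels at or above the mean
    return lo, hi, bitmap

def btc_decode(lo, hi, bitmap):
    """Reconstruct the block from the two levels and the bit map."""
    return np.where(bitmap, hi, lo)
```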
Funding: This work is supported by the National Science Fund for Distinguished Young Scholars of China under Grant No. 61125102 and the Key Program of the National Natural Science Foundation of China under Grant No. 61133008.
Abstract: We describe an efficient and easily applicable data deduplication framework with heuristic-prediction-based adaptive block skipping for real-world datasets such as disk images, which saves deduplication-related overheads and improves deduplication throughput while maintaining good deduplication efficiency. Under the framework, deduplication operations are skipped for data chunks determined to be likely non-duplicates via heuristic prediction, in conjunction with a hit-and-matching extension process for duplicate identification within skipped blocks and a hysteresis-based hash indexing process that updates the hash indices for re-encountered skipped chunks. For performance evaluation, the proposed framework was integrated and implemented in the existing data domain and sparse indexing deduplication algorithms. Experimental results based on a real-world dataset of 1.0 TB of disk images showed that deduplication-related overheads were significantly reduced by adaptive block skipping, leading to a 30%-80% improvement in deduplication throughput when deduplication metadata were stored on disk for the data domain algorithm, and 25%-40% RAM space savings with a 15%-20% improvement in deduplication throughput when an in-RAM sparse index was used for sparse indexing. In both cases, the corresponding reduction in deduplication ratio was below 5%.
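The skipping idea can be illustrated with a toy Python sketch. The miss-streak trigger, the skip-window length, and the SHA-1 fingerprints below are illustrative assumptions rather than values from the paper, and the hit-and-matching extension and hysteresis-based re-indexing of skipped chunks are omitted.

```python
import hashlib

def chunk_fingerprint(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

def dedup_with_skipping(chunks, skip_window=4, miss_trigger=3):
    """Toy adaptive block skipping: after a run of chunks that turned out to
    be non-duplicates, the next `skip_window` chunks are stored without an
    index lookup, trading a little deduplication ratio for fewer lookups."""
    index = {}          # fingerprint -> chunk id of first occurrence
    stored = []         # unique (or skipped) chunks actually written
    miss_streak = 0
    to_skip = 0
    for cid, chunk in enumerate(chunks):
        if to_skip > 0:                     # predicted non-duplicate: skip lookup
            to_skip -= 1
            stored.append(chunk)
            continue
        fp = chunk_fingerprint(chunk)
        if fp in index:                     # duplicate found, nothing written
            miss_streak = 0
        else:                               # new chunk: store and index it
            index[fp] = cid
            stored.append(chunk)
            miss_streak += 1
            if miss_streak >= miss_trigger:  # heuristic: expect more non-duplicates
                to_skip = skip_window
                miss_streak = 0
    return stored, index
```

In a real system the index lives on disk or in a sparse in-RAM structure, which is exactly where the skipped lookups save time.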
Abstract: Many networks stack a large number of residual blocks, deepening the network and improving its performance through short residual connections, long residual connections, and dense connections. However, because they do not consider the different contributions that features at different depths make to the network, these designs cannot evaluate the importance of different depth features. To solve this problem, this paper proposes an adaptive densely residual network (ADRNet) for single image super-resolution. ADRNet evaluates the distributions of features at different depths and learns more representative features. An adaptive densely residual block (ADRB) was designed, combining three residual blocks (RBs) with added dense connections. It learns an attention score for each dense connection through adaptive dense connections, and the attention score reflects the importance of the features of each RB. To further enhance the performance of the ADRB, a multi-direction attention block (MDAB) was introduced to obtain multi-directional context information. Comparative experiments show that the proposed ADRNet is superior to existing methods, and ablation experiments show that evaluating features at different depths helps to improve network performance.
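One plausible reading of the ADRB, sketched in PyTorch below, is a stack of residual blocks whose outputs are fused through learned, softmax-normalised attention scores, one per dense connection. The channel count, the softmax normalisation, and the omission of the multi-direction attention block are assumptions made for brevity, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class AdaptiveDenseResidualBlock(nn.Module):
    """Sketch of the ADRB idea: three residual blocks whose outputs are
    combined through learned attention scores, so the network can weight
    shallow versus deep features; the MDAB is not included."""
    def __init__(self, channels, n_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList([ResidualBlock(channels) for _ in range(n_blocks)])
        self.scores = nn.Parameter(torch.zeros(n_blocks))   # one score per dense connection

    def forward(self, x):
        feats, out = [], x
        for blk in self.blocks:
            out = blk(out)
            feats.append(out)
        w = torch.softmax(self.scores, dim=0)                # normalised attention scores
        fused = sum(wi * f for wi, f in zip(w, feats))       # attention-weighted dense fusion
        return x + fused
```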
Abstract: When the saturation degree (SD) of space-borne SAR raw data is high, the performance of conventional block adaptive quantization (BAQ) deteriorates noticeably. To overcome this drawback, this paper studies the mapping, taken from the original reference, between the average signal magnitude (ASM) and the standard deviation of the input signal (SDIS) to the A/D converter. It then points out the mistake in that mapping, introduces the concept of the standard deviation of the output signal (SDOS) of the A/D converter, and derives the mapping between the ASM and the SDOS. A Monte Carlo experiment shows that neither of these two mappings is optimal over the whole range of SD. This paper therefore proposes a piecewise linear mapping together with a searching algorithm over the whole range of SD. For the linear part, the paper gives the proof and the analytical value of k; for the nonlinear part, it uses the searching algorithm mentioned above to find the corresponding value of k. Experimental results based on simulated and real data show that the new algorithm outperforms conventional BAQ when the raw data are heavily saturated.
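Conventional BAQ, the baseline the paper improves on, can be sketched as follows. The ASM-to-sigma mapping below is the textbook relation for an unsaturated Gaussian signal (E|x| = sigma * sqrt(2/pi)); the paper's piecewise linear mapping and its fitted k values are not reproduced, so this mapping is only a placeholder, not the proposed algorithm.

```python
import numpy as np

# 2-bit Lloyd-Max quantizer for a zero-mean, unit-variance Gaussian source
_THRESHOLDS = np.array([-0.9816, 0.0, 0.9816])
_LEVELS = np.array([-1.5104, -0.4528, 0.4528, 1.5104])

def asm_to_sigma(asm):
    """Linear ASM -> sigma mapping valid for an unsaturated Gaussian signal.
    The paper replaces this with a piecewise linear mapping fitted against the
    A/D output statistics to cope with heavy saturation (not reproduced here)."""
    return asm * np.sqrt(np.pi / 2.0)

def baq_2bit(samples, block_size=128):
    """Block adaptive quantization: estimate one scale per block from the
    average signal magnitude, normalise, and apply the fixed 2-bit quantizer."""
    out = np.empty_like(samples, dtype=float)
    for start in range(0, len(samples), block_size):
        block = samples[start:start + block_size]
        sigma = max(asm_to_sigma(np.mean(np.abs(block))), 1e-12)  # guard all-zero blocks
        codes = np.searchsorted(_THRESHOLDS, block / sigma)       # code index 0..3
        out[start:start + block_size] = _LEVELS[codes] * sigma    # dequantized values
    return out
```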