Funding: Sponsored by the National Key R&D Program of China (Grant No. 2018YFB1308700) and the Research and Development Project of Key Core Technology and Common Technology in Shanxi Province (Grant Nos. 2020XXX001, 2020XXX009).
Abstract: Histogram equalization is a traditional algorithm for improving image contrast, but it comes at the cost of mean brightness shift and detail loss. To solve these problems, a novel approach that processes foreground pixels and background pixels independently is proposed and investigated. Since details are mainly contained in the foreground, a weighted coupling of histogram equalization and the Laplace transform is adopted to balance contrast enhancement and detail preservation. The weighting factors of the image foreground and background are determined by the amount of their respective information. The proposed method was applied to images acquired from the CVG-UGR and USC-SIPI image databases and then compared with other methods, such as clipping histogram spikes, histogram addition, and non-linear transformation, to verify its validity. Results show that the proposed algorithm can effectively enhance contrast without introducing distortions, while preserving mean brightness and details.
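To make the baseline technique concrete, here is a minimal NumPy sketch of classic (global) histogram equalization via the normalized cumulative distribution function. This is only the traditional algorithm the abstract starts from, not the paper's weighted foreground/background coupling:

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Map gray levels through the normalized CDF (classic HE)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                      # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]

# A low-contrast ramp confined to [100, 140] spreads over the full range.
img = np.tile(np.linspace(100, 140, 64, dtype=np.uint8), (64, 1))
out = equalize_histogram(img)
```

Because the mapping depends only on pixel counts, bins with many pixels are pulled far apart, which is exactly what causes the brightness shift and detail loss the paper addresses.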
Funding: Project supported by the IT R&D Program of MOTIE/KEIT (No. 10041610).
Abstract: We propose a method for histogram equalization using supplement sets to improve the performance of speaker recognition when the training and test utterances are very short. The supplement sets are derived using outputs of selection or clustering algorithms from the background speakers' utterances. The proposed approach is used as a feature normalization method for building histograms when there are insufficient input utterance samples. In addition, the proposed method is used as an i-vector normalization method in an i-vector-based probabilistic linear discriminant analysis (PLDA) system, which is the current state of the art for speaker verification. The ranks of sample values for histogram equalization are estimated in ascending order from both the input utterances and the supplement set. New ranks are obtained by computing the sum of the different kinds of ranks. Subsequently, the proposed method determines the cumulative distribution function of the test utterance using the newly defined ranks. The proposed method is compared with conventional feature normalization methods, such as cepstral mean normalization (CMN), cepstral mean and variance normalization (MVN), histogram equalization (HEQ), and the European Telecommunications Standards Institute (ETSI) advanced front-end methods. In addition, performance is compared for a case in which the greedy selection algorithm is used with fuzzy C-means and K-means algorithms. The YOHO and Electronics and Telecommunications Research Institute (ETRI) databases are used in an evaluation in the feature space. The test sets are simulated with the Opus VoIP codec. We also use the 2008 National Institute of Standards and Technology (NIST) speaker recognition evaluation (SRE) corpus for the i-vector system. The results of the experimental evaluation demonstrate that the average system performance is improved when the proposed method is used, compared to the conventional feature normalization methods.
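A rough sketch of the underlying idea, rank-based histogram equalization where a supplement set is pooled with the short input before ranks are computed, might look as follows. This is a generic illustration, not the paper's exact rank-summation scheme; the standard-normal target distribution is an assumption (common in HEQ for speech features), and all variable names are hypothetical:

```python
import numpy as np
from statistics import NormalDist

def heq_with_supplement(x, supplement):
    """Rank each input sample within the pooled (input + supplement) set,
    turn ranks into an empirical CDF, then warp the input to a
    standard-normal target distribution."""
    pooled = np.concatenate([x, supplement])
    order = pooled.argsort()
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(pooled) + 1)
    cdf = ranks[: len(x)] / (len(pooled) + 1)   # ranks of the input part only
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in cdf])

rng = np.random.default_rng(0)
x = rng.uniform(5.0, 6.0, size=20)      # short "utterance": too few samples
supp = rng.uniform(4.0, 7.0, size=200)  # background-speaker supplement set
z = heq_with_supplement(x, supp)
```

The point of the supplement set is visible here: with only 20 samples, the empirical CDF of `x` alone would be very coarse, while the pooled ranks give a much smoother mapping.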
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 61662039, in part by the Jiangxi Key Natural Science Foundation under No. 20192ACBL20031, in part by the Startup Foundation for Introducing Talent of Nanjing University of Information Science and Technology (NUIST) under Grant No. 2019r070, and in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) Fund.
Abstract: Recent contrast enhancement (CE) methods, with a few exceptions, predominantly focus on enhancing gray-scale images. This paper proposes a bi-histogram shifting contrast enhancement for color images based on the RGB (red, green, and blue) color model. The proposed method selects the two highest bins and the two lowest bins from the image histogram, and performs an equal number of bidirectional histogram shifting repetitions on each RGB channel while embedding secret data into the marked images. In each histogram shifting repetition, the proposed method simultaneously performs both right histogram shifting (RHS) and left histogram shifting (LHS) to embed data and split the highest bins, while combining the lowest bins with their neighbors to achieve histogram equalization (HE). The smallest of the three RGB channels' maximum numbers of histogram shifting repetitions is used as the default number of repetitions performed to enhance the original images. Compared with an existing contrast enhancement method for color images and evaluated with the PSNR, SSIM, RCE, and RMBE quality assessment metrics, the experimental results show that the proposed method's enhanced images are visually and qualitatively superior, with a more evenly distributed histogram. The proposed method achieves higher embedding capacities and embedding rates in all images, with an average increase in embedding capacity of 52.1%.
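As a simplified illustration of the building block involved, here is one right-histogram-shifting (RHS) pass on a single channel: the highest bin is split by shifting everything above it up one level and spending one payload bit per peak pixel. This is a minimal sketch of classic histogram-shifting data embedding, not the paper's full bidirectional RHS/LHS scheme, and the function name and test data are hypothetical:

```python
import numpy as np

def shift_embed_right(channel, bits):
    """One RHS pass: open a gap above the peak bin, then embed one bit
    per peak pixel (bit 0: stay at peak; bit 1: move into the gap)."""
    hist = np.bincount(channel.ravel(), minlength=256)
    peak = int(hist[:255].argmax())          # keep headroom at 255
    out = channel.astype(np.int16)
    out[out > peak] += 1                     # shift right: gap at peak + 1
    flat = out.ravel()                       # view, edits apply in place
    carriers = np.flatnonzero(flat == peak)
    for idx, bit in zip(carriers, bits):     # embed into peak pixels
        flat[idx] += bit
    return flat.reshape(channel.shape).astype(np.uint8), peak

rng = np.random.default_rng(1)
ch = rng.integers(50, 200, size=(32, 32), dtype=np.uint8)
marked, peak = shift_embed_right(ch, bits=[1, 0, 1, 1])
```

Every pixel moves by at most one gray level per pass, which is why repeated shifting both embeds data and gradually flattens (equalizes) the histogram.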
Abstract: Alzheimer's Disease (AD) is a progressive neurological disease. Early diagnosis of this illness using conventional methods is very challenging. Deep Learning (DL) is one of the finest solutions for improving diagnostic performance and forecast accuracy. The disease's widespread distribution and elevated mortality rate demonstrate its significance in both the older-onset and younger-onset age groups. In light of research investigations, it is vital to consider age as one of the key criteria when choosing subjects. Younger-onset subjects are more susceptible to rapid deterioration than older-onset subjects, and the proposed investigation therefore concentrated on the younger onset. The research used deep learning models and neuroimages to automatically diagnose and categorize the disease at its early stages. The proposed work is executed in three steps. First, the 3D input images undergo pre-processing using Wiener filtering and Contrast Limited Adaptive Histogram Equalization (CLAHE). Next, Transfer Learning (TL) models extract features, which are subsequently compressed using cascaded Autoencoders (AE). The final phase uses a Deep Neural Network (DNN) to classify the phases of AD. The model was trained and tested to classify the five stages of AD. The ensemble of ResNet-18 and a sparse autoencoder with a DNN achieved an accuracy of 98.54%. The method is compared with state-of-the-art approaches to validate its efficacy and performance.
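The contrast-limited part of CLAHE can be sketched in isolation: clip histogram spikes at a ceiling and redistribute the excess before equalizing. The sketch below applies the clipping globally for brevity; full CLAHE does this per tile with bilinear interpolation between tile mappings, and the `clip_limit` convention here (a multiple of the mean bin height) is an assumption:

```python
import numpy as np

def clipped_histogram_equalization(img, clip_limit=4.0, levels=256):
    """Contrast-limited HE on the whole image: clip histogram bins at
    clip_limit * (mean bin height), redistribute the excess uniformly,
    then equalize with the clipped CDF."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(np.float64)
    ceiling = clip_limit * hist.mean()
    excess = np.maximum(hist - ceiling, 0).sum()
    hist = np.minimum(hist, ceiling) + excess / levels  # redistribute
    cdf = np.cumsum(hist)
    cdf /= cdf[-1]
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]

# A narrow-range image still gains contrast, but less violently than
# unclipped HE would give it.
img = np.tile(np.linspace(90, 110, 32, dtype=np.uint8), (32, 1))
out = clipped_histogram_equalization(img)
```

Clipping bounds the slope of the mapping, which is what keeps CLAHE from amplifying noise in near-uniform regions of medical images.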
Abstract: Lung cancer is one of the hazardous diseases that must be detected in its earlier stages to provide better treatment and clinical support to patients. For lung cancer diagnosis, computed tomography (CT) scan images must be processed with image processing techniques, and an effective classification process is required for appropriate diagnosis. In the present scenario of medical data processing, cancer detection is very time-consuming and demands exactitude. To address this, this paper develops an improved model for lung cancer segmentation and classification using a genetic algorithm. In the model, the input CT images are pre-processed with an adaptive median filter and an average filter. The filtered images are enhanced with histogram equalization, and the ROI (Region of Interest) cancer tissues are segmented using the Guaranteed Convergence Particle Swarm Optimization technique. For classification of the images, Probabilistic Neural Network (PNN) based classification is used. The experimentation is carried out by simulating the model in MATLAB with input CT lung images from the LIDC-IDRI (Lung Image Database Consortium - Image Database Resource Initiative) benchmark dataset. The results confirm that the proposed model outperforms existing methods, giving accurate classification results with minimal processing time.
Abstract: The automatic temperature testing system for meteorological measurement produces defective images. To solve problems such as noise and insufficient contrast, a research program for image pretreatment was put forward: median filtering, histogram equalization, and image binarization were used to remove noise and enhance the images. Results showed that the feature points were clear and accurate after the experiment. This simulation experiment prepares for the subsequent recognition process.
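Two of the pretreatment steps named above, median filtering and binarization, can be sketched in a few lines of NumPy. This is a generic illustration under assumed parameters (3x3 window, mean-based threshold), not the paper's exact pipeline:

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter via edge padding and a stacked-neighbor median;
    removes impulse (salt-and-pepper) noise."""
    p = np.pad(img, 1, mode="edge")
    stack = np.stack([p[r:r + img.shape[0], c:c + img.shape[1]]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0).astype(img.dtype)

def binarize(img):
    """Global binarization at the mean gray level (Otsu's method would
    be the usual refinement)."""
    return (img > img.mean()).astype(np.uint8) * 255

img = np.full((16, 16), 40, dtype=np.uint8)
img[8:, :] = 200                 # two flat regions
img[3, 3] = 255                  # one impulse-noise pixel
clean = median_filter_3x3(img)   # the impulse is removed
binary = binarize(clean)         # dark region -> 0, bright region -> 255
```

The median filter precedes binarization because a single impulse pixel would otherwise survive thresholding as a spurious feature point.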
Abstract: Quantized neural networks (QNNs), which use low-bitwidth numbers for representing parameters and performing computations, have been proposed to reduce the computation complexity, storage size, and memory usage. In QNNs, parameters and activations are uniformly quantized, such that the multiplications and additions can be accelerated by bitwise operations. However, the distributions of parameters in neural networks are often imbalanced, such that uniform quantization determined from extremal values may underutilize the available bitwidth. In this paper, we propose a novel quantization method that ensures balanced distributions of quantized values. Our method first recursively partitions the parameters by percentiles into balanced bins, and then applies uniform quantization. We also introduce computationally cheaper approximations of percentiles to reduce the computation overhead. Overall, our method improves the prediction accuracies of QNNs without introducing extra computation during inference, has negligible impact on training speed, and is applicable to both convolutional neural networks and recurrent neural networks. Experiments on standard datasets, including ImageNet and Penn Treebank, confirm the effectiveness of our method. On ImageNet, the top-5 error rate of our 4-bit quantized GoogLeNet model is 12.7%, which is superior to the state of the art for QNNs.
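The percentile-based balancing can be sketched as follows. For brevity this computes all bin edges in one shot with quantiles rather than by recursive median splits, and dequantizes each bin to its mean; both simplifications are assumptions relative to the paper's method:

```python
import numpy as np

def balanced_quantize(w, bits=2):
    """Percentile-based balanced quantization: split the weights at
    quantile edges so every bin holds (about) the same number of
    values, then assign each weight its bin index as its code."""
    edges = np.quantile(w, np.linspace(0, 1, 2 ** bits + 1)[1:-1])
    codes = np.searchsorted(edges, w)          # bin index per weight
    # dequantize each bin to its mean value (a stand-in reconstruction)
    centers = np.array([w[codes == k].mean() for k in range(2 ** bits)])
    return codes, centers

rng = np.random.default_rng(2)
w = rng.standard_normal(4000)                  # bell-shaped: imbalanced tails
codes, centers = balanced_quantize(w, bits=2)
counts = np.bincount(codes, minlength=4)       # bins come out equally full
```

With extremal-value uniform quantization, nearly all Gaussian weights would fall into the middle codes; the quantile edges instead guarantee every code is used equally, which is the balance property the paper exploits.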
Abstract: Purpose - The purpose of this study is to develop a hybrid algorithm for segmenting tumors from ultrasound images of the liver. Design/methodology/approach - After collecting the ultrasound images, contrast-limited adaptive histogram equalization (CLAHE) is applied as preprocessing in order to enhance the visual quality of the images, which helps in better segmentation. Then, adaptively regularized kernel-based fuzzy C-means (ARKFCM) is used to segment the tumor from the enhanced image, together with a local ternary pattern combined with selective level set approaches. Findings - The proposed segmentation algorithm precisely segments the tumor portions from the enhanced images at a lower computation cost. The proposed algorithm is compared with existing algorithms and ground truth values in terms of the Jaccard coefficient, Dice coefficient, precision, Matthews correlation coefficient, F-score, and accuracy. The experimental analysis shows that the proposed algorithm achieved 99.18% accuracy and a 92.17% F-score, which is better than the existing algorithms. Practical implications - From the experimental analysis, the proposed ARKFCM with the enhanced level set algorithm obtained better performance in ultrasound liver tumor segmentation than the graph-based algorithm, showing a 3.11% improvement in the Dice coefficient. Originality/value - The image preprocessing is carried out using the CLAHE algorithm. The preprocessed image is segmented by employing a selective level set model and a Local Ternary Pattern in the ARKFCM algorithm. The proposed algorithm has advantages such as independence from clustering parameters, robustness in preserving image details, and optimality in finding the threshold value, which effectively reduces the computational cost.
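The overlap metrics used throughout these evaluations (Jaccard and Dice coefficients) are straightforward to compute from binary masks; a minimal sketch with hypothetical 10x10 masks:

```python
import numpy as np

def jaccard_dice(pred, truth):
    """Overlap metrics scoring a predicted segmentation mask against
    the ground-truth mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    jaccard = inter / union                        # |A ∩ B| / |A ∪ B|
    dice = 2 * inter / (pred.sum() + truth.sum())  # 2|A ∩ B| / (|A| + |B|)
    return jaccard, dice

truth = np.zeros((10, 10), dtype=np.uint8)
truth[2:8, 2:8] = 1                       # 36-pixel ground-truth "tumor"
pred = np.zeros_like(truth)
pred[3:8, 2:8] = 1                        # 30 predicted pixels, all inside
j, d = jaccard_dice(pred, truth)          # j = 30/36, d = 60/66
```

Dice is always at least as large as Jaccard for the same masks, which is worth remembering when comparing numbers across papers that report different ones.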