Journal Articles
8 articles found
Contrast Enhancement Using Weighted Coupled Histogram Equalization with Laplace Transform
1
Authors: Huimin Hao, Wenbin Xin, Minglong Bu, He Wang, Yuan Lan, Xiaoyan Xiong, Jiahai Huang. Journal of Harbin Institute of Technology (New Series), CAS, 2022, Issue 4, pp. 32-40 (9 pages)
Histogram equalization is a traditional algorithm for improving image contrast, but it comes at the cost of mean brightness shift and loss of detail. To solve these problems, a novel approach that processes foreground pixels and background pixels independently is proposed and investigated. Since details are mainly contained in the foreground, a weighted coupling of histogram equalization and the Laplace transform was adopted to balance contrast enhancement and detail preservation. The weighting factors of the image foreground and background were determined by the amount of their respective information. The proposed method was applied to images from the CVG-UGR and US-SIPI image databases and compared with other methods, such as clipping histogram spikes, histogram addition, and non-linear transformation, to verify its validity. Results show that the proposed algorithm can effectively enhance contrast without introducing distortions while preserving mean brightness and details.
Keywords: contrast enhancement; weighted processing; histogram equalization; Laplace transform
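The weighted coupling described in the abstract can be sketched as a blend of a globally equalized image with a Laplacian-sharpened one. This is an illustrative reading, not the paper's exact formulation: the scalar weight `w` stands in for the information-based weighting factors, and the separate foreground/background processing is omitted.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]

def laplacian_sharpen(img):
    """Detail enhancement via a 4-neighbor Laplacian (borders left unchanged)."""
    f = img.astype(np.float64)
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (4 * f[1:-1, 1:-1] - f[:-2, 1:-1] - f[2:, 1:-1]
                       - f[1:-1, :-2] - f[1:-1, 2:])
    return np.clip(f + lap, 0, 255).astype(np.uint8)

def weighted_coupled_enhance(img, w):
    """Blend contrast enhancement (HE) with detail preservation (Laplacian)."""
    he = hist_equalize(img).astype(np.float64)
    sharp = laplacian_sharpen(img).astype(np.float64)
    return np.clip(w * he + (1 - w) * sharp, 0, 255).astype(np.uint8)
```

With `w` near 1 the result approaches plain equalization; with `w` near 0 it keeps brightness and emphasizes edges.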
Histogram equalization using a reduced feature set of background speakers' utterances for speaker recognition
2
Authors: Myung-jae KIM, Il-ho YANG, Min-seok KIM, Ha-jin YU. Frontiers of Information Technology & Electronic Engineering, SCIE EI CSCD, 2017, Issue 5, pp. 738-750 (13 pages)
We propose a method for histogram equalization using supplement sets to improve the performance of speaker recognition when the training and test utterances are very short. The supplement sets are derived, using the outputs of selection or clustering algorithms, from the background speakers' utterances. The proposed approach is used as a feature normalization method for building histograms when there are insufficient input utterance samples. In addition, the proposed method is used as an i-vector normalization method in an i-vector-based probabilistic linear discriminant analysis (PLDA) system, which is the current state of the art for speaker verification. The ranks of sample values for histogram equalization are estimated in ascending order from both the input utterances and the supplement set. New ranks are obtained by computing the sum of the different kinds of ranks. Subsequently, the proposed method determines the cumulative distribution function of the test utterance using the newly defined ranks. The proposed method is compared with conventional feature normalization methods, such as cepstral mean normalization (CMN), cepstral mean and variance normalization (MVN), histogram equalization (HEQ), and the European Telecommunications Standards Institute (ETSI) advanced front-end methods. In addition, performance is compared for a case in which the greedy selection algorithm is used with fuzzy C-means and K-means algorithms. The YOHO and Electronics and Telecommunications Research Institute (ETRI) databases are used in an evaluation in the feature space. The test sets are simulated with the Opus VoIP codec. We also use the 2008 National Institute of Standards and Technology (NIST) speaker recognition evaluation (SRE) corpus for the i-vector system. The experimental results demonstrate that the average system performance is improved when the proposed method is used, compared to the conventional feature normalization methods.
Keywords: speaker recognition; histogram equalization; i-vector
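The rank-and-CDF construction can be illustrated for a single feature dimension. This sketch pools the test samples with one supplement set, builds an empirical CDF from the pooled ranks, and maps it onto a standard-normal target; the paper's combination of several kinds of ranks is simplified to one pooled ranking, and the function name is illustrative.

```python
import numpy as np
from statistics import NormalDist

def heq_with_supplement(test_feats, supplement_feats):
    """Map each test feature value to a standard-normal target distribution,
    using ranks computed over the pooled test + supplement samples."""
    pooled = np.concatenate([test_feats, supplement_feats])
    ranks = np.argsort(np.argsort(pooled))          # 0-based ascending ranks
    n = len(pooled)
    cdf = (ranks[:len(test_feats)] + 1) / (n + 1)   # keep CDF strictly in (0, 1)
    nd = NormalDist()
    return np.array([nd.inv_cdf(p) for p in cdf])
```

The supplement samples never appear in the output; they only stabilize the CDF estimate when the test utterance contributes few frames.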
A Bi-Histogram Shifting Contrast Enhancement for Color Images
3
Authors: Lord Amoah, Ampofo Twumasi Kwabena. Journal of Quantum Computing, 2021, Issue 2, pp. 65-77 (13 pages)
Recent contrast enhancement (CE) methods, with a few exceptions, predominantly focus on enhancing gray-scale images. This paper proposes a bi-histogram shifting contrast enhancement for color images based on the RGB (red, green, and blue) color model. The proposed method selects the two highest bins and the two lowest bins from the image histogram and performs an equal number of bidirectional histogram shifting repetitions on each RGB channel while embedding secret data into the marked images. In each repetition, the method simultaneously performs right histogram shifting (RHS) and left histogram shifting (LHS) to embed data and split the highest bins while combining the lowest bins with their neighbors, thereby achieving histogram equalization (HE). The smallest maximum number of shifting repetitions among the three RGB channels is used as the default number of repetitions for enhancing original images. Compared with an existing contrast enhancement method for color images and evaluated with the PSNR, SSIM, RCE, and RMBE quality assessment metrics, the experimental results show that the proposed method's enhanced images are visually and qualitatively superior, with a more evenly distributed histogram. The proposed method achieves higher embedding capacities and embedding rates in all images, with an average increase in embedding capacity of 52.1%.
Keywords: contrast enhancement; bi-histogram shifting; histogram equalization
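For intuition, here is the classical single-pass right histogram shifting (RHS) step on one channel, the building block on which such bidirectional methods rest. The paper's balanced RHS/LHS repetitions, bin splitting, and per-channel coordination are not reproduced; the function name is illustrative.

```python
import numpy as np

def rhs_embed(img, bits):
    """One right-histogram-shifting pass: shift bins between the peak and the
    nearest empty bin right by one, freeing bin peak+1 to carry one bit per
    peak-valued pixel (bit 0 keeps the value, bit 1 moves it to peak+1)."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())
    zeros = np.where(hist[peak + 1:] == 0)[0]       # empty bins right of peak
    if len(zeros) == 0:
        raise ValueError("no empty bin available to the right of the peak")
    zero = peak + 1 + int(zeros[0])
    out = img.astype(np.int32)
    out[(out > peak) & (out < zero)] += 1           # make room at bin peak+1
    flat = out.ravel()
    carriers = np.where(flat == peak)[0]
    for idx, bit in zip(carriers, bits):
        flat[idx] += bit
    return flat.reshape(img.shape).astype(np.uint8), peak, zero
```

Because the shift is invertible given `peak` and `zero`, the embedded bits and the original image can both be recovered, which is what makes the contrast enhancement reversible.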
Alzheimer’s Disease Stage Classification Using a Deep Transfer Learning and Sparse Auto Encoder Method
4
Authors: Deepthi K. Oommen, J. Arunnehru. Computers, Materials & Continua, SCIE EI, 2023, Issue 7, pp. 793-811 (19 pages)
Alzheimer’s Disease (AD) is a progressive neurological disease, and early diagnosis using conventional methods is very challenging. Deep Learning (DL) is one of the finest solutions for improving diagnostic performance and forecast accuracy. The disease’s widespread distribution and elevated mortality rate demonstrate its significance in both the older-onset and younger-onset age groups. In light of prior research, it is vital to consider age as one of the key criteria when choosing subjects, as younger-onset subjects are more susceptible than older-onset ones. The proposed investigation therefore concentrated on the younger onset, using deep learning models and neuroimages to automatically diagnose and categorize the disease at its early stages. The proposed work is executed in three steps. First, the 3D input images undergo pre-processing using Wiener filtering and Contrast Limited Adaptive Histogram Equalization (CLAHE). Next, Transfer Learning (TL) models extract features, which are then compressed using cascaded Auto Encoders (AE). The final phase uses a Deep Neural Network (DNN) to classify the stages of AD. The model was trained and tested to classify the five stages of AD. The ensemble of ResNet-18 and a sparse autoencoder with a DNN achieved an accuracy of 98.54%. The method is compared with state-of-the-art approaches to validate its efficacy and performance.
Keywords: Alzheimer’s disease; mild cognitive impairment; Wiener filter; contrast limited adaptive histogram equalization; transfer learning; sparse autoencoder; deep neural network
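CLAHE's core idea, clipping histogram spikes before equalization, can be sketched for a single tile. Full CLAHE additionally processes the image tile by tile and bilinearly interpolates the lookup tables between neighboring tiles; this sketch shows only the contrast-limiting step.

```python
import numpy as np

def clipped_hist_equalize(tile, clip_limit):
    """Contrast-limited equalization of one tile: clip histogram bins at
    clip_limit and redistribute the excess uniformly before building the CDF."""
    hist = np.bincount(tile.ravel(), minlength=256).astype(np.float64)
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess / 256.0
    cdf = hist.cumsum()
    lut = np.round((cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255)
    return lut.astype(np.uint8)[tile]
```

The clip limit caps how steep the equalization mapping can become, which is what prevents CLAHE from over-amplifying noise in near-uniform regions such as MRI background.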
Improved Model for Genetic Algorithm-Based Accurate Lung Cancer Segmentation and Classification
5
Authors: K. Jagadeesh, A. Rajendran. Computer Systems Science & Engineering, SCIE EI, 2023, Issue 5, pp. 2017-2032 (16 pages)
Lung cancer is one of the hazardous diseases that must be detected in its earlier stages to provide better treatment and clinical support to patients. For lung cancer diagnosis, computed tomography (CT) scan images have to be processed with image processing techniques, and an effective classification process is required for an accurate diagnosis. In the present scenario of medical data processing, cancer detection is very time-consuming and imprecise. Therefore, this paper develops an improved model for lung cancer segmentation and classification using a genetic algorithm. In the model, the input CT images are pre-processed with an adaptive median filter and an average filter. The filtered images are enhanced with histogram equalization, and the ROI (Region of Interest) cancer tissues are segmented using the Guaranteed Convergence Particle Swarm Optimization technique. For image classification, Probabilistic Neural Network (PNN)-based classification is used. The experimentation is carried out by simulating the model in MATLAB with input CT lung images from the LIDC-IDRI (Lung Image Database Consortium-Image Database Resource Initiative) benchmark dataset. The results show that the proposed model outperforms existing methods, providing accurate classification with minimal processing time.
Keywords: cancer diagnosis; segmentation; enhancement; histogram equalization; probabilistic neural networks (PNN); classification
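The PNN classifier used in the final stage can be sketched as a Parzen-window density estimate per class: each class is scored by the average Gaussian kernel response over its training samples. The kernel width `sigma` is a free parameter; this is a generic PNN, not the paper's tuned configuration.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """Probabilistic neural network: score each class by a Parzen-window
    (Gaussian kernel) density estimate over its training samples, then
    return the highest-scoring class label."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        d2 = ((train_X[train_y == c] - x) ** 2).sum(axis=1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
    return classes[int(np.argmax(scores))]
```

PNNs need no iterative training, which suits the small labeled sets typical of medical imaging, at the cost of evaluating every stored sample at inference time.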
Image Preprocessing Methods Used in Meteorological Measurement of the Temperature Testing System
6
Authors: Jiajia Zhang, Yu Liu, He Wang. Journal of Geoscience and Environment Protection, 2016, Issue 11, pp. 1-5 (5 pages)
Images captured by the meteorological automatic temperature testing system suffer from defects such as noise and insufficient contrast. To solve these problems, a research program for image pretreatment was put forward: median filtering, histogram equalization, and image binarization were used to remove noise and enhance the images. Results showed that feature points were clear and accurate after the experiment. This simulation experiment lays the groundwork for the subsequent recognition process.
Keywords: temperature testing system; thermometer image; image pretreatment; median filter; histogram equalization; image binarization
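Of the three pretreatment steps, binarization reduces to choosing a global threshold. The abstract does not say which thresholding rule was used; Otsu's method, which picks the threshold maximizing between-class variance, is a standard choice and serves here only as an illustration.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: return the gray level t maximizing the between-class
    variance of the two classes {pixels <= t} and {pixels > t}."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = p.cumsum()                       # class-0 probability up to t
    mu = (p * np.arange(256)).cumsum()       # cumulative mean up to t
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0       # empty classes score zero
    return int(np.argmax(sigma_b))
```

Binarizing a thermometer image is then `img > otsu_threshold(img)`, leaving the scale markings as foreground for the recognition step.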
Balanced Quantization: An Effective and Efficient Approach to Quantized Neural Networks (Cited: 3)
7
Authors: Shu-Chang Zhou, Yu-Zhi Wang, He Wen, Qin-Yao He, Yu-Heng Zou. Journal of Computer Science & Technology, SCIE EI CSCD, 2017, Issue 4, pp. 667-682 (16 pages)
Quantized neural networks (QNNs), which use low-bitwidth numbers to represent parameters and perform computations, have been proposed to reduce computation complexity, storage size, and memory usage. In QNNs, parameters and activations are uniformly quantized so that multiplications and additions can be accelerated by bitwise operations. However, the distributions of parameters in neural networks are often imbalanced, so a uniform quantization determined from extremal values may underutilize the available bitwidth. In this paper, we propose a novel quantization method that ensures a balanced distribution of quantized values. Our method first recursively partitions the parameters by percentiles into balanced bins and then applies uniform quantization. We also introduce computationally cheaper approximations of percentiles to reduce the overhead this step introduces. Overall, our method improves the prediction accuracy of QNNs without introducing extra computation during inference, has negligible impact on training speed, and is applicable to both convolutional and recurrent neural networks. Experiments on standard datasets, including ImageNet and Penn Treebank, confirm the effectiveness of our method. On ImageNet, the top-5 error rate of our 4-bit quantized GoogLeNet model is 12.7%, which is superior to the state of the art for QNNs.
Keywords: quantized neural network; percentile; histogram equalization; uniform quantization
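The percentile-based balancing can be sketched in a few lines: partition values into equally populated bins by percentiles, then assign evenly spaced output levels. The paper's recursive median-based partitioning and its cheaper percentile approximations are simplified here to a direct percentile computation.

```python
import numpy as np

def balanced_quantize(x, bitwidth):
    """Partition values into 2**bitwidth equally populated bins by percentiles,
    then map each bin to an evenly spaced quantization level."""
    k = 2 ** bitwidth
    # interior percentile thresholds, e.g. 25/50/75 for 2-bit quantization
    edges = np.percentile(x, np.linspace(0, 100, k + 1)[1:-1])
    idx = np.searchsorted(edges, x, side="right")   # bin index in [0, k-1]
    levels = np.linspace(x.min(), x.max(), k)       # evenly spaced outputs
    return levels[idx]
```

Because the bin edges follow the data's quantiles rather than its extremal values, every output level is used by roughly the same number of parameters, which is the "histogram equalization" effect the keywords refer to.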
Ultrasound liver tumor segmentation using adaptively regularized kernel-based fuzzy C means with enhanced level set algorithm
8
Authors: Deepak S. Uplaonkar, Virupakshappa Nagabhushan Patil. International Journal of Intelligent Computing and Cybernetics, EI, 2022, Issue 3, pp. 438-453 (16 pages)
Purpose - The purpose of this study is to develop a hybrid algorithm for segmenting tumors from ultrasound images of the liver. Design/methodology/approach - After collecting the ultrasound images, the contrast-limited adaptive histogram equalization (CLAHE) approach is applied as preprocessing to enhance the visual quality of the images, which helps in better segmentation. Then, adaptively regularized kernel-based fuzzy C-means (ARKFCM) is used to segment the tumor from the enhanced image, along with a local ternary pattern combined with selective level set approaches. Findings - The proposed segmentation algorithm precisely segments the tumor portions from the enhanced images at a lower computational cost. It is compared with existing algorithms and ground truth values in terms of the Jaccard coefficient, Dice coefficient, precision, Matthews correlation coefficient, F-score, and accuracy. The experimental analysis shows that the proposed algorithm achieved 99.18% accuracy and an F-score of 92.17%, which is better than the existing algorithms. Practical implications - The proposed ARKFCM with enhanced level set algorithm obtained better performance in ultrasound liver tumor segmentation than the graph-based algorithm, showing a 3.11% improvement in Dice coefficient. Originality/value - Image preprocessing is carried out using the CLAHE algorithm. The preprocessed image is segmented by employing a selective level set model and the Local Ternary Pattern within the ARKFCM algorithm. The proposed algorithm has advantages such as independence from clustering parameters, robustness in preserving image details, and optimality in finding the threshold value that effectively reduces the computational cost.
Keywords: adaptively regularized kernel-based fuzzy C means; contrast-limited adaptive histogram equalization; level set algorithm; liver tumor segmentation; local ternary pattern
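As a baseline for the ARKFCM segmentation stage, here is plain fuzzy C-means, which alternates membership and centroid updates; the adaptively regularized kernel distance, local ternary pattern, and level set refinement from the paper are not included in this sketch.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means: soft memberships U (n x c) and cluster centers.
    m > 1 controls fuzziness; m -> 1 approaches hard k-means."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))          # u_ij ∝ d_ij^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

Thresholding the tumor cluster's membership column yields a soft segmentation mask; ARKFCM's regularizer additionally pulls each pixel's membership toward that of its spatial neighbors to suppress speckle noise.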