Eigenstructure-based coherence attributes are efficient and mature techniques for large-scale fracture detection. However, in horizontally bedded and continuous strata, buried fractures in high-grayscale-value zones are difficult to detect. Furthermore, middle- and small-scale fractures in fractured zones, where migration image energies are usually not perfectly concentrated, are also hard to detect because of the fuzzy, clouded shadows caused by low grayscale values. A new fracture enhancement method combined with histogram equalization is proposed to solve these problems. With this method, the contrast between discontinuities and the background in coherence images is increased, linear structures are highlighted by stepwise adjustment of the coherence-image threshold, and fractures are detected at different scales. Application of the method shows that it also improves fracture recognition and accuracy.
An improved histogram equalization algorithm was developed and implemented after analyzing the traditional algorithm. The improved algorithm performs better than the traditional one, especially when it is used to process poor-quality images.
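Several of the abstracts in this listing build on classical global histogram equalization: gray levels are remapped through the normalized cumulative histogram so that the output levels are spread more evenly. A minimal sketch of that baseline (an illustration only, not the implementation from any of these papers; it assumes a non-constant uint8 image):

```python
import numpy as np

def equalize_histogram(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Classical global histogram equalization for a single-channel uint8 image."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    # Build a lookup table mapping each gray level through the normalized CDF.
    # Assumes the image contains at least two distinct gray levels.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1)).astype(np.uint8)
    return lut[img]
```

Darker levels are stretched toward 0 and brighter ones toward 255, which is exactly the mean-brightness shift that several of the improved methods below try to control.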
Histogram equalization is a traditional algorithm for improving image contrast, but it comes at the cost of mean-brightness shift and loss of detail. To solve these problems, a novel approach that processes foreground pixels and background pixels independently is proposed and investigated. Since details are mainly contained in the foreground, a weighted coupling of histogram equalization and the Laplace transform is adopted to balance contrast enhancement and detail preservation. The weighting factors of the image foreground and background are determined by the amount of their respective information. The proposed method was applied to images from the CVG-UGR and US-SIPI image databases and compared with other methods such as clipping histogram spikes, histogram addition, and non-linear transformation to verify its validity. Results show that the proposed algorithm can effectively enhance contrast without introducing distortions while preserving mean brightness and details.
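The core idea of coupling histogram equalization with a Laplace operator can be sketched in a simplified global form: one term supplies contrast, the other supplies sharpened detail, and a weight blends them. This is only a hypothetical illustration of the coupling; the paper's foreground/background split and its information-based weighting factors are omitted, and the weight `w_he` here is an arbitrary constant:

```python
import numpy as np

def he_laplace_blend(img: np.ndarray, w_he: float = 0.7) -> np.ndarray:
    """Blend histogram-equalized contrast with Laplacian-sharpened detail.
    Simplified global sketch; not the paper's exact algorithm."""
    f = img.astype(np.float64)
    # Global histogram equalization via the normalized CDF.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / hist.sum()
    he = cdf[img] * 255.0
    # 4-neighbor discrete Laplacian (borders left at zero).
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2]
                       + f[1:-1, 2:] - 4.0 * f[1:-1, 1:-1])
    sharpened = f - lap  # subtracting the Laplacian sharpens edges
    out = w_he * he + (1.0 - w_he) * sharpened
    return np.clip(out, 0, 255).astype(np.uint8)
```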
We propose a method for histogram equalization using supplement sets to improve the performance of speaker recognition when the training and test utterances are very short. The supplement sets are derived from the background speakers' utterances using the outputs of selection or clustering algorithms. The proposed approach is used as a feature normalization method for building histograms when there are insufficient input utterance samples. In addition, it is used as an i-vector normalization method in an i-vector-based probabilistic linear discriminant analysis (PLDA) system, the current state of the art for speaker verification. The ranks of sample values for histogram equalization are estimated in ascending order from both the input utterances and the supplement set, and new ranks are obtained by summing the different kinds of ranks. The proposed method then determines the cumulative distribution function of the test utterance using the newly defined ranks. The method is compared with conventional feature normalization methods such as cepstral mean normalization (CMN), cepstral mean and variance normalization (MVN), histogram equalization (HEQ), and the European Telecommunications Standards Institute (ETSI) advanced front-end methods. Performance is also compared for the case in which the greedy selection algorithm is used with fuzzy C-means and K-means algorithms. The YOHO and Electronics and Telecommunications Research Institute (ETRI) databases are used for evaluation in the feature space, with test sets simulated by the Opus VoIP codec, and the 2008 National Institute of Standards and Technology (NIST) speaker recognition evaluation (SRE) corpus is used for the i-vector system. The experimental results demonstrate that average system performance improves when the proposed method is used, compared to the conventional feature normalization methods.
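Rank-based histogram equalization of features, the building block this paper extends with supplement sets, maps each sample through its empirical CDF to a reference distribution (typically standard normal). A minimal 1-D sketch of that baseline, without the supplement-set rank summation the paper proposes (ties in the input are broken arbitrarily here):

```python
import numpy as np
from statistics import NormalDist

def heq_to_gaussian(x: np.ndarray) -> np.ndarray:
    """Map samples to a standard normal via their empirical ranks:
    the core of rank-based histogram equalization (HEQ) for features."""
    n = len(x)
    ranks = np.argsort(np.argsort(x))  # 0..n-1, smallest value gets rank 0
    # Empirical CDF value per sample, kept strictly inside (0, 1).
    u = (ranks + 0.5) / n
    inv = NormalDist().inv_cdf
    return np.array([inv(p) for p in u])
```

With too few samples the empirical CDF is a poor estimate, which is precisely the short-utterance problem the supplement sets address.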
Recent contrast enhancement (CE) methods, with a few exceptions, predominantly focus on gray-scale images. This paper proposes a bi-histogram-shifting contrast enhancement for color images based on the RGB (red, green, and blue) color model. The proposed method selects the two highest bins and two lowest bins from the image histogram and performs an equal number of bidirectional histogram-shifting repetitions on each RGB channel while embedding secret data into the marked images. In each repetition, it simultaneously performs right histogram shifting (RHS) and left histogram shifting (LHS) to embed data and split the highest bins while combining the lowest bins with their neighbors to achieve histogram equalization (HE). The smallest of the maximum numbers of shifting repetitions among the three RGB channels is used as the default number of repetitions performed to enhance the original images. Compared with an existing contrast enhancement method for color images and evaluated with the PSNR, SSIM, RCE, and RMBE quality assessment metrics, the experimental results show that the proposed method's enhanced images are visually and qualitatively superior, with a more evenly distributed histogram. The proposed method achieves higher embedding capacities and embedding rates in all images, with an average increase in embedding capacity of 52.1%.
Alzheimer’s Disease (AD) is a progressive neurological disease, and early diagnosis using conventional methods is very challenging. Deep Learning (DL) is one of the most effective ways to improve diagnostic performance and forecast accuracy. The disease’s widespread distribution and elevated mortality rate demonstrate its significance in both the older-onset and younger-onset age groups. In light of prior research, it is vital to consider age as one of the key criteria when choosing subjects, since younger-onset subjects are more susceptible than older-onset ones; the proposed investigation therefore concentrates on younger onset. The research uses deep learning models and neuroimages to automatically diagnose and categorize the disease at its early stages, and is executed in three steps. First, the 3D input images undergo pre-processing using Wiener filtering and Contrast Limited Adaptive Histogram Equalization (CLAHE). Transfer Learning (TL) models then extract features, which are subsequently compressed using cascaded Auto Encoders (AE). The final phase uses a Deep Neural Network (DNN) to classify the phases of AD. The model was trained and tested to classify the five stages of AD; the ensemble of ResNet-18 and a sparse autoencoder with a DNN achieved an accuracy of 98.54%. The method is compared to state-of-the-art approaches to validate its efficacy and performance.
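CLAHE, used for pre-processing in this paper and the liver-segmentation paper below, limits contrast amplification by clipping the histogram before equalization. A minimal sketch of the clipping step applied globally (the tile-wise adaptation and bilinear interpolation of real CLAHE are omitted, and `clip_frac` is an arbitrary illustrative value):

```python
import numpy as np

def clipped_equalize(img: np.ndarray, clip_frac: float = 0.02) -> np.ndarray:
    """Histogram equalization with a clip limit: histogram mass above the
    limit is redistributed uniformly before building the CDF. This is the
    contrast-limiting idea behind CLAHE, minus the per-tile adaptation."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    limit = clip_frac * img.size
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess / 256.0  # redistribute excess mass
    cdf = hist.cumsum() / hist.sum()
    return np.round(cdf[img] * 255).astype(np.uint8)
```

Clipping caps the slope of the CDF, so noise in near-uniform regions is amplified far less than under plain equalization.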
Lung cancer is one of the hazardous diseases that must be detected in its earlier stages to provide better treatment and clinical support to patients. For lung cancer diagnosis, computed tomography (CT) scan images must be processed with image processing techniques, and an effective classification process is required for an appropriate diagnosis. In the present scenario of medical data processing, cancer detection is very time-consuming and demands high precision. This paper therefore develops an improved model for lung cancer segmentation and classification using a genetic algorithm. In the model, the input CT images are pre-processed with adaptive median and average filters. The filtered images are enhanced with histogram equalization, and the cancerous regions of interest (ROI) are segmented using the Guaranteed Convergence Particle Swarm Optimization technique. Probabilistic Neural Network (PNN) based classification is used to classify the images. The experimentation is carried out by simulating the model in MATLAB with input CT lung images from the LIDC-IDRI (Lung Image Database Consortium-Image Database Resource Initiative) benchmark dataset. The results show that the proposed model outperforms existing methods, providing accurate classification with minimal processing time.
In this paper, the application of image enhancement techniques to potential field data is briefly described and two improved enhancement methods are introduced. One method is derived from the histogram equalization technique and automatically determines the color spectra of geophysical maps; colors can be properly distributed, and visual effects and resolution enhanced, by this method. The other method is based on the modified Radon transform and gradient calculation and is used to detect and enhance linear features in gravity and magnetic images; it facilitates the detection of line segments in the transform domain. Tests with synthetic images and real data show both methods to be effective for feature enhancement.
Since real-time image processing requires a vast amount of computation and high-speed hardware, it is difficult to implement on a general microcomputer system. To solve this problem, a powerful digital signal processing (DSP) hardware system is proposed that can meet the needs of real-time image processing. Among the many approaches to enhancing infrared images, only histogram equalization is discussed here because it is the most common and effective. Based on the histogram equalization principle, the specific procedures implemented on the DSP are presented, and experimental results are given.
Image enhancement technology plays a very important role in improving image quality in image processing: by selectively enhancing some information and restraining other information, it can improve the visual effect of an image. The objective of this work is to apply image enhancement to gray-scale images using different techniques. After the fundamental methods of image enhancement are demonstrated, enhancement algorithms based on the spatial and frequency domains are systematically investigated and compared, and the advantages and defects of these algorithms are analyzed. Algorithms for wavelet-based image enhancement are also derived and generalized. The wavelet transform modulus maxima (WTMM) method, which detects the fractal dimension of a signal, is well suited for image enhancement. The techniques are compared using the mean (μ), standard deviation (σ), mean square error (MSE), and peak signal-to-noise ratio (PSNR). A group of experimental results demonstrates that wavelet-transform-based enhancement is effective for image de-noising and enhancement, and that the wavelet transform modulus maxima method is one of the best methods for image enhancement.
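Two of the comparison metrics named above, MSE and PSNR, have standard definitions worth making concrete (a generic sketch, not code from the paper; `peak` is 255 for 8-bit images):

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean square error between two same-shaped images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```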
Images captured outdoors usually degrade because of bad weather conditions, among which fog, a widespread phenomenon, greatly affects video quality. The physical features of fog blur the video and shorten the visible distance, seriously impairing the reliability of a video system. To satisfy the requirement of real-time image processing, normal-distribution curve fitting is used to fit the histogram of the sky region, and the region-growing method is used to segment the sky. For the non-sky part, a self-adaptive interpolation method for equalizing the histogram is adopted to enhance image contrast. Experimental results show that the method works well and does not cause block effects.
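Fitting a normal curve to a gray-level histogram, as done for the sky region above, can be sketched by the method of moments: the histogram counts act as weights on the gray levels. This is a simple stand-in for the paper's curve-fitting step, not its actual procedure:

```python
import numpy as np

def fit_normal_to_hist(hist: np.ndarray) -> tuple:
    """Estimate (mu, sigma) of a gray-level histogram from its moments,
    treating the counts as weights on the gray-level axis."""
    levels = np.arange(len(hist), dtype=np.float64)
    n = hist.sum()
    mu = (levels * hist).sum() / n
    var = ((levels - mu) ** 2 * hist).sum() / n
    return mu, np.sqrt(var)
```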
The image from the automatic temperature testing system for meteorological measurement suffers from defects such as noise and insufficient contrast. To solve these problems, a research program for image pre-treatment was put forward: median filtering, histogram equalization, and image binarization were used to remove noise and enhance the images. Results showed that feature points were clear and accurate after the experiment. This simulation experiment prepares for the subsequent recognition process.
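The first and last stages of such a pre-treatment pipeline are simple to state concretely. A minimal sketch of a 3x3 median filter and a fixed-threshold binarization (illustrative only; the paper does not specify its window size or threshold, so both are assumptions here):

```python
import numpy as np

def median3(img: np.ndarray) -> np.ndarray:
    """3x3 median filter via edge-padded stacked shifts of the image."""
    p = np.pad(img, 1, mode='edge')
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(stack, axis=0).astype(img.dtype)

def binarize(img: np.ndarray, thresh: int = 128) -> np.ndarray:
    """Fixed-threshold binarization to a 0/255 image."""
    return (img >= thresh).astype(np.uint8) * 255
```

Histogram equalization (sketched earlier in this listing) would sit between these two stages.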
Quantized neural networks (QNNs), which use low-bitwidth numbers for representing parameters and performing computations, have been proposed to reduce computation complexity, storage size, and memory usage. In QNNs, parameters and activations are uniformly quantized, so that multiplications and additions can be accelerated by bitwise operations. However, the distributions of parameters in neural networks are often imbalanced, so a uniform quantization determined from extremal values may underutilize the available bitwidth. In this paper, we propose a novel quantization method that ensures balanced distributions of quantized values. Our method first recursively partitions the parameters by percentiles into balanced bins and then applies uniform quantization. We also introduce computationally cheaper approximations of percentiles to reduce the overhead this introduces. Overall, our method improves the prediction accuracy of QNNs without introducing extra computation during inference, has negligible impact on training speed, and is applicable to both convolutional and recurrent neural networks. Experiments on standard datasets including ImageNet and Penn Treebank confirm its effectiveness: on ImageNet, the top-5 error rate of our 4-bit quantized GoogLeNet model is 12.7%, superior to the state of the art for QNNs.
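The percentile-partitioning idea, which is histogram equalization applied to network weights, can be sketched directly: bin edges at equal-population percentiles assign roughly the same number of values to each code. This is only an illustration of the balancing step; the paper's recursive partitioning and approximate-percentile tricks are omitted:

```python
import numpy as np

def balanced_quantize(w: np.ndarray, bits: int = 2) -> np.ndarray:
    """Assign values to 2**bits equal-population bins via percentile edges,
    so every quantization code is used about equally often."""
    k = 2 ** bits
    edges = np.percentile(w, np.linspace(0, 100, k + 1))
    # searchsorted finds each value's bin; clip keeps the maximum in-range.
    idx = np.clip(np.searchsorted(edges, w, side='right') - 1, 0, k - 1)
    return idx  # integer codes 0..k-1
```

Contrast this with uniform quantization from min/max: for a bell-shaped weight distribution, the extreme codes would be almost never used.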
Purpose - The purpose of this study is to develop a hybrid algorithm for segmenting tumors from ultrasound images of the liver.
Design/methodology/approach - After collecting the ultrasound images, contrast-limited adaptive histogram equalization (CLAHE) is applied as preprocessing to enhance the visual quality of the images, which helps in better segmentation. Then, adaptively regularized kernel-based fuzzy C-means (ARKFCM) is used to segment the tumor from the enhanced image, along with a local ternary pattern combined with selective level-set approaches.
Findings - The proposed segmentation algorithm precisely segments the tumor portions from the enhanced images at a lower computational cost. It is compared with existing algorithms and ground-truth values in terms of Jaccard coefficient, Dice coefficient, precision, Matthews correlation coefficient, F-score, and accuracy. The experimental analysis shows that the proposed algorithm achieved 99.18% accuracy and a 92.17% F-score, which is better than the existing algorithms.
Practical implications - In the experimental analysis, the proposed ARKFCM with an enhanced level-set algorithm outperformed a graph-based algorithm in ultrasound liver-tumor segmentation, showing a 3.11% improvement in Dice coefficient over the graph-based algorithm.
Originality/value - Image preprocessing is carried out using the CLAHE algorithm, and the preprocessed image is segmented by employing a selective level-set model and a local ternary pattern in the ARKFCM algorithm. The proposed algorithm has advantages such as independence from clustering parameters, robustness in preserving image details, and optimality in finding the threshold value, which effectively reduces the computational cost.
Funding acknowledgments for the papers above:
Fracture enhancement combined with histogram equalization: sponsored by the National Science & Technology Major Special Project (Grant No. 2011ZX05025-001-04).
Foreground/background histogram equalization with Laplace coupling: sponsored by the National Key R&D Program of China (Grant No. 2018YFB1308700) and the Research and Development Project of Key Core Technology and Common Technology in Shanxi Province (Grant Nos. 2020XXX001, 2020XXX009).
Histogram equalization with supplement sets for speaker recognition: supported by the IT R&D Program of MOTIE/KEIT (No. 10041610).
Bi-histogram shifting contrast enhancement for color images: supported in part by the National Natural Science Foundation of China (Grant No. 61662039), the Jiangxi Key Natural Science Foundation (No. 20192ACBL20031), the Startup Foundation for Introducing Talent of Nanjing University of Information Science and Technology (NUIST) (Grant No. 2019r070), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) Fund.
Image enhancement of potential field data: supported by research project Grant No. G20000467 of the Institute of Geology and Geophysics, CAS, and by the China Postdoctoral Science Foundation (No. 2004036083).
Gray-scale image enhancement comparison: supported by the National Natural Science Foundation of China (Projects 61376076, 61274026, 61377024), the Scientific Research Fund of Hunan Provincial Education Department (Projects 12C0108, 13C321), and the Science and Technology Plan Foundation of Hunan Province (Projects 2013FJ2011, 2014FJ2017, 2013FJ4232).