Achieving a good recognition rate for degraded document images is difficult, as such images suffer from low contrast, bleed-through, and nonuniform illumination effects. Unlike existing baseline thresholding techniques that use fixed thresholds and windows, the proposed method introduces a concept for obtaining dynamic windows according to the image content to achieve better binarization. To enhance a low-contrast image, we propose a new mean histogram stretching method that suppresses noisy pixels in the background while simultaneously increasing pixel contrast at or near edges, yielding an enhanced image. For the enhanced image, we propose a new method for deriving adaptive local thresholds for dynamic windows. The dynamic window is derived by exploiting the advantage of Otsu thresholding. To assess the performance of the proposed method, we used the standard Document Image Binarization Contest (DIBCO) databases for experimentation. A comparative study with well-known existing methods indicates that the proposed method outperforms them in terms of quality and recognition rate.
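The abstract leaves the dynamic-window construction unspecified, but it builds on Otsu thresholding. As a point of reference, a minimal NumPy sketch of the classic global Otsu criterion (choosing the threshold that maximizes between-class variance of the grey-level histogram) might look like the following; this is the textbook method, not the paper's adaptive dynamic-window variant:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the grey level that maximizes between-class variance (global Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = prob[:t].sum()           # background class weight
        w1 = 1.0 - w0                 # foreground class weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # background mean
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1   # foreground mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a clearly bimodal image, the returned threshold separates the two modes; adaptive methods such as the one described above apply this idea per window rather than globally.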
Background: Contrast enhancement plays an important role in the image processing field. Contrast correction adjusts the darkness or brightness of the input image and increases its quality. Objective: This paper proposes a novel method based on statistical data from the local mean and local standard deviation. Method: The proposed method modifies the mean and standard deviation of a neighbourhood at each pixel and divides the image into three categories: background, foreground, and problematic (contrast & luminosity) regions. Experimental results from both visual and objective aspects show that the proposed method can normalize the contrast variation problem effectively compared with Histogram Equalization (HE), Difference of Gaussian (DoG), and Butterworth Homomorphic Filtering (BHF). Seven types of binarization methods were tested on the corrected image and produced positive and impressive results. Result: Finally, a comparison in terms of Signal-to-Noise Ratio (SNR), Misclassification Error (ME), F-measure, Peak Signal-to-Noise Ratio (PSNR), Misclassification Penalty Metric (MPM), and Accuracy was calculated. Each binarization method shows an improved result when applied to the corrected image compared with the original image. The SNR of our proposed method is 9.350 higher than that of the three other methods. The average increments after five types of evaluation are: Otsu = 41.64%, Local Adaptive = 7.05%, Niblack = 30.28%, Bernsen = 25%, Bradley = 3.54%, Nick = 1.59%, Gradient-Based = 14.6%. Conclusion: The results presented in this paper effectively solve the contrast problem and produce better-quality images.
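The method's core statistics are the local mean and local standard deviation at each pixel. A small sketch of computing these two maps over a sliding window is shown below; the window size and the three-way classification thresholds built on top of them are the paper's own and are not reproduced here:

```python
import numpy as np

def local_stats(img, w=3):
    """Local mean and standard deviation over a w x w window (edge-padded)."""
    pad = w // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    # Stack every shifted w x w view of the image, then reduce across them.
    views = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(w) for j in range(w)]
    stack = np.stack(views)
    return stack.mean(axis=0), stack.std(axis=0)
```

A flat region yields a zero local standard deviation, while pixels near edges yield a large one, which is what lets mean/std statistics separate background, foreground, and problematic regions.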
Document image segmentation is very useful for printing, faxing, and data processing. An algorithm is developed for segmenting and classifying document images. The feature used for classification is based on the histogram distribution patterns of different image classes. An important attribute of the algorithm is the use of a wavelet correlation image to enhance the raw image's pattern, which improves classification accuracy. In this paper, the document image is divided into four types: background, photo, text, and graph. First, the document image background is easily distinguished by a conventional method; second, the three remaining image types are distinguished by their typical histograms. To make the histogram features clearer, each resolution's HH wavelet subimage is added to the raw image at that resolution. Finally, the photo, text, and graph regions are divided according to how well the features fit the Laplacian distribution. Simulations show that classification accuracy is significantly improved. Comparison with related work shows that our algorithm provides both lower classification error rates and better visual results.
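The HH detail subimage mentioned here can be illustrated with a one-level Haar decomposition; the helper below is a generic sketch for an even-sized image (the paper's exact wavelet and correlation step are not specified in the abstract):

```python
import numpy as np

def haar_hh(img):
    """One-level Haar HH (diagonal detail) subband of an even-sized image."""
    a = img[0::2, 0::2].astype(float)   # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)   # top-right
    c = img[1::2, 0::2].astype(float)   # bottom-left
    d = img[1::2, 1::2].astype(float)   # bottom-right
    return (a - b - c + d) / 4.0
```

The HH subband is zero on flat regions and large on diagonal detail, which is why adding it back to the raw image sharpens the histogram differences between text, graph, and photo regions.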
Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. The task of text extraction is considered as a text-area detection and location problem. The images are divided equally into blocks, and the blocks are filtered using the categories of connected components and the corner-point density. By analyzing the filtered blocks' projections, the approximate text areas can be located, and the text regions are extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
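The projection analysis described here can be illustrated with a simple horizontal projection profile that groups ink rows into contiguous bands; the block-filtering criteria (connected-component categories, corner-point density) are the paper's own and are omitted in this sketch:

```python
import numpy as np

def text_line_bands(binary, min_ink=1):
    """Group rows of a binary image (1 = ink) into contiguous text bands
    using the horizontal projection profile."""
    profile = binary.sum(axis=1)          # ink-pixel count per row
    rows = profile >= min_ink             # rows that contain enough ink
    bands, start = [], None
    for i, r in enumerate(rows):
        if r and start is None:
            start = i                     # band opens
        elif not r and start is not None:
            bands.append((start, i - 1))  # band closes
            start = None
    if start is not None:
        bands.append((start, len(rows) - 1))
    return bands
```

Each returned (top, bottom) pair is a candidate text band; the same profile idea applies per block rather than to the whole page in the method described above.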
The growth of document image databases is becoming a challenge for document image retrieval techniques. Traditional layout-reconstruction-based methods rely on high-quality document images as well as optical character recognition (OCR) precision, and can only deal with a few widely used languages. The complexity of document layouts greatly hinders layout-analysis-based approaches. This paper describes a multi-density feature based algorithm for binary document images that is independent of OCR and layout analysis. The text area is extracted after preprocessing such as skew correction and marginal noise removal. Then the aspect ratio and multi-density features are extracted from the text area to select the best candidates from the document image database. Experimental results show that this approach is simple, with loss rates of less than 3%, and can efficiently analyze images with different resolutions and different input systems. The system is also robust to noise such as annotations and complex layouts.
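A density feature of the kind described can be sketched as a grid of per-cell ink ratios over the binarized text area; this is a hypothetical illustration only, as the actual multi-density descriptor and candidate-selection rule are the paper's own:

```python
import numpy as np

def density_feature(binary, grid=(4, 4)):
    """Ink-density vector over a grid of cells: a crude density-style
    descriptor for a binary text area (1 = ink)."""
    h, w = binary.shape
    gy, gx = grid
    feat = []
    for i in range(gy):
        for j in range(gx):
            cell = binary[i * h // gy:(i + 1) * h // gy,
                          j * w // gx:(j + 1) * w // gx]
            feat.append(cell.mean())      # fraction of ink pixels in the cell
    return np.array(feat)
```

Such fixed-length vectors can be compared with any vector distance, which is what makes density features independent of resolution, OCR, and layout analysis.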
In the digital world, a wide range of handwritten and printed documents must be converted to digital format using a variety of tools, including mobile phones and scanners. Unfortunately, this is not an optimal procedure, and the entire document image may be degraded. Imperfect conversion effects due to noise, motion blur, and skew distortion can significantly impact the accuracy and effectiveness of document image segmentation and analysis in Optical Character Recognition (OCR) systems. In Document Image Analysis Systems (DIAS), skew estimation is a crucial step. In this paper, a novel, fast, and reliable skew detection algorithm based on the Radon Transform and a Curve Length Fitness Function (CLF), called Radon CLF, is proposed. The Radon CLF model takes advantage of the properties of the Radon space: thanks to an innovative fitness function formulated for a projected signal of the Radon space, it explores the dominant angle more effectively on a 1D signal than on the 2D input image. Several significant performance indicators, including Mean Square Error (MSE), Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Measure (SSIM), Accuracy, and run-time, were taken into consideration when assessing the performance of our model. In addition, a new dataset named DSI5000 was constructed to assess the accuracy of the CLF model. Both the two-dimensional image signal and the Radon space were used in our simulations to compare the noise effect. The obtained results show that the proposed method is more effective than approaches already in use, with an accuracy of roughly 99.87% and a run-time of 0.048 s. The introduced model is far more accurate and time-efficient than current approaches in detecting image skew.
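The idea of scoring a 1D projection per candidate angle can be illustrated with a discrete, projection-profile stand-in for the Radon search. The CLF fitness itself is not specified in the abstract, so the variance of the profile is used below as a common substitute criterion (an aligned page yields a peaky profile):

```python
import numpy as np

def estimate_skew(binary, angles=np.arange(-10, 10.5, 0.5)):
    """Estimate document skew (degrees) by projecting ink-pixel coordinates
    onto axes at candidate angles and scoring each 1D profile's variance,
    a discrete stand-in for searching the Radon space."""
    ys, xs = np.nonzero(binary)
    best_angle, best_score = 0.0, -1.0
    for a in angles:
        t = np.deg2rad(a)
        proj = ys * np.cos(t) - xs * np.sin(t)   # rotated row coordinate
        hist, _ = np.histogram(proj, bins=64)
        score = hist.var()                        # peaky profile => aligned lines
        if score > best_score:
            best_score, best_angle = score, float(a)
    return best_angle
```

Working on the binned 1D profile rather than rotating the full 2D image is also what makes Radon-space search fast, which is the property the Radon CLF method exploits.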
Rule selection has long been a problem of great challenge that must be solved when developing a rule-based knowledge learning system. Many methods have been proposed to evaluate the eligibility of a single rule based on some criteria. However, in a knowledge learning system there is usually a set of rules. These rules are not independent but interactive; they tend to affect each other and form a rule system. In such a case, it is no longer reasonable to isolate each rule from the others for evaluation: the best rule according to a certain criterion is not always the best one for the whole system. Furthermore, the real-world data from which people want to create their learning systems are often ill-defined and inconsistent. In this case, the completeness and consistency criteria for rule selection are no longer essential. In this paper, some ideas about how to solve the rule-selection problem in a systematic way are proposed. These ideas have been applied in the design of a Chinese business card layout analysis system and achieved a good result on a training data set of 425 images. The implementation of the system and the results are presented in this paper.
This paper proposes a new approach to the water flow algorithm for text line segmentation. In the basic method, hypothetical water flows under a few specified angles, defined by the water flow angle parameter. It is applied to the document image frame from left to right and vice versa. As a result, unwetted and wetted areas are established. These areas separate text from non-text elements in each text line, and hence represent the control areas that are of major importance for text line segmentation. The extended approach first extracts the connected components by placing bounding boxes over the text, so that each connected component is mutually separated. The water flow angle, which defines the unwetted areas, is then determined adaptively. By choosing an appropriate water flow angle, the unwetted areas are lengthened, which leads to better text line segmentation. The results of this approach are encouraging, given the improvement in text line segmentation, which is among the most challenging steps in document image processing.
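The extended approach begins by extracting connected components with bounding boxes. A self-contained BFS labeling sketch for 4-connected ink components is given below; the water-flow stage itself is not reproduced here:

```python
import numpy as np
from collections import deque

def component_boxes(binary):
    """Bounding boxes (top, left, bottom, right) of 4-connected
    ink components in a binary image (1 = ink), in scan order."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                t, l, b, r = sy, sx, sy, sx
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                      # BFS over the component
                    y, x = q.popleft()
                    t, b = min(t, y), max(b, y)
                    l, r = min(l, x), max(r, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                boxes.append((t, l, b, r))
    return boxes
```

Once each component sits in its own box, an adaptive water flow angle can be chosen per component instead of one fixed angle for the whole page, which is the key change the extended approach makes.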
Funding: funded by the Ministry of Higher Education, Malaysia, which provided facilities and financial support under the Long Research Grant Scheme LRGS-1-2019-UKM-UKM-2-7.
Funding: supported by the Innovation Platform Construction of Qinghai Province (No. 2016-ZJ-Y04) and the Basic Research Program of Qinghai Province (No. 2016-ZJ-740).
Funding: supported by the National Natural Science Foundation of China (Grant No. 60472028) and the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20040003015).