Abstract: Due to improper acquisition settings and other noise artifacts, an image may be degraded so that its mean brightness is poorly preserved. The simplest remedy is histogram equalization, but because of over-enhancement it fails to preserve the mean brightness and produces images of poor quality. This paper proposes a multi-scale decomposition for brightness preservation using gamma correction. After transformation to the hue, saturation and intensity (HSI) color space, a 2D discrete wavelet transform decomposes the intensity component into low-pass and high-pass coefficients. In the next phase, gamma correction is applied by auto-tuning the scale value, a modified constant used in the logarithmic function. The scale value is further optimized to obtain better visual quality; the optimized value is a weighted distribution of the standard deviation and mean of the low-pass coefficients. Finally, the experimental results are evaluated with quality assessment measures: absolute mean brightness error, a measure of information detail, signal-to-noise ratio, and patch-based contrast quality. In comparison with existing methods, the proposed method proves remarkably effective at retaining mean brightness while providing better visual quality.
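As a rough illustration of the decomposition-and-correction pipeline described above, the sketch below converts to HSV (used here as a stand-in for HSI), applies a single-level Haar 2D-DWT to the intensity channel, and derives a gamma value from the mean and standard deviation of the low-pass band. The gamma formula and the clipping range are assumptions for illustration, not the paper's exact weighting.

```python
# Illustrative sketch of DWT-based gamma correction on the intensity channel.
# The gamma formula from the low-pass statistics is an assumption, not the
# paper's exact weighting; HSV is used here as a stand-in for HSI.
import cv2
import numpy as np
import pywt

def enhance_brightness(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float32)
    intensity = hsv[:, :, 2] / 255.0                     # value channel in [0, 1]

    # Single-level 2D DWT: LL holds the low-pass coefficients.
    LL, (LH, HL, HH) = pywt.dwt2(intensity, "haar")

    # Hypothetical auto-tuning: derive gamma from the spread/mean of LL.
    mean, std = float(np.mean(LL)), float(np.std(LL))
    gamma = np.clip(1.0 + (std - mean) / (std + mean + 1e-6), 0.4, 2.5)

    # Gamma-correct the low-pass band, keep detail bands, and reconstruct.
    LL_corrected = np.power(np.clip(LL / (LL.max() + 1e-6), 0, 1), gamma) * LL.max()
    enhanced = pywt.idwt2((LL_corrected, (LH, HL, HH)), "haar")
    enhanced = np.clip(enhanced, 0, 1)[: intensity.shape[0], : intensity.shape[1]]

    hsv[:, :, 2] = enhanced * 255.0
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```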
Funding: Supported by the National Natural Science Foundation of China (62072250).
Abstract: With the development of computer graphics, realistic computer-generated (CG) images have become more and more common in our field of vision. Such rendered images can hardly be distinguished from real photographs by the naked eye, so how to effectively identify CG images and natural images (NIs) has become a new issue in the field of digital forensics. In recent years, a series of deep learning network frameworks have shown great advantages in the image field, which provides a good choice for solving this problem. This paper aims to track the latest developments and applications of deep learning in CG and NI forensics in a timely manner. First, it introduces the background of deep learning and the fundamentals of convolutional neural networks, so as to convey the basic model structures used in image applications, and then outlines the mainstream frameworks. Second, it briefly reviews the application of deep learning to CG and NI forensics. Finally, it points out the open problems of deep learning in this field and the prospects for the future.
Abstract: In this paper, we study a novel natural image reconstruction and representation algorithm based on clustering and a modified neural network. Image resolution enhancement is one of the earliest research topics in single-image interpolation. Although traditional interpolation methods for single-image magnification are effective, they do not provide additional useful information. Our method combines a neural network with a clustering approach. Experiments show that the method performs well and yields satisfactory results.
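The abstract does not specify how the clustering and the neural network are combined; the sketch below shows one plausible reading, in which low-resolution patches are clustered with K-means and a small MLP regressor per cluster maps them to high-resolution patches. All names and hyperparameters here are illustrative assumptions.

```python
# A minimal sketch of one plausible "clustering + neural network" upscaling
# scheme: cluster low-resolution patches, then train one small MLP per cluster
# to map them to their high-resolution counterparts. The paper's actual
# architecture and training details are not given in the abstract.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

def train_cluster_regressors(lr_patches, hr_patches, n_clusters=8):
    """lr_patches: (N, d_lr) array, hr_patches: (N, d_hr) array."""
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(lr_patches)
    models = {}
    for k in range(n_clusters):
        mask = kmeans.labels_ == k
        model = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
        models[k] = model.fit(lr_patches[mask], hr_patches[mask])
    return kmeans, models

def upscale_patches(lr_patches, kmeans, models):
    labels = kmeans.predict(lr_patches)
    return np.stack([models[l].predict(p[None])[0] for p, l in zip(lr_patches, labels)])
```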
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 60633030 and 90604008) and the National Basic Research Program of China (Grant No. 2006CB303104).
Abstract: Rendering technology in computer graphics (CG) is now capable of producing highly photorealistic images, giving rise to the problem of how to distinguish CG images from natural images. Several methods have been proposed to solve this problem. In this paper, we present a novel method from the new viewpoint of image perception. Although photorealistic CG images are very similar to natural images, they are surrealistic and smoother than natural images, which leads to differences in perception. Part of the features are derived from the fractal dimension to capture the difference in color perception between CG images and natural images, and several generalized dimensions are used as the remaining features to capture the difference in coarseness. The effectiveness of these features is verified by experiments; the average accuracy is over 91.2%.
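A box-counting estimate is the standard way to compute the kind of fractal-dimension feature this method builds on; the sketch below shows that estimate only, not the paper's full set of color-perception features and generalized dimensions.

```python
# Box-counting estimate of fractal dimension for a binary image, a common way
# to compute the kind of fractal feature the paper builds on.
import numpy as np

def box_counting_dimension(binary_image):
    """binary_image: 2D boolean/0-1 array; returns the estimated dimension."""
    def count_boxes(img, box_size):
        h, w = img.shape
        count = 0
        for i in range(0, h, box_size):
            for j in range(0, w, box_size):
                if img[i:i + box_size, j:j + box_size].any():
                    count += 1
        return count

    side = min(binary_image.shape)
    sizes = [2 ** k for k in range(1, int(np.log2(side)))]
    counts = [count_boxes(binary_image, s) for s in sizes]
    # Slope of log(count) vs. log(1/size) estimates the fractal dimension.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]
```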
Funding: Supported by the National Key Research and Development Program of China under Grant No. 2019YFB2204104, the Beijing Natural Science Foundation of China under Grant No. L182059, the National Natural Science Foundation of China under Grant Nos. 61772523, 61620106003, and 61802406, Alibaba Group through the Alibaba Innovative Research Program, and the Joint Open Research Fund Program of State Key Laboratory of Hydroscience and Engineering and Tsinghua-Ningxia Yinchuan Joint Institute of Internet of Waters on Digital Water Governance.
Abstract: With the recent tremendous advances of computer graphics rendering and image editing technologies, computer-generated fake images, which in general do not reflect what happens in reality, can now easily deceive the inspection of the human visual system. In this work, we propose a convolutional neural network (CNN)-based model to distinguish computer-generated (CG) images from natural images (NIs) using channel and pixel correlation. The key component of the proposed CNN architecture is a self-coding module that takes color images as input and explicitly extracts the correlation between color channels. Unlike previous approaches that directly apply a CNN to this problem, we consider the generality of the network (or subnetwork): the newly introduced hybrid correlation module can be directly combined with existing CNN models to enhance the discrimination capacity of the original networks. Experimental results demonstrate that the proposed network outperforms state-of-the-art methods in terms of classification performance. We also show that the hybrid correlation module improves the classification accuracy of different CNN architectures.
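The exact structure of the self-coding/hybrid correlation module is not given in the abstract; the sketch below only illustrates the general idea of feeding explicit inter-channel correlations to a backbone CNN, using per-pixel channel products as an assumed stand-in.

```python
# A hedged sketch of a "channel correlation" front-end: it augments the RGB
# input with explicit per-pixel inter-channel products before a backbone CNN.
# The paper's actual self-coding/hybrid correlation module is more elaborate.
import torch
import torch.nn as nn

class ChannelCorrelationFrontEnd(nn.Module):
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # any CNN expecting 6-channel input

    def forward(self, x):  # x: (B, 3, H, W) RGB in [0, 1]
        r, g, b = x[:, 0:1], x[:, 1:2], x[:, 2:3]
        corr = torch.cat([r * g, r * b, g * b], dim=1)      # pairwise channel products
        return self.backbone(torch.cat([x, corr], dim=1))   # (B, 6, H, W)

# Example: pair the front-end with a tiny CG-vs-NI classifier head.
backbone = nn.Sequential(
    nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
model = ChannelCorrelationFrontEnd(backbone)
logits = model(torch.rand(4, 3, 64, 64))  # -> shape (4, 2)
```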
Funding: This work was supported by the National Natural Science Foundation of China under Grant No. 600330107, the Zhejiang Provincial Natural Science Foundation of China under Grant No. Y105324, and the Planned Program of the Science and Technology Department of Zhejiang Province, China (Grant No. 2006C31065).
Abstract: This paper proposes a Markov Random Field (MRF) model-based approach to natural image matting for complex scenes. After the trimap for matting is given manually, the unknown region is roughly segmented into several joint sub-regions. In each sub-region, we partition the colors of neighboring background or foreground pixels into several clusters in RGB color space and assign a matting label to each unknown pixel. All the labels are modelled as an MRF, and the matting problem is then formulated as a maximum a posteriori (MAP) estimation problem. Simulated annealing is used to find the optimal MAP estimate. Better results can be obtained with the same amount of user interaction when the images are complex. Results of natural image matting experiments performed on complex images using this approach are shown and compared in this paper.
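The MAP-by-simulated-annealing step can be sketched generically as below; the unary costs are taken as given, and the paper's actual color-cluster likelihoods and matting energy are not reproduced.

```python
# Generic sketch of MAP estimation over a grid MRF by simulated annealing, the
# optimization step described in the abstract. A simple Potts smoothness term
# stands in for the paper's matting energy.
import numpy as np

def simulated_annealing_map(unary, beta=1.0, t0=3.0, cooling=0.97, iters=200, seed=0):
    """unary: (H, W, L) cost of assigning each of L labels to each pixel."""
    rng = np.random.default_rng(seed)
    H, W, L = unary.shape
    labels = np.argmin(unary, axis=2)      # start from the unary-optimal labeling
    T = t0

    def local_energy(i, j, lab):
        e = unary[i, j, lab]
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # Potts smoothness term
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != lab:
                e += beta
        return e

    for _ in range(iters):
        i, j = rng.integers(H), rng.integers(W)
        proposal = rng.integers(L)
        delta = local_energy(i, j, proposal) - local_energy(i, j, labels[i, j])
        if delta < 0 or rng.random() < np.exp(-delta / T):
            labels[i, j] = proposal        # accept downhill moves, or uphill with prob.
        T *= cooling
    return labels
```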
Abstract: This essay presents a preliminary examination of the use of natural imagery in the Inner Chapters of the Zhuangzi. The essay analyzes the characteristics of this imagery and the possible reasons for Zhuangzi's particular use of fantastic language, exaggeration, and fiction in connection with natural images. In doing so, I also compare the Zhuangzi with several important classical Chinese and Greek texts. In addition, I explore the possible motivations for and influence exerted by Zhuangzi's use of natural imagery. The essay aims to demonstrate that an analysis of the roles played by natural imagery in the Zhuangzi contributes significantly to our understanding of the work as a whole, as well as its abiding influence on later works.
Abstract: In today's real world, an important research area in image processing is scene text detection and recognition. Scene text can appear in different languages, fonts, sizes, colours, orientations and structures; moreover, the aspect ratios and layouts of scene text may differ significantly. All these variations pose significant challenges for detection and recognition algorithms applied to text in natural scenes. In this paper, a new intelligent text detection and recognition method is proposed for detecting text in natural scenes and recognizing it with the newly proposed Conditional Random Field-based, fuzzy-rules-incorporated Convolutional Neural Network (CR-CNN). Moreover, we recommend a new text detection method for detecting the exact text from the input natural scene images. To enhance the edge detection process, image pre-processing activities such as edge detection and color modeling are applied in this work. In addition, we generate new fuzzy rules for making effective decisions during text detection and recognition. The experiments were conducted on standard benchmark datasets such as ICDAR 2003, ICDAR 2011, ICDAR 2005 and SVT, and achieved better accuracy in text detection and recognition. Using these datasets, five different experiments were conducted to evaluate the proposed model. We also compared the proposed system with other classifiers such as the SVM, the MLP and the CNN; in these comparisons, the proposed model achieved better classification accuracy than the other existing works.
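The pre-processing stage mentioned above (edge detection and colour modelling) might look like the following OpenCV sketch; the thresholds are arbitrary assumptions, and the CR-CNN and fuzzy/CRF decision rules are not reproduced.

```python
# Sketch of the kind of pre-processing the abstract mentions (edge detection
# and colour modelling) using standard OpenCV operations.
import cv2
import numpy as np

def preprocess_scene_image(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                    # edge map for text strokes

    # Simple colour modelling: work in HSV and keep high-contrast regions.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    saturation, value = hsv[:, :, 1], hsv[:, :, 2]
    colour_mask = ((saturation > 60) | (value > 180)).astype(np.uint8) * 255

    candidates = cv2.bitwise_and(edges, colour_mask)    # likely text-edge pixels
    return edges, colour_mask, candidates
```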
Funding: The National Natural Science Foundation of China (No. 60372076).
Abstract: There are many detectors for least significant bit (LSB) steganography, which is broadly used for hiding information in digital images. The length of the hidden information is one of the most important parameters in detecting steganographic content. Using the 2-D gradient of a pixel and the distance between variables, the proposed method estimates the length of hidden information in natural grayscale images without the original image. Extensive experimental results show good performance, even at low embedding rates, compared with other methods. Furthermore, the proposed method also works well regardless of the status of the embedded information.
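The sketch below extracts the LSB plane of a grayscale image and computes a simple 2-D-gradient statistic of the kind such detectors build on; the paper's actual length-estimation formula is not reproduced.

```python
# Sketch: extract the LSB plane of a grayscale image and compute a simple
# 2-D-gradient statistic. This is only an illustrative building block, not the
# paper's estimator of hidden-message length.
import numpy as np

def lsb_plane(gray_image):
    return gray_image.astype(np.uint8) & 1

def gradient_statistic(gray_image):
    img = gray_image.astype(np.float64)
    gy, gx = np.gradient(img)                 # 2-D gradient of pixel intensities
    magnitude = np.hypot(gx, gy)
    # Compare gradient energy on pixels whose LSB is 0 vs. 1; LSB embedding
    # tends to disturb this balance, which is what length estimators exploit.
    lsb = lsb_plane(gray_image)
    return magnitude[lsb == 1].mean() - magnitude[lsb == 0].mean()
```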
Funding: This work was funded by the Deanship of Scientific Research at Jouf University (Kingdom of Saudi Arabia) under Grant No. DSR-2021-02-0392.
Abstract: Detecting and recognizing text in natural scene images is challenging because image quality depends on the conditions under which the image is captured, such as viewing angle, blurring, sensor noise, etc. In this paper, a prototype for text detection and recognition from natural scene images is proposed. The prototype is based on the Raspberry Pi 4 and a Universal Serial Bus (USB) camera, and embeds our text detection and recognition model, which was developed in Python. Our model uses the deep-learning-based Efficient and Accurate Scene Text Detector (EAST) model for text localization and detection, and Tesseract-OCR as the Optical Character Recognition (OCR) engine for text recognition. The prototype is controlled through the Virtual Network Computing (VNC) tool from a computer over a wireless connection. The experimental results show that the recognition rate for images captured by the prototype's camera can reach 99.75% with low computational complexity. Furthermore, the prototype outperforms the Tesseract software in terms of recognition rate, and it matches the recognition rate of the EasyOCR software on the Raspberry Pi 4 board while reducing the execution time by an average of 89%.
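A minimal version of an EAST-plus-Tesseract pipeline could be assembled with OpenCV's DNN module and pytesseract as below. The model filename and image path are assumptions (the frozen EAST graph must be downloaded separately), and the full geometry decoding with rotated boxes and non-maximum suppression is omitted: the score map is only used to crop one coarse text region before OCR.

```python
# Minimal sketch of an EAST + Tesseract pipeline of the kind the prototype
# describes; not the authors' implementation.
import cv2
import numpy as np
import pytesseract

def detect_and_recognize(image_path, east_model="frozen_east_text_detection.pb"):
    image = cv2.imread(image_path)
    resized = cv2.resize(image, (320, 320))              # EAST needs multiples of 32

    net = cv2.dnn.readNet(east_model)
    blob = cv2.dnn.blobFromImage(resized, 1.0, (320, 320),
                                 (123.68, 116.78, 103.94), swapRB=True, crop=False)
    net.setInput(blob)
    scores = net.forward("feature_fusion/Conv_7/Sigmoid")[0, 0]   # text confidence map

    ys, xs = np.where(scores > 0.5)
    if len(xs) == 0:
        return ""
    # Map the coarse high-confidence region back to original image coordinates.
    sx, sy = image.shape[1] / scores.shape[1], image.shape[0] / scores.shape[0]
    x0, x1 = int(xs.min() * sx), int((xs.max() + 1) * sx)
    y0, y1 = int(ys.min() * sy), int((ys.max() + 1) * sy)
    crop = image[max(y0, 0):y1, max(x0, 0):x1]

    return pytesseract.image_to_string(crop)

print(detect_and_recognize("scene.jpg"))  # "scene.jpg" is a placeholder path
```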
Funding: Projects (60835005, 90820302) supported by the National Natural Science Foundation of China; Project (2007CB311001) supported by the National Basic Research Program of China.
Abstract: Globally exponential stability (which implies convergence and uniqueness) of the classical iterative algorithm is established using methods of heat equations and energy integrals, after embedding the discrete iteration into a continuous flow. The stability condition depends explicitly on the smoothness of the image sequence, the size of the image domain, the value of the regularization parameter, and the discretization step. In particular, as the discretization step approaches zero, stability holds unconditionally. The analysis also clarifies the relations among the iterative algorithm, the original variational formulation and the PDE system. The proper regularity of solutions and of natural images is briefly surveyed and discussed. Experimental results validate the theoretical claims on both convergence and exponential stability.
Funding: Supported by the Research Management Center, Xiamen University Malaysia, under XMUM Research Program Cycle 4 (Grant No. XMUMRF/2019-C4/IECE/0012).
Abstract: Fuzzy C-means (FCM) is a clustering method that falls under unsupervised machine learning. The main issues plaguing this clustering algorithm are the unknown number of clusters in a particular dataset and the initialization sensitivity of the cluster centres. The Artificial Bee Colony (ABC) is a swarm algorithm that iteratively improves the quality of its members' solutions using particular kinds of randomness. However, ABC has some weaknesses, such as balancing exploration and exploitation. To improve the exploration process within the ABC algorithm, the mean artificial bee colony (MeanABC) is used, with a modified search equation that depends on the mean of previous solutions and the global best. Furthermore, to solve the main issues of FCM, an automatic clustering algorithm based on the mean artificial bee colony, called AC-MeanABC, is proposed. It uses MeanABC's ability to balance exploration and exploitation, and its capacity to explore the positive and negative directions of the search space, to find the best number of clusters and the best centroid values. A few benchmark datasets and a set of natural images were used to evaluate the effectiveness of AC-MeanABC. The experimental findings are encouraging and indicate considerable improvements over other state-of-the-art approaches in the same domain.
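For reference, the standard fuzzy C-means updates whose cluster-count and initialization issues AC-MeanABC targets can be written in a few lines of NumPy; the MeanABC optimization itself is not shown.

```python
# Minimal NumPy implementation of the standard fuzzy C-means updates.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """X: (N, d) data; c: number of clusters; m: fuzzifier (> 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1

    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]          # weighted centroids
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-10
        # Standard membership update: u_ik proportional to dist_ik^(-2/(m-1)).
        inv = dist ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```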
Funding: Supported by the National Natural Science Foundation of China (Nos. 61133009 and U1304616).
Abstract: A new matting algorithm based on color distance and differential distance is proposed to deal with the problem that many matting methods perform poorly on complex natural images. The proposed method combines local sampling with global sampling to select foreground and background pairs for unknown pixels, and a new cost function based on color distance and differential distance is then constructed to further optimize the selected sample pairs. Finally, a quadratic objective function is used based on the matting Laplacian from KNN matting, augmented with a texture feature. Experiments on various test images confirm that the results obtained by the proposed method are more accurate than those obtained by traditional methods. A comparison of four error metrics on a benchmark dataset among several algorithms also demonstrates the effectiveness of the proposed method.
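The sample-pair cost might combine a color-fit term with a gradient-based ("differential") term roughly as in the sketch below; the weights and the KNN-matting Laplacian refinement are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a sample-pair cost mixing a color-fit term with a
# "differential" (gradient-based) term, in the spirit of the abstract.
import numpy as np

def pair_cost(pixel, fg, bg, grad_pixel, grad_fg, grad_bg, w_color=1.0, w_diff=0.5):
    """pixel/fg/bg: RGB vectors; grad_*: gradient magnitudes at those samples."""
    denom = np.dot(fg - bg, fg - bg) + 1e-8
    alpha = np.clip(np.dot(pixel - bg, fg - bg) / denom, 0.0, 1.0)

    # Color distance: how well the (fg, bg, alpha) model reproduces the pixel.
    color_dist = np.linalg.norm(pixel - (alpha * fg + (1 - alpha) * bg))
    # Differential distance: gradient consistency between the pixel and samples.
    diff_dist = abs(grad_pixel - (alpha * grad_fg + (1 - alpha) * grad_bg))

    return w_color * color_dist + w_diff * diff_dist, alpha
```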
Abstract: Text, as one of the most influential inventions of humanity, has played an important role in human life from ancient times to the present. The rich and precise information embodied in text is very useful in a wide range of vision-based applications; therefore, text detection and recognition in natural scenes have become important and active research topics in computer vision and document analysis. Especially in recent years, the community has seen a surge of research efforts and substantial progress in these fields, though a variety of challenges (e.g. noise, blur, distortion, occlusion and variation) still remain. The purposes of this survey are three-fold: 1) introduce up-to-date works, 2) identify state-of-the-art algorithms, and 3) predict potential research directions for the future. Moreover, this paper provides comprehensive links to publicly available resources, including benchmark datasets, source codes, and online demos. In summary, this literature review can serve as a good reference for researchers in the areas of scene text detection and recognition.