Abstract: To achieve good results with convolutional neural networks (CNNs) on text classification tasks, a term-based pooling operation for CNNs is proposed. First, the convolution results of several convolution kernels are combined, and the pooling operation is then applied to the combined results; on this basis, three CNN models (named TBCNN, MCT-CNN and MMCT-CNN, respectively) are constructed and the corresponding algorithmic ideas are detailed. Second, experiments and analyses are designed to show the effects of three key parameters (convolution kernel, number of combined kernels, and word embedding) on the three CNN models and to further demonstrate their effectiveness. The experimental results show that, compared with the traditional text classification method in CNNs, the proposed term-based pooling method is not only feasible but also clearly superior in performance.
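As a concrete illustration of the term-based pooling idea, the following minimal PyTorch sketch combines the feature maps of groups of convolution kernels (here by element-wise summation) before max-pooling, instead of pooling each kernel's map separately. The layer sizes and the grouping rule are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class TermBasedPoolingBlock(nn.Module):
    def __init__(self, embed_dim=128, num_kernels=100, kernel_size=3, group_size=4):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, num_kernels, kernel_size)
        self.group_size = group_size  # assumed grouping rule, not from the paper

    def forward(self, x):                # x: (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)            # -> (batch, embed_dim, seq_len)
        maps = torch.relu(self.conv(x))  # -> (batch, num_kernels, L)
        b, k, l = maps.shape
        g = self.group_size
        # combine the feature maps of every `group_size` kernels by summation
        combined = maps.view(b, k // g, g, l).sum(dim=2)  # (batch, k//g, L)
        # pool over the sequence dimension only after combination
        pooled, _ = combined.max(dim=2)                   # (batch, k//g)
        return pooled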
Abstract: With the rapid development of the Internet, a growing number of users post subjective comments on BBS forums, blogs and shopping websites. These comments contain the critics' attitudes, emotions, views and other information. Using this information sensibly can help gauge public opinion and respond to it in a timely manner, help dealers improve the quality and service of their products, and help consumers learn about merchandise. This paper mainly discusses the use of a convolutional neural network (CNN) for text feature extraction and details its concrete realization. The extracted features are then combined with other text classifiers to perform classification. The experimental results show the effectiveness of the proposed method.
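The pipeline sketched below follows the abstract's outline under stated assumptions: a small CNN serves only as a feature extractor for comment text, and the pooled feature vector is handed to a separate conventional classifier (a linear SVM here; the abstract does not name one). All dimensions and the placeholder data are assumptions.

import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import LinearSVC

class CommentFeatureExtractor(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100, num_filters=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size=3)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))
        return x.max(dim=2).values                 # (batch, num_filters)

# usage: features from the CNN train a conventional text classifier
extractor = CommentFeatureExtractor()
with torch.no_grad():
    feats = extractor(torch.randint(0, 20000, (32, 50))).numpy()
labels = np.array([0, 1] * 16)  # placeholder sentiment labels for the sketch
clf = LinearSVC().fit(feats, labels)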
Abstract: In recent years, social media platforms have gained immense popularity, and as a result there has been a tremendous increase in content on them. This content can relate to an individual's sentiments, thoughts, stories, advertisements, and news, among many other content types. With the recent increase in online content, the importance of distinguishing fake from real news has grown. Although a great deal of work exists on fake news detection, a fuzzy CRNN had not been explored in this direction. In this work, a system is designed to classify fake and real news using fuzzy logic. The initial feature extraction is done with a convolutional recurrent neural network (CRNN). After feature extraction, word indexing is performed with high dimensionality; then, based on the indexing measures, a ranking process identifies whether news is fake or real. The fuzzy CRNN model is trained to yield outstanding results with 99.99 ± 0.01% accuracy. This work uses three different datasets (LIAR, LIAR-PLUS, and ISOT) to find the most accurate model.
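The abstract gives only the outline (CRNN features, indexing and ranking, fuzzy decision), so the following is a heavily hedged sketch of the final step only: a CRNN confidence score is mapped through triangular fuzzy membership functions for the labels "fake" and "real". The membership shapes are illustrative assumptions, not the authors' definitions.

import numpy as np

def triangular(x, a, b, c):
    # Triangular membership function peaking at b on support [a, c].
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_decision(crnn_score):
    # crnn_score in [0, 1]: higher means the CRNN leans toward "real".
    mu_fake = triangular(crnn_score, -0.2, 0.0, 0.6)  # assumed shape
    mu_real = triangular(crnn_score, 0.4, 1.0, 1.2)   # assumed shape
    return "real" if mu_real >= mu_fake else "fake"

print(fuzzy_decision(0.83))  # -> "real"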
Funding: The researchers would like to thank the Deanship of Scientific Research, Qassim University, for funding the publication of this project.
Abstract: A tremendous number of vendor invoices is generated in the corporate sector. Automating the manual data entry in payable documents requires highly accurate Optical Character Recognition (OCR). This paper proposes an end-to-end OCR system that performs both localization and recognition and serves as a single unit for automating payable document processing such as cheques and cash disbursements. For text localization, the maximally stable extremal region (MSER) method is used, which extracts word or digit chunks from an invoice. Each chunk is then passed to a deep learning model that performs text recognition. The deep learning model combines convolutional neural networks and long short-term memory (LSTM): the convolutional layers extract features, which are fed to the LSTM. The model integrates feature extraction, sequence modeling, and transcription into a unified network. It handles sequences of unconstrained length, independent of character segmentation or horizontal scale normalization. Furthermore, it applies to both lexicon-free and lexicon-based text recognition, and it yields a comparatively small model that can be deployed in practical applications. The superior overall performance in the experimental evaluation demonstrates the usefulness of the proposed model, which is generic and can be used in other similar recognition scenarios.
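A minimal sketch of the localization stage, assuming OpenCV's MSER implementation: candidate word/digit regions are extracted from an invoice image, and each cropped chunk would then be passed to the CNN+LSTM recognizer. The size thresholds are illustrative.

import cv2

def extract_text_chunks(invoice_path):
    gray = cv2.imread(invoice_path, cv2.IMREAD_GRAYSCALE)
    mser = cv2.MSER_create()                 # maximally stable extremal regions
    regions, _ = mser.detectRegions(gray)
    chunks = []
    for pts in regions:
        x, y, w, h = cv2.boundingRect(pts)
        if w > 10 and h > 10:                # discard tiny noise regions (assumed threshold)
            chunks.append(gray[y:y + h, x:x + w])
    return chunks                            # each chunk goes to the CRNN recognizer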
Abstract: English text sentiment orientation analysis is a fundamental problem in natural language processing. Traditional word segmentation methods can produce ambiguity when dealing with English text. Therefore, this paper proposes a novel English text sentiment analysis method based on a convolutional neural network and a U-network. The proposed method uses a parallel convolution layer to learn the associations and combinations between word vectors. The results are then fed into a hierarchical attention network, whose basic unit is a U-network, to determine the affective tendency. The experimental results show that the accuracy of polarity classification on the English review dataset reaches 93.45%, higher than that of many existing sentiment analysis models.
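The parallel convolution layer can be pictured as below: convolutions with several window sizes run side by side over the word-vector sequence and their outputs are concatenated, capturing word associations at multiple n-gram widths. The U-network attention stage is omitted, and all sizes are assumptions.

import torch
import torch.nn as nn

class ParallelConv(nn.Module):
    def __init__(self, embed_dim=300, filters=64, sizes=(2, 3, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, filters, k, padding=k // 2) for k in sizes
        )

    def forward(self, x):                  # x: (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)
        outs = [torch.relu(c(x)) for c in self.convs]
        n = min(o.size(2) for o in outs)   # align lengths before concatenation
        return torch.cat([o[:, :, :n] for o in outs], dim=1)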
Abstract: Recognizing irregular text in natural images is a challenging task in computer vision; existing approaches still struggle with it because of its diverse shapes. In this paper, we propose a simple yet powerful irregular text recognition framework based on an encoder-decoder architecture, divided into four main modules. First, in the image transformation module, a Thin Plate Spline (TPS) transformation rectifies the irregular text image into a readable one. Second, we propose a novel Spatial Attention Module (SAM) that compels the model to concentrate on text regions and yields enriched feature maps. Third, a deep bidirectional long short-term memory (Bi-LSTM) network turns the visual feature map produced by a Convolutional Neural Network (CNN) into a contextual feature map. Finally, we propose a Dual Step Attention Mechanism (DSAM), integrated with a Connectionist Temporal Classification (CTC)-Attention decoder, that re-weights visual features and focuses on intra-sequence relationships to generate a more accurate character sequence. The effectiveness of the proposed framework is verified through extensive experiments on various benchmark datasets such as SVT, ICDAR, CUTE80, and IIIT5k, with performance analyzed using the accuracy metric; the experiments demonstrate that our method outperforms existing approaches on both regular and irregular text. Additionally, the robustness of the approach is evaluated on grocery datasets containing complex text images, such as GroZi-120, WebMarket, SKU-110K, and the Freiburg Groceries dataset, where our framework again produces superior performance.
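A hedged sketch of a spatial attention module in the spirit of the SAM step: a 1x1 convolution scores every spatial position of the CNN feature map, a sigmoid turns the scores into an attention mask, and the mask re-weights the features so text regions dominate. This is a generic formulation, not the authors' exact design.

import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels=512):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat):                    # feat: (B, C, H, W)
        mask = torch.sigmoid(self.score(feat))  # (B, 1, H, W), values in [0, 1]
        return feat * mask                      # text regions emphasized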
Abstract: Offensive messages on social media have recently been used frequently to harass and criticize people. In recent studies, many promising algorithms have been developed to identify offensive texts. Most analyze text in a unidirectional manner, whereas a bidirectional method can maximize performance and capture the semantic and contextual information in sentences. In addition, while many separate models identify offensive texts in either a monolingual or a multilingual setting, few can detect both. In this study, a detection system for both monolingual and multilingual offensive texts is developed by combining a deep convolutional neural network with bidirectional encoder representations from transformers (Deep-BERT) to identify offensive posts on social media. This paper explores a variety of ways to deal with multilingualism, including collaborative multilingual and translation-based approaches. Deep-BERT is then tested on Bengali and English datasets with different pretrained BERT word-embedding techniques; the proposed model outperforms all existing offensive text classification algorithms, reaching an accuracy of 91.83%. It is a state-of-the-art model that can classify both monolingual and multilingual offensive texts.
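A sketch of the Deep-BERT combination under stated assumptions: contextual token embeddings from a pretrained multilingual BERT feed a small convolutional classifier. The model name and layer sizes are assumptions; the paper's exact architecture may differ.

import torch
import torch.nn as nn
from transformers import AutoModel

class DeepBERT(nn.Module):
    def __init__(self, name="bert-base-multilingual-cased", filters=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained(name)  # assumed checkpoint
        self.conv = nn.Conv1d(self.bert.config.hidden_size, filters, 3, padding=1)
        self.out = nn.Linear(filters, 2)             # offensive / not offensive

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        h = torch.relu(self.conv(h.transpose(1, 2)))  # (B, filters, seq_len)
        return self.out(h.max(dim=2).values)          # max-pooled logits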
Funding: This project was funded by the Deanship of Scientific Research (DSR), King Abdul-Aziz University, Jeddah, Saudi Arabia, under Grant No. RG-11-611-43.
Abstract: Handwritten character recognition systems are used in every field of life nowadays, including shopping malls, banks, and educational institutes. Urdu is the national language of Pakistan and the fourth most spoken language in the world, yet recognizing Urdu handwritten characters remains challenging owing to their cursive nature. Our paper presents a Convolutional Neural Network (CNN) model for offline and online Urdu handwritten alphabet recognition (UHAR). Our research also contributes an Urdu handwritten dataset (UHDS) to empower future work in this field. For offline systems, optical readers are used for extracting the alphabets, while diagonal-based extraction methods are implemented in online systems. Moreover, our research tackles the lack of comprehensive, standard Urdu alphabet datasets for research on Urdu text recognition. To this end, we collected 1000 handwritten samples for each letter, a total of 38000 samples, from participants aged 12 to 25, and trained our CNN model using online and offline mediums. Subsequently, we carried out detailed character recognition experiments, as detailed in the results. The proposed CNN model outperformed previously published approaches.
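A minimal character-classification CNN consistent with the setup above (38 letter classes, 1000 samples each) might look as follows; the input size, depth, and channel counts are assumptions rather than the paper's exact model.

import torch.nn as nn

urdu_cnn = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 32x32
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 16x16
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 256), nn.ReLU(),
    nn.Linear(256, 38),   # one logit per Urdu letter class
)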
Funding: This work is supported by the National Natural Science Foundation of China (61872231, 61701297).
Abstract: The past decade has seen the rapid development of text detection based on deep learning. However, current methods of Chinese character detection and recognition have proven to be poor, and the accuracy of segmenting text boxes in natural scenes is not impressive. The reasons can be summarized in two points: the complexity of natural scenes and the large number of Chinese character types. In response to these problems, we propose a lightweight neural network architecture named CTSF. It consists of two modules: a text detection network that combines CTPN with the image feature extraction modules of PVANet, named CDSE, and a character recognition network based on spatial pyramid pooling and the fusion of Chinese character skeleton features, named SPPCNN-SF, realizing text detection and recognition, respectively. Our model performs much better than the original model on ICDAR2011 and ICDAR2013 (achieving F-measures of 85% and 88%) and speeds up the training phase. In addition, our method achieves excellent performance on three Chinese datasets, with accuracies of 95.12%, 95.56% and 96.01%.
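The spatial pyramid pooling used by the recognition branch can be sketched as below: feature maps of any size are max-pooled at several fixed grid resolutions and concatenated into a fixed-length vector. The pyramid levels are illustrative assumptions.

import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    # feat: (B, C, H, W) -> (B, C * sum(l*l for l in levels)), regardless of H, W
    b = feat.size(0)
    pooled = [F.adaptive_max_pool2d(feat, (l, l)).view(b, -1) for l in levels]
    return torch.cat(pooled, dim=1)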
Funding: This project was supported under the Fundamental Research Grant Scheme (FRGS), FRGS/1/2019/ICT02/UKM/02/9, entitled "Convolution Neural Network Enhancement Based on Adaptive Convexity and Regularization Functions for Fake Video Analytics". The grant was received by Assist. Prof. Dr. S.N.H. Sheikh Abdullah, https://www.ukm.my/spifper/research_news/instrumentfunds.
Abstract: Text extraction from images using traditional techniques of image collection and machine-learning pattern recognition is time-consuming due to the number of features extracted from the images. Deep neural networks offer effective solutions for extracting text features from images with a few techniques and the ability to train on large image datasets with significant results. This study proposes using dual max-pooling and concatenated Convolutional Neural Network (CNN) layers with the ReLU activation function and an Optimized Leaky ReLU (OLRelu). The proposed method divides a word image into slices containing characters and passes them to deep learning layers to extract feature maps and reconstruct the predicted words. Bidirectional Long Short-Term Memory (BiLSTM) layers extract more compelling features and link the time sequence in the forward and backward directions during the training phase. The Connectionist Temporal Classification (CTC) function calculates the training and validation loss rates, decodes the extracted features to re-form characters, and links them according to their time sequence. The performance of the proposed model is evaluated using training and validation loss errors on the MJSynth and IAM datasets. On IAM, the average loss error was 2.09% with the proposed dual max-pooling and OLRelu; on MJSynth, the best validation loss rate shrank to 2.2% by applying concatenated CNN layers and ReLU.
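The abstract names "Optimized Leaky ReLU (OLRelu)" without defining it, so the sketch below only illustrates the general shape such an activation could take: a leaky ReLU whose negative slope is a learnable parameter (as in PReLU). Whether this matches the authors' OLRelu is an open assumption.

import torch
import torch.nn as nn

class LearnableLeakyReLU(nn.Module):
    def __init__(self, init_slope=0.01):
        super().__init__()
        # negative-side slope optimized jointly with the network weights
        self.slope = nn.Parameter(torch.tensor(init_slope))

    def forward(self, x):
        return torch.where(x >= 0, x, self.slope * x)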
Abstract: In recent years, deep learning models have become indispensable in several fields, such as computer vision, automatic object recognition, and natural language processing. Implementing a robust and efficient handwritten text recognition system remains a challenge for the research community, especially for Arabic, which, compared to other languages, has a dearth of published work. In this work, we present an efficient new system for offline Arabic handwritten text recognition. Our approach combines a Convolutional Neural Network (CNN) and a Bidirectional Long Short-Term Memory (BLSTM) network followed by a Connectionist Temporal Classification (CTC) layer. Moreover, during the training phase, we introduce a data augmentation algorithm to increase the quality of the data. Our approach can recognize Arabic handwritten texts without segmenting the characters, thus overcoming several problems related to segmentation. To train and evaluate our approach, we used two Arabic handwritten text recognition databases, IFN/ENIT and KHATT. The experimental results show that our approach gives better results than other methods in the literature.
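A compact sketch of the CNN-BLSTM-CTC pipeline described above, using PyTorch's built-in CTC loss; the shapes, class count, and layer sizes are assumptions. The point it illustrates is that CTC allows training on unsegmented line images.

import torch
import torch.nn as nn

cnn = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
blstm = nn.LSTM(input_size=64 * 16, hidden_size=128,
                bidirectional=True, batch_first=True)
proj = nn.Linear(256, 41)          # 40 character classes + CTC blank (assumed)
ctc = nn.CTCLoss(blank=40)

img = torch.randn(2, 1, 32, 128)   # batch of unsegmented line images (H=32, W=128)
f = cnn(img)                       # (2, 64, 16, 64)
f = f.permute(0, 3, 1, 2).flatten(2)   # (2, 64 timesteps, 64*16 features)
logits = proj(blstm(f)[0])             # (2, 64, 41)
log_probs = logits.log_softmax(2).transpose(0, 1)  # (T, B, C) as CTCLoss expects
targets = torch.randint(0, 40, (2, 10))            # placeholder label sequences
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 64, dtype=torch.long),
           target_lengths=torch.full((2,), 10, dtype=torch.long))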