With a population of 440 million, Arabic language users form one of the most rapidly growing language groups on the web in terms of the number of Internet users. Eleven million Twitter users are active monthly and post nearly 27.4 million tweets every day. Developing a classification system for the Arabic language requires an understanding of the syntactic framework of words, so that words can be manipulated and represented in a way that makes their classification effective. With this in mind, this article introduces a Dolphin Swarm Optimization with Convolutional Deep Belief Network for Short Text Classification (DSOCDBN-STC) model for an Arabic corpus. The presented DSOCDBN-STC model aims primarily to classify Arabic short text from social media. The model comprises preprocessing and word2vec word embedding at the preliminary stage. It then applies a CDBN-based classification model to the Arabic short text. Finally, the DSO technique is exploited to optimally tune the hyperparameters of the CDBN method. To establish the enhanced performance of the DSOCDBN-STC model, a wide range of simulations was performed. The simulation results confirmed the superiority of the DSOCDBN-STC model over existing models, with an improved accuracy of 99.26%.
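As an illustration of the preprocessing and word2vec embedding stage described above, the following sketch generates the (center, context) training pairs a skip-gram word2vec model consumes; the regex tokenizer and window size are illustrative assumptions, not the paper's exact pipeline.

```python
import re

def preprocess(text):
    # Illustrative normalization: lowercase and keep word characters only.
    return re.findall(r"\w+", text.lower())

def skipgram_pairs(tokens, window=2):
    # For each center word, emit (center, context) pairs within the window,
    # which is what a skip-gram word2vec model consumes during training.
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

tokens = preprocess("Short text classification on social media")
pairs = skipgram_pairs(tokens, window=2)
```

In a full pipeline these pairs would drive the embedding training whose vectors then feed the classifier.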
The historical and cultural districts of a city serve as important cultural heritage and tourism resources. This paper focused on four such districts in Yangzhou and performed semantic analysis on online public comments using ROST CM6 software. Based on the high-frequency words, the attention preferences for site elements, activities, and feelings in Yangzhou's historical and cultural districts were analyzed. Through analysis of the semantic network and of public emotional tendency, the relationship between the protection and utilization of these districts and the perceptions and demands of users was discussed, and suggestions for the protection, utilization, and renewal of historical and cultural districts were put forward.
This study presents an Abstractive Arabic Text Summarization using Hyperparameter Tuned Denoising Deep Neural Network (AATS-HTDDNN) technique. The presented AATS-HTDDNN technique aims to generate summaries of Arabic text. In this technique, the DDNN model is utilized to generate the summary. The study exploits the Chameleon Swarm Optimization (CSO) algorithm to fine-tune the hyperparameters of the DDNN model, since they considerably affect summarization efficiency; this phase constitutes the novelty of the current study. To validate the enhanced summarization performance of the proposed AATS-HTDDNN model, a comprehensive experimental analysis was conducted. The comparison study outcomes confirmed the better performance of the AATS-HTDDNN model over other approaches.
Text perception is crucial for understanding the semantics of outdoor scenes, making it a key requirement for building intelligent systems for driver assistance or autonomous driving. Text information in car-mounted videos can assist drivers in making decisions. However, car-mounted video text images pose challenges such as complex backgrounds, small fonts, and the need for real-time detection. We propose a robust Car-mounted Video Text Detector (CVTD), a lightweight text detection model based on ResNet18 for feature extraction that is capable of detecting text of arbitrary shape. The model efficiently extracts global text positions through Coordinate Attention Threshold Activation (CATA) and enhances its representation capability by stacking two Feature Pyramid Enhancement Fusion Modules (FPEFM), strengthening feature representation and integrating local text features with global position information. The enhanced feature maps, when acted upon by Text Activation Maps (TAM), effectively distinguish text foreground from non-text regions. Additionally, we collected and annotated a dataset of 2,200 images of Car-mounted Video Text (CVT) under various road conditions for training and evaluating the model. We further tested the model on four other challenging public natural scene text detection benchmark datasets, demonstrating its strong generalization ability and real-time detection speed. The model holds potential for practical applications in real-world scenarios.
Text classification, by automatically categorizing texts, is one of the foundational elements of natural language processing applications. This study investigates how text classification performance can be improved through the integration of entity-relation information obtained from the Wikidata database and BERT-based pre-trained Named Entity Recognition (NER) models. Focusing on a significant challenge in the field of natural language processing (NLP), the research evaluates the potential of using entity and relational information to extract deeper meaning from texts. The adopted methodology encompasses a comprehensive approach that includes text preprocessing, entity detection, and the integration of relational information. Experiments conducted on text datasets in both Turkish and English assess the performance of various classification algorithms, such as Support Vector Machine, Logistic Regression, Deep Neural Network, and Convolutional Neural Network. The results indicate that the integration of entity-relation information can significantly enhance algorithm performance in text classification tasks and offer new perspectives for information extraction and semantic analysis in NLP applications. Contributions of this work include the utilization of distantly supervised entity-relation information in Turkish text classification, the development of a Turkish relational text classification approach, and the creation of a relational database. By demonstrating potential performance improvements through the integration of distantly supervised entity-relation information into Turkish text classification, this research aims to support the effectiveness of text-based artificial intelligence (AI) tools. Additionally, it makes significant contributions to the development of multilingual text classification systems by adding deeper meaning to text content, thereby providing a valuable addition to current NLP studies and setting an important reference point for future research.
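The entity-relation integration described above can be sketched as augmenting a document's tokens with relation labels before feature extraction; the `ENTITY_RELATIONS` table below is a hypothetical stand-in for the Wikidata lookup and NER output, not the study's actual resources.

```python
from collections import Counter

# Hypothetical entity-relation lookup standing in for Wikidata + NER output.
ENTITY_RELATIONS = {
    "ankara": ["capital_of:turkey"],
    "python": ["instance_of:programming_language"],
}

def augment_with_relations(text):
    # Append relation tokens for any recognized entity, so the downstream
    # classifier sees relational context alongside the surface words.
    tokens = text.lower().split()
    relations = [r for t in tokens for r in ENTITY_RELATIONS.get(t, [])]
    return tokens + relations

features = Counter(augment_with_relations("Ankara hosts a Python conference"))
```

The resulting counts can feed any of the classifiers evaluated in the study (SVM, Logistic Regression, DNN, CNN).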
Scene text detection is an important task in computer vision. In this paper, we present YOLOv5 Scene Text (YOLOv5ST), an optimized architecture based on YOLOv5 v6.0 tailored for fast scene text detection. Our primary goal is to enhance inference speed without sacrificing significant detection accuracy, thereby enabling robust performance on resource-constrained devices like drones, closed-circuit television cameras, and other embedded systems. To achieve this, we propose key modifications to the network architecture that lighten the original backbone and improve feature aggregation: replacing standard convolution with depth-wise convolution, adopting the C2 sequence module in place of C3, employing Spatial Pyramid Pooling Global (SPPG) instead of Spatial Pyramid Pooling Fast (SPPF), and integrating a Bi-directional Feature Pyramid Network (BiFPN) into the neck. Experimental results demonstrate a remarkable 26% improvement in inference speed compared to the baseline, with only marginal reductions of 1.6% and 4.2% in mean average precision (mAP) at intersection-over-union (IoU) thresholds of 0.5 and 0.5:0.95, respectively. Our work represents a significant advancement in scene text detection, striking a balance between speed and accuracy and making it well suited for performance-constrained environments.
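The parameter saving behind replacing standard convolution with depth-wise (separable) convolution can be checked with the usual weight-count formulas; the channel and kernel sizes below are arbitrary examples, not YOLOv5ST's actual layer shapes.

```python
def conv_params(c_in, c_out, k):
    # Weight count of a standard k x k convolution (bias ignored).
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depth-wise k x k (one filter per input channel) followed by a
    # 1 x 1 point-wise convolution that mixes channels.
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 128, 3)
separable = depthwise_separable_params(64, 128, 3)
```

For this example layer, the separable form needs roughly 8.8k weights against 73.7k for the standard convolution, which is the source of the backbone lightening.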
To remove handwritten text from a document image taken with a smartphone, an intelligent removal method was proposed that combines dewarping with a Fully Convolutional Network with Atrous Convolution and Atrous Spatial Pyramid Pooling (FCN-AC-ASPP). For a picture taken by a smartphone, first, the image is transformed into a regular image by the dewarping algorithm. Second, the FCN-AC-ASPP is used to classify printed text and handwritten text. Lastly, the handwritten text is removed by a simple algorithm. Experiments show that the classification accuracy of the FCN-AC-ASPP is better than that of FCN, DeepLabV3+, and FCN-AC. For handwritten text removal, the method combining dewarping and FCN-AC-ASPP is superior to FCN-AC-ASPP alone.
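The receptive-field benefit of the atrous (dilated) convolution used in FCN-AC-ASPP follows the standard effective-kernel formula; the sketch below illustrates it for a 3x3 kernel at several dilation rates.

```python
def effective_kernel(k, dilation):
    # A k x k kernel with dilation d covers k + (k - 1) * (d - 1) positions
    # per axis, enlarging the receptive field with no extra weights.
    return k + (k - 1) * (dilation - 1)

# ASPP-style branches sample context at several dilation rates in parallel.
spans = [effective_kernel(3, d) for d in (1, 2, 4)]
```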
To promote behavioral change among adolescents in Zambia, the National HIV/AIDS/STI/TB Council, in collaboration with UNICEF, developed the Zambia U-Report platform. This platform provides young people with improved access to information on various sexual reproductive health topics through Short Messaging Service (SMS) messages. Over the years, the platform has accumulated millions of incoming and outgoing messages, which need to be categorized into key thematic areas for better tracking of sexual reproductive health knowledge gaps among young people. The current manual categorization process is inefficient and time-consuming, and this study aims to automate it for improved analysis using text-mining techniques. First, the study investigates the current text message categorization process and identifies a list of categories adopted by counselors over time, which are then used to build and train a categorization model. Second, the study presents a proof-of-concept tool that automates the categorization of U-Report messages into key thematic areas using the developed model. Finally, it compares the performance and effectiveness of the proof-of-concept tool against the manual system. The study used a dataset comprising 206,625 text messages. The current process would take roughly 2.82 years to categorize this dataset, whereas the trained SVM model requires only 6.4 minutes while achieving an accuracy of 70.4%, demonstrating that the automated method is significantly faster, more scalable, and more consistent than the current manual categorization. These advantages make the SVM model a more efficient and effective tool for categorizing large unstructured text datasets.
These results and the proof-of-concept tool developed demonstrate the potential for enhancing the efficiency and accuracy of message categorization on the Zambia U-Report platform and other similar text-message-based platforms.
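The reported timings can be sanity-checked with simple throughput arithmetic, assuming 365-day years; these are back-of-envelope figures derived only from the numbers quoted above.

```python
MESSAGES = 206_625
manual_minutes = 2.82 * 365 * 24 * 60   # reported manual effort, in minutes
svm_minutes = 6.4                       # reported SVM batch time, in minutes

manual_rate = MESSAGES / manual_minutes  # messages per minute, manual
svm_rate = MESSAGES / svm_minutes        # messages per minute, SVM
speedup = manual_minutes / svm_minutes   # overall speedup factor
```

The manual process handles well under one message per minute, while the SVM batch run works out to tens of thousands of messages per minute, a speedup on the order of 10^5.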
The assessment of translation quality in political texts is primarily based on achieving effective communication. Throughout the translation process, it is essential not only to convey the original content accurately but also to transform the structural mechanisms of the source language effectively. In the translation reconstruction of political texts, various textual cohesion methods are often employed, with conjunctions serving as a primary means of semantic coherence within text units.
Objective To discuss how social media data can be used for post-marketing drug safety monitoring in China as soon as possible by systematically reviewing text mining applications, and to provide new ideas and methods for pharmacovigilance. Methods Relevant domestic and foreign literature was used to explore text classification based on machine learning, text mining based on deep learning (neural networks), and adverse drug reaction (ADR) terminology. Results and Conclusion Text classification based on traditional machine learning mainly includes the support vector machine (SVM) algorithm, the naive Bayesian (NB) classifier, decision trees, the hidden Markov model (HMM), and bidirectional encoder representations from transformers (BERT). The main neural networks for deep-learning-based text mining are the convolutional neural network (CNN), the recurrent neural network (RNN), and long short-term memory (LSTM). ADR terminology standardization tools mainly include the Medical Dictionary for Regulatory Activities (MedDRA), WHODrug, and the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT).
To achieve good results with convolutional neural networks (CNN) in text classification tasks, a term-based pooling operation for CNNs is proposed. First, the convolution results of several convolution kernels are combined by this method, and the combined results are then pooled; three sorts of CNN models (named TBCNN, MCT-CNN, and MMCT-CNN, respectively) are constructed, and the corresponding algorithmic ideas are detailed on this basis. Second, experiments and analyses are designed to show the effects of three key parameters (convolution kernel, combination kernel number, and word embedding) on the three CNN models and to further demonstrate the effectiveness of the proposed models. The experimental results show that, compared with the traditional method of text classification with CNNs, the term-based pooling method is not only feasible but also clearly superior in performance.
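One minimal reading of term-based pooling, combining the outputs of several convolution kernels before pooling rather than pooling each separately, is sketched below; the position-wise sum used as the combination rule is an assumption, since the paper defines three model-specific variants.

```python
def combine_then_pool(feature_maps):
    # Merge the feature maps of several convolution kernels position-wise
    # (assumed rule: elementwise sum), then max-pool the combined map.
    # Traditional CNN text classifiers instead pool each map on its own.
    length = len(feature_maps[0])
    combined = [sum(fm[i] for fm in feature_maps) for i in range(length)]
    return max(combined)

pooled = combine_then_pool([[0.1, 0.9, 0.3], [0.4, 0.2, 0.8]])
```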
Text makes up most of the information resources on the Internet, which places ever-higher requirements on the accuracy of text classification. Therefore, in this manuscript, we first design a hybrid model of bidirectional encoder representations from transformers, hierarchical attention networks, and dilated convolutional networks (BERT_HAN_DCN), based on the BERT pre-trained model with its superior ability to extract features. The advantages of the HAN model and the DCN model are combined to gain abundant semantic information, fusing contextual semantic features with hierarchical characteristics. Second, the traditional softmax algorithm increases the learning difficulty for samples of the same class, making similar features harder to distinguish; on this basis, AM-softmax is introduced to replace the traditional softmax. Finally, the fused model is validated: the hybrid model shows superior accuracy and F1-score on two datasets, and the experimental analysis shows that it outperforms general single models such as HAN and DCN based on the BERT pre-trained model. Moreover, the improved AM-softmax network model is superior to the general softmax network model.
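The AM-softmax replacement mentioned above applies an additive margin m and a scale s to the target-class cosine before normalization; the sketch below uses commonly cited default values for s and m, which may differ from the paper's settings.

```python
import math

def am_softmax_prob(cosines, target, s=30.0, m=0.35):
    # Additive-margin softmax: subtract the margin m from the target-class
    # cosine, rescale every logit by s, then normalize as usual. The margin
    # forces the target cosine to beat the others by at least m.
    logits = [s * (c - m) if j == target else s * c
              for j, c in enumerate(cosines)]
    exps = [math.exp(z) for z in logits]
    return exps[target] / sum(exps)

p_margin = am_softmax_prob([0.8, 0.5, 0.1], target=0)
p_plain = am_softmax_prob([0.8, 0.5, 0.1], target=0, m=0.0)
```

With the margin active, the same correct prediction receives a lower probability, so training keeps pushing same-class features closer together.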
With the explosive growth of Internet text information, the task of text classification has become more important. As a part of text classification, Chinese news text classification also plays an important role. In public security work, the classification of public opinion news is an important topic: effective and accurate classification of public opinion news is a necessary prerequisite for relevant departments to grasp the state of public opinion and control its trend in time. This paper introduces a combined convolutional neural network text classification model based on word2vec and an improved TF-IDF. First, word vectors are trained with the word2vec model; then the weight of each word is calculated using the improved TF-IDF algorithm based on class frequency variance, and the word vectors and weights are combined to construct the text vector representation. Finally, the combined convolutional neural network is trained and tested on the THUCNews dataset. The results show that the classification performance of this model is better than that of the traditional Text-RNN model, the traditional Text-CNN model, and the word2vec-CNN model. The test accuracy is 97.56%, with precision, recall, and F1-score each at 97%.
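One plausible reading of the "improved TF-IDF based on class frequency variance" is weighting terms by the variance of their frequency across classes, so class-concentrated terms score higher; the formula below is an assumed interpretation, not the paper's exact definition.

```python
def class_variance_weight(term_freq_per_class):
    # Population variance of a term's frequency across the classes: terms
    # concentrated in one class vary strongly and are more discriminative;
    # terms spread evenly across classes score zero.
    n = len(term_freq_per_class)
    mean = sum(term_freq_per_class) / n
    return sum((f - mean) ** 2 for f in term_freq_per_class) / n

# "stock" concentrated in a finance class vs. "today" spread evenly.
w_stock = class_variance_weight([40, 2, 1, 1])
w_today = class_variance_weight([11, 11, 11, 11])
```

Such a weight would then multiply each word vector before the weighted vectors are combined into the text representation.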
With the development of large-scale text processing, the dimension of the text feature space has become larger and larger, which has added many difficulties to natural language processing. How to reduce this dimension has become a practical problem in the field. Here we present two clustering methods, i.e., concept association and concept abstraction, to achieve this goal. The first refers to keyword clustering based on the co-occurrence …
Reading and writing are the main methods of interaction with web content. Text simplification tools are helpful for people with cognitive impairments, new language learners, and children, who may find it difficult to understand complex web content. Text simplification is the process of changing complex text into more readable and understandable text. Recent approaches to text simplification have adopted the machine translation paradigm to learn simplification rules from a parallel corpus of complex and simple sentences. In this paper, we propose two models based on the transformer, an encoder-decoder structure that achieves state-of-the-art (SOTA) results in machine translation. The training process for our model includes three steps: preprocessing the data using a subword tokenizer, training the model and optimizing it using the Adam optimizer, and then using the model to decode the output. The first model uses the transformer only, and the second integrates Bidirectional Encoder Representations from Transformers (BERT) as the encoder to improve training time and results. The performance of the transformer-only model was evaluated using the Bilingual Evaluation Understudy (BLEU) score and recorded 53.78 on the WikiSmall dataset. The experiment on the second model, integrated with BERT, shows that the validation loss decreased much faster than for the model without BERT. However, the BLEU score was small (44.54), which could be due to the size of the dataset: the model was overfitting and unable to generalize well. Therefore, in the future, the second model could be tried on a larger dataset such as WikiLarge. In addition, further analysis of the models' results and the dataset was carried out using different evaluation metrics to understand their performance.
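The BLEU scores quoted above rest on clipped n-gram precision and a brevity penalty; the sketch below shows a simplified unigram-only version (real BLEU geometrically averages n-gram orders 1 through 4 and supports multiple references).

```python
import math
from collections import Counter

def unigram_bleu(candidate, reference):
    # Clipped unigram precision: each candidate word counts at most as
    # often as it appears in the reference. The brevity penalty discounts
    # candidates shorter than the reference.
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    clipped = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    precision = clipped / len(cand)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

score = unigram_bleu("the cat sat on the mat", "the cat sat on a mat")
```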
Textual Emotion Analysis (TEA) aims to extract and analyze user emotional states in texts. Various Deep Learning (DL) methods have developed rapidly and have proven successful in many fields such as audio, image, and natural language processing. This trend has drawn an increasing number of researchers away from traditional machine learning toward DL for their scientific research. In this paper, we provide an overview of TEA based on DL methods. After introducing the background of emotion analysis, including the definition of emotion, emotion classification methods, and application domains of emotion analysis, we summarize DL technology and word/sentence representation learning methods. We then categorize existing TEA methods based on text structures and linguistic types: text-oriented monolingual methods, text-conversation-oriented monolingual methods, text-oriented cross-linguistic methods, and emoji-oriented cross-linguistic methods. We close by discussing challenges in emotion analysis and future research trends. We hope that our survey will assist readers in understanding the relationship between TEA and DL methods while also advancing TEA development.
Handwriting recognition is a challenge that interests many researchers around the world. Handwritten Arabic script in particular presents many obstacles that remain to be overcome, given its complex form, its number of character forms exceeding 100, and its cursive nature. Over the past few years, good results have been obtained, but at a high cost in memory and execution time. In this paper we propose to improve the capacity of the bidirectional gated recurrent unit (BGRU) to recognize Arabic text. The advantage of using BGRUs is execution time, compared with other methods that can achieve a high success rate but are expensive in terms of time and memory. To test the recognition capacity of the BGRU, the proposed architecture is composed of 6 convolutional neural network (CNN) blocks for feature extraction and 1 BGRU plus 2 dense layers for learning and testing. The experiment is carried out on the entire Institut für Nachrichtentechnik/École Nationale d'Ingénieurs de Tunis (IFN/ENIT) database without any preprocessing or data selection. The obtained results show the ability of BGRUs to recognize handwritten Arabic script.
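The bidirectional wiring of a BGRU layer, running the sequence forward and backward and concatenating the two hidden states per step, can be sketched with a bare recurrence; the GRU gating itself is deliberately omitted here, so this shows only the bidirectional structure, not a full BGRU cell.

```python
def recur(seq, w=0.5):
    # Bare recurrence h_t = w * h_{t-1} + x_t standing in for a GRU cell
    # (update/reset gates omitted for brevity).
    h, states = 0.0, []
    for x in seq:
        h = w * h + x
        states.append(h)
    return states

def bidirectional(seq):
    # Run the recurrence over the sequence in both directions and pair the
    # hidden states per time step, as a bidirectional GRU layer does.
    fwd = recur(seq)
    bwd = recur(seq[::-1])[::-1]
    return list(zip(fwd, bwd))

states = bidirectional([1.0, 0.0, 2.0])
```

In the proposed architecture, the input sequence would be the feature columns produced by the six CNN blocks.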
Funding (DSOCDBN-STC study): Princess Nourah bint Abdulrahman University Researchers Supporting Project number PNURSP2022R263, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the authors also thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work under Grant Code 22UQU4340237DSR40.
Funding (Yangzhou historical-district study): the Open Project of the China Grand Canal Research Institute, Yangzhou University (DYH202211), and the Jiangsu Provincial Social Science Applied Research Excellent Project (22SYB-053).
Funding (AATS-HTDDNN study): Princess Nourah bint Abdulrahman University Researchers Supporting Project Number PNURSP2022R281, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the Deanship of Scientific Research at Umm Al-Qura University, Grant Code 22UQU4210118DSR33; and the Deanship of Scientific Research at Najran University, Research Groups Funding Program Grant Code NU/RG/SERC/11/7.
Funding (CVTD study): supported in part by the National Natural Science Foundation of China (Grant Number 61971078), which provided domain expertise and computational power that greatly assisted the activity; financially supported by the Chongqing Municipal Education Commission Grants for Major Science and Technology Project (KJZD-M202301901) and the Science and Technology Research Project of the Jiangxi Department of Education (GJJ2201049).
Funding (YOLOv5ST study): the National Natural Science Foundation of P.R. China (42075130) and Nari Technology Co., Ltd. (4561655965).
Funding: Sponsored by the Scientific Research Project of the Zhejiang Provincial Department of Education (Grant No. KYY-ZX-20210329).
Abstract: To remove handwritten text from an image of a document taken by a smartphone, an intelligent removal method was proposed that combines dewarping with a Fully Convolutional Network with Atrous Convolution and Atrous Spatial Pyramid Pooling (FCN-AC-ASPP). First, the image is transformed into a regular image by the dewarping algorithm. Second, the FCN-AC-ASPP is used to classify printed text and handwritten text. Lastly, the handwritten text can be removed by a simple algorithm. Experiments show that the classification accuracy of the FCN-AC-ASPP is better than that of FCN, DeepLabV3+, and FCN-AC. In terms of handwritten text removal, the method combining dewarping and FCN-AC-ASPP is superior to FCN-AC-ASPP alone.
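Atrous (dilated) convolution, the "AC" in the network above, enlarges the receptive field without adding parameters by spacing the filter taps apart. A minimal 1-D numpy sketch of the idea (the paper applies the 2-D analogue inside FCN-AC-ASPP; this is not the authors' code):

```python
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """Valid 1-D convolution with an atrous (dilation) rate:
    filter taps are spaced `dilation` samples apart."""
    k = len(w)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, w, dilation=1))  # 8 outputs, receptive field 3
print(dilated_conv1d(x, w, dilation=2))  # 6 outputs, receptive field 5
```

With dilation 2 the same 3-tap filter covers 5 input positions, which is why ASPP stacks several dilation rates to capture context at multiple scales.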
Abstract: To promote behavioral change among adolescents in Zambia, the National HIV/AIDS/STI/TB Council, in collaboration with UNICEF, developed the Zambia U-Report platform. This platform provides young people with improved access to information on various Sexual Reproductive Health topics through Short Messaging Service (SMS) messages. Over the years, the platform has accumulated millions of incoming and outgoing messages, which need to be categorized into key thematic areas for better tracking of sexual reproductive health knowledge gaps among young people. The current manual categorization process for these text messages is inefficient and time-consuming, and this study aims to automate the process for improved analysis using text-mining techniques. First, the study investigates the current text message categorization process and identifies a list of categories adopted by counselors over time, which are then used to build and train a categorization model. Second, the study presents a proof-of-concept tool that automates the categorization of U-Report messages into key thematic areas using the developed categorization model. Finally, it compares the performance and effectiveness of the proof-of-concept tool against the manual system. The study used a dataset comprising 206,625 text messages. The current process would take roughly 2.82 years to categorize this dataset, whereas the trained SVM model requires only 6.4 minutes while achieving an accuracy of 70.4%, demonstrating that the automated method is significantly faster, more scalable, and more consistent than manual categorization. These advantages make the SVM model a more efficient and effective tool for categorizing large unstructured text datasets. These results and the proof-of-concept tool demonstrate the potential for enhancing the efficiency and accuracy of message categorization on the Zambia U-Report platform and other similar text-message-based platforms.
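The classifier at the heart of the study is a linear SVM over text features. As a self-contained illustration of that technique (not the study's actual pipeline or data), here is a tiny bag-of-words linear SVM trained with the Pegasos subgradient method on invented toy messages; the vocabulary, messages, and category labels are all made up:

```python
import numpy as np

# Toy two-class message data (invented, not from the U-Report dataset).
docs = ["hiv test where", "hiv info please", "condom use question",
        "where get condom", "hiv clinic near", "condom free where"]
labels = np.array([1, 1, -1, -1, 1, -1])   # +1 = HIV info, -1 = contraception

vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

def bow(doc):
    """Bag-of-words count vector for one message."""
    v = np.zeros(len(vocab))
    for w in doc.split():
        v[idx[w]] += 1.0
    return v

X = np.array([bow(d) for d in docs])

def pegasos(X, y, lam=0.01, epochs=200):
    """Pegasos: stochastic subgradient descent on the hinge-loss SVM objective."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in range(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * w.dot(X[i]) < 1:      # margin violated: hinge subgradient step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w     # only the regularizer shrinks w
    return w

w = pegasos(X, labels)
print(np.sign(X.dot(w)))   # linearly separable toy set: predictions match labels
```

A production system would use a mature implementation (e.g. a scikit-learn linear SVM) with TF-IDF features, but the objective being optimized is the same.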
Funding: This article is a phased achievement of the 2020 research project "Research on Chinese-Russian Translation of Political Terminology Based on Corpora" (YB2020005) by CNTERM.
Abstract: The assessment of translation quality in political texts is primarily based on achieving effective communication. Throughout the translation process, it is essential not only to accurately convey the original content but also to effectively transform the structural mechanisms of the source language. In the translation reconstruction of political texts, various textual cohesion methods are often employed, with conjunctions serving as a primary means of semantic coherence within text units.
Abstract: Objective: To discuss how to use social media data for post-marketing drug safety monitoring in China as soon as possible by systematically reviewing text mining applications, and to provide new ideas and methods for pharmacovigilance. Methods: Relevant domestic and foreign literature was used to explore text classification based on machine learning, text mining based on deep learning (neural networks), and adverse drug reaction (ADR) terminology. Results and Conclusion: Text classification methods based on traditional machine learning mainly include the support vector machine (SVM) algorithm, the naive Bayes (NB) classifier, decision trees, the hidden Markov model (HMM), and bidirectional encoder representations from transformers (BERT). The main deep learning neural networks for text mining are the convolutional neural network (CNN), the recurrent neural network (RNN), and long short-term memory (LSTM). ADR terminology standardization tools mainly include the Medical Dictionary for Regulatory Activities (MedDRA), WHODrug, and the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT).
Abstract: To achieve good results with convolutional neural networks (CNNs) for the text classification task, a term-based pooling operation in CNNs is proposed. First, the convolution results of several convolution kernels are combined, and the combined results then undergo a pooling operation; three CNN models (named TBCNN, MCT-CNN, and MMCT-CNN, respectively) are constructed, and the corresponding algorithmic ideas are detailed on this basis. Second, experiments and analyses are designed to show the effects of three key parameters (convolution kernel, combination kernel number, and word embedding) on the three CNN models and to further demonstrate the effectiveness of the proposed models. The experimental results show that, compared with the traditional CNN approach to text classification, the term-based pooling method is not only feasible but also delivers clearly superior performance.
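The core idea above, combining the feature maps of several convolution kernels before pooling rather than pooling each map independently, can be sketched as follows. Kernel widths, dimensions, and the concatenation-then-max combination rule are illustrative assumptions, since the abstract does not specify the exact TBCNN/MCT-CNN/MMCT-CNN details:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, emb_dim = 12, 8
E = rng.standard_normal((seq_len, emb_dim))      # word-embedding matrix for one text

def conv_feature_map(E, kernel):
    """1-D convolution over word positions; kernel shape (width, emb_dim)."""
    width = kernel.shape[0]
    return np.array([np.sum(E[i:i + width] * kernel)
                     for i in range(E.shape[0] - width + 1)])

# Several kernels of different widths (random weights stand in for learned ones).
kernels = [rng.standard_normal((w, emb_dim)) for w in (2, 3, 4)]
maps = [conv_feature_map(E, k) for k in kernels]

# Combine the convolution results first, then pool the combined map
# (instead of max-pooling each kernel's map separately).
combined = np.concatenate(maps)          # lengths 11 + 10 + 9 = 30
feature = combined.max()                 # one pooled feature for the classifier
print(combined.shape, feature)
```

Pooling after combination lets kernels of different widths compete for the same pooled slot, which is one way multi-granularity term information can be fused.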
Funding: Fundamental Research Funds for the Central Universities, China (No. 2232018D3-17).
Abstract: Text-format information makes up most of the resources on the Internet, which places ever higher requirements on the accuracy of text classification. Therefore, in this manuscript, we first design a hybrid model, bidirectional encoder representations from transformers-hierarchical attention networks-dilated convolution networks (BERT_HAN_DCN), which is based on the BERT pre-trained model with its superior feature-extraction ability. The advantages of the HAN model and the DCN model are taken into account, helping to gain abundant semantic information and to fuse contextual semantic features with hierarchical characteristics. Second, the traditional softmax algorithm increases the learning difficulty for samples of the same class, making similar features harder to distinguish. Based on this, AM-softmax is introduced to replace the traditional softmax. Finally, the fused model is validated: it achieves superior accuracy and F1-score on two datasets, outperforming single models such as HAN and DCN built on the BERT pre-trained model. Moreover, the improved AM-softmax network model is superior to the general softmax network model.
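AM-softmax tightens class boundaries by replacing the target-class logit cos θ_y with cos θ_y − m on L2-normalized features and class weights, scaled by s. A minimal numpy sketch of the loss for one sample; s = 30 and m = 0.35 are commonly used values, not necessarily those chosen in the paper:

```python
import numpy as np

def am_softmax_loss(feat, W, y, s=30.0, m=0.35):
    """Additive-margin softmax loss for a single sample.
    feat: (d,) feature vector; W: (d, n_classes) class weights; y: target index."""
    f = feat / np.linalg.norm(feat)                 # L2-normalize the feature
    Wn = W / np.linalg.norm(W, axis=0)              # L2-normalize each class weight
    cos = f @ Wn                                    # cosine similarity to each class
    logits = s * cos
    logits[y] = s * (cos[y] - m)                    # subtract margin from target class
    logits -= logits.max()                          # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[y])

rng = np.random.default_rng(1)
feat = rng.standard_normal(16)
W = rng.standard_normal((16, 4))
loss_m = am_softmax_loss(feat, W, y=2)
loss_0 = am_softmax_loss(feat, W, y=2, m=0.0)      # plain normalized softmax
print(loss_m > loss_0)  # margin makes the same sample strictly harder
```

Because the margin penalizes the correct class during training, samples must be embedded farther from the decision boundary, which is what makes similar classes easier to separate at test time.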
Funding: This work was supported by the Ministry of Public Security technology research program (Grant No. 2020JSYJC22ok), the Fundamental Research Funds for the Central Universities (No. 2021JKF215), the Open Research Fund of the Public Security Behavioral Science Laboratory, People's Public Security University of China (2020SYS03), and the project "Police and people build/share a smart community" (PJ13-201912-0525).
Abstract: With the explosive growth of Internet text information, the task of text classification has become more important. As a part of text classification, Chinese news text classification also plays an important role. In public security work, public opinion news classification is an important topic: effective and accurate classification of public opinion news is a necessary prerequisite for relevant departments to grasp the public opinion situation and control public opinion trends in time. This paper introduces a combined convolutional neural network text classification model based on word2vec and an improved TF-IDF. First, word vectors are trained with the word2vec model; then the weight of each word is calculated using an improved TF-IDF algorithm based on class-frequency variance, and the word vectors and weights are combined to construct the text vector representation. Finally, the combined convolutional neural network is trained and tested on the THUCNews dataset. The results show that the classification effect of this model is better than that of the traditional Text-RNN model, the traditional Text-CNN model, and the word2vec-CNN model. The test accuracy is 97.56%, with precision, recall, and F1-score all at 97%.
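The representation step multiplies each word2vec vector by its TF-IDF weight before feeding the result to the CNN. The paper's class-frequency-variance improvement to TF-IDF is not reproduced here; the sketch below shows only the standard combination step, with made-up 4-dimensional vectors and weights standing in for a trained word2vec model and real TF-IDF statistics:

```python
import numpy as np

# Stand-in "word2vec" vectors (invented; a real model would supply these).
wv = {"police":  np.array([0.9, 0.1, 0.0, 0.2]),
      "opinion": np.array([0.1, 0.8, 0.3, 0.0]),
      "news":    np.array([0.2, 0.7, 0.1, 0.1])}

# Stand-in TF-IDF weights for one document (the paper refines these
# with a class-frequency-variance term).
tfidf = {"police": 1.8, "opinion": 0.9, "news": 0.4}

def doc_vector(tokens):
    """Weighted sum of word vectors, normalized by the total weight."""
    vecs = [tfidf[t] * wv[t] for t in tokens if t in wv]
    total = sum(tfidf[t] for t in tokens if t in wv)
    return np.sum(vecs, axis=0) / total

v = doc_vector(["police", "opinion", "news"])
print(v.shape)   # (4,): one fixed-size vector per document, fed to the CNN
```

Weighting before averaging lets discriminative words (high TF-IDF) dominate the document vector instead of being diluted by common words.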
Abstract: With the development of large-scale text processing, the dimension of the text feature space has become larger and larger, adding many difficulties to natural language processing. How to reduce the dimension has become a practical problem in the field. Here we present two clustering methods, concept association and concept abstract, to achieve this goal. The first refers to keyword clustering based on co-occurrence
Abstract: Reading and writing are the main ways of interacting with web content. Text simplification tools are helpful for people with cognitive impairments, new language learners, and children, who may have difficulty understanding complex web content. Text simplification is the process of changing complex text into more readable and understandable text. Recent approaches to text simplification adopt the machine-translation paradigm to learn simplification rules from a parallel corpus of complex and simple sentences. In this paper, we propose two models based on the transformer, an encoder-decoder structure that achieves state-of-the-art (SOTA) results in machine translation. The training process for our models includes three steps: preprocessing the data using a subword tokenizer, training the model and optimizing it with the Adam optimizer, then using the model to decode the output. The first model uses the transformer alone, and the second integrates Bidirectional Encoder Representations from Transformers (BERT) as the encoder to improve training time and results. The performance of the transformer-only model was evaluated using the Bilingual Evaluation Understudy (BLEU) score and reached 53.78 on the WikiSmall dataset. The experiment on the second model, integrated with BERT, shows that the validation loss decreased much faster than for the model without BERT. However, its BLEU score was lower (44.54), which could be due to the size of the dataset: the model was overfitting and unable to generalize well. Therefore, future work on the second model could involve experimenting with a larger dataset such as WikiLarge. In addition, further analysis of the models' results and the dataset was performed using different evaluation metrics to understand their performance.
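The preprocessing step relies on a subword tokenizer. BPE-style tokenizers, one common family, build their vocabulary by repeatedly merging the most frequent adjacent symbol pair in the corpus. A toy sketch of that merge loop (illustrating the general idea, not the paper's actual tokenizer; the corpus is invented):

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across the corpus, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with the fused symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if symbols[i:i + 2] == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Word-frequency corpus, each word split into characters.
words = {tuple("lower"): 5, tuple("lowest"): 2, tuple("newer"): 6}
for _ in range(3):                      # three merge steps
    words = merge_pair(words, most_frequent_pair(words))
print(words)    # frequent fragments like "wer" become single symbols
```

Each learned merge becomes a vocabulary entry, so frequent word fragments are kept whole while rare words still decompose into known subwords, which keeps the vocabulary small without out-of-vocabulary tokens.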
Funding: This work is partially supported by the National Natural Science Foundation of China under Grant Nos. 61876205 and 61877013, the Ministry of Education Humanities and Social Science project under Grant Nos. 19YJAZH128 and 20YJAZH118, the Science and Technology Plan Project of Guangzhou under Grant No. 201804010433, and the Bidding Project of the Laboratory of Language Engineering and Computing under Grant No. LEC2017ZBKT001.
Abstract: Textual Emotion Analysis (TEA) aims to extract and analyze users' emotional states in texts. Various Deep Learning (DL) methods have developed rapidly and have proven successful in many fields such as audio, image, and natural language processing. This trend has drawn increasing numbers of researchers away from traditional machine learning toward DL for their scientific research. In this paper, we provide an overview of TEA based on DL methods. After introducing the background of emotion analysis, including the definition of emotion, emotion classification methods, and application domains of emotion analysis, we summarize DL technology and word/sentence representation learning methods. We then categorize existing TEA methods by text structure and linguistic type: text-oriented monolingual methods, text-conversation-oriented monolingual methods, text-oriented cross-linguistic methods, and emoji-oriented cross-linguistic methods. We close by discussing challenges in emotion analysis and future research trends. We hope that our survey will assist readers in understanding the relationship between TEA and DL methods while also advancing TEA development.
Funding: This research was funded by the Deanship of Scientific Research of the University of Ha'il, Saudi Arabia (Project: RG-20075).
Abstract: Handwriting recognition is a challenge that interests many researchers around the world. Handwritten Arabic script in particular presents many challenges that remain to be overcome, given its complex form, its more than 100 character shapes, and its cursive nature. Over the past few years, good results have been obtained, but at a high cost in memory and execution time. In this paper, we propose to improve the capacity of the bidirectional gated recurrent unit (BGRU) to recognize Arabic text. The advantage of using BGRUs is their execution time compared with other methods that can achieve a high success rate but are expensive in terms of time and memory. To test the recognition capacity of the BGRU, the proposed architecture is composed of 6 convolutional neural network (CNN) blocks for feature extraction and 1 BGRU + 2 dense layers for learning and testing. The experiment is carried out on the entire Institut für Nachrichtentechnik/École Nationale d'Ingénieurs de Tunis (IFN/ENIT) database without any preprocessing or data selection. The results show the ability of BGRUs to recognize handwritten Arabic script.
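The recurrent layer in the architecture above processes the CNN feature sequence in both directions. As a self-contained illustration of what a bidirectional GRU computes, here is a numpy forward pass with random weights and arbitrary small dimensions (not the paper's configuration; a real BGRU also trains separate weights for the backward direction, reused here for brevity):

```python
import numpy as np

def gru_forward(X, Wz, Uz, Wr, Ur, Wh, Uh):
    """Plain GRU over a sequence X of shape (T, d); returns hidden states (T, h)."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    h = np.zeros(Uz.shape[0])
    out = []
    for x in X:
        z = sigmoid(Wz @ x + Uz @ h)              # update gate
        r = sigmoid(Wr @ x + Ur @ h)              # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
        h = (1 - z) * h + z * h_tilde
        out.append(h)
    return np.array(out)

rng = np.random.default_rng(0)
T, d, hdim = 6, 5, 4
X = rng.standard_normal((T, d))
# Alternating input-to-hidden (hdim, d) and hidden-to-hidden (hdim, hdim) weights.
params = [rng.standard_normal((hdim, d)) if i % 2 == 0
          else rng.standard_normal((hdim, hdim)) for i in range(6)]

fwd = gru_forward(X, *params)
bwd = gru_forward(X[::-1], *params)[::-1]       # same cell run right-to-left
H = np.concatenate([fwd, bwd], axis=1)          # bidirectional output
print(H.shape)   # (6, 8): forward and backward states concatenated per timestep
```

Concatenating both directions gives each timestep context from the whole line, which matters for cursive Arabic, where a glyph's shape depends on both its neighbors.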