Abstract: In this paper, we study an automatic pattern abstraction and recognition method for large-scale database systems based on natural language processing. In a distributed database, data spread across different nodes, and even across regions, can be recognized through the network connections between the nodes. Because the schema design of such a database usually contains many forms, and in order to reduce data redundancy, we combine NLP theory to optimize the traditional method. Experimental analysis and simulation confirm the correctness of our method.
Abstract: In recent years, handwriting recognition for indigenous languages has gained significant interest among research communities. Recent developments in artificial intelligence (AI), natural language processing (NLP), and computational linguistics (CL) are useful for the analysis of regional low-resource languages. Automatic lexical tasks can be extended to various NLP applications, as is apparent from the availability of effective machine recognition models and open-access handwritten databases. Arabic is a widely spoken Semitic language written with the cursive Arabic alphabet from right to left, and Arabic handwritten character recognition (HCR) is a crucial process in optical character recognition. In this view, this paper presents an effective Computational Linguistics with Deep Learning based Handwriting Recognition and Speech Synthesizer (CLDL-THRSS) for indigenous languages. The presented CLDL-THRSS model involves two stages of operation, namely automated handwriting recognition and speech synthesis. The automated handwriting recognition procedure involves preprocessing, segmentation, feature extraction, and classification, and a Capsule Network (CapsNet) based feature extractor is employed for the recognition of handwritten Arabic characters. For optimal hyperparameter tuning, the cuckoo search (CS) optimization technique is included to tune the parameters of the CapsNet method. In addition, a deep neural network with hidden Markov model (DNN-HMM) is employed for the automatic speech synthesizer. To validate the performance of the proposed CLDL-THRSS model, a detailed experimental validation was carried out and the outcomes were investigated in terms of different measures. The experimental outcomes show that the CLDL-THRSS technique outperforms the compared methods.
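As a rough illustration of the cuckoo-search hyperparameter tuning mentioned above, the following sketch runs a simplified CS loop over a toy two-parameter search space. The objective surface, parameter ranges, and Lévy-flight step are illustrative assumptions standing in for training and validating the CapsNet recognizer; nothing here reproduces the paper's actual setup.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical search space: (learning_rate, dropout) for a CapsNet-like recognizer.
LOW, HIGH = np.array([1e-4, 0.0]), np.array([1e-1, 0.5])

def objective(params):
    """Stand-in for the validation error of a model trained with `params`."""
    lr, dropout = params
    return (np.log10(lr) + 2.5) ** 2 + (dropout - 0.2) ** 2  # toy error surface

def levy_step(size, beta=1.5):
    """Simplified Lévy-flight step used to propose new candidate nests."""
    sigma = (math.gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(n_nests=15, n_iter=50, pa=0.25):
    nests = rng.uniform(LOW, HIGH, size=(n_nests, 2))
    fitness = np.array([objective(n) for n in nests])
    best = nests[fitness.argmin()].copy()
    for _ in range(n_iter):
        # Propose new solutions around the best nest via Lévy flights.
        new = np.clip(nests + 0.01 * levy_step((n_nests, 2)) * (nests - best), LOW, HIGH)
        new_fit = np.array([objective(n) for n in new])
        improved = new_fit < fitness
        nests[improved], fitness[improved] = new[improved], new_fit[improved]
        # Abandon a fraction `pa` of the worst nests and re-initialise them.
        worst = fitness.argsort()[-int(pa * n_nests):]
        nests[worst] = rng.uniform(LOW, HIGH, size=(len(worst), 2))
        fitness[worst] = [objective(n) for n in nests[worst]]
        best = nests[fitness.argmin()].copy()
    return best, fitness.min()

print(cuckoo_search())
```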
Abstract: Arabic Sign Language recognition is an emerging field of research. Previous attempts at automatic vision-based recognition of Arabic Sign Language mainly focused on finger spelling and recognizing isolated gestures. In this paper we report the first continuous Arabic Sign Language recognition system, built on existing research in feature extraction and pattern recognition. The development of the presented work required collecting a continuous Arabic Sign Language database, which we designed and recorded in cooperation with a sign language expert. We intend to make the collected database available to the research community. Our system, based on spatio-temporal feature extraction and hidden Markov models, achieves an average word recognition rate of 94%, bearing in mind the use of a high-perplexity vocabulary and an unrestrictive grammar. We compare our proposed work against existing sign language techniques based on accumulated image difference and motion estimation. The experimental results show that the proposed work outperforms existing solutions in terms of recognition accuracy.
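To make the spatio-temporal-features-plus-HMM idea concrete, here is a minimal sketch that pools accumulated frame differences into a coarse motion descriptor, quantizes it, and scores candidate word HMMs with the forward algorithm. The feature layout, the discrete HMM parameters, and the isolated-word scoring are simplifying assumptions; the paper's continuous recognizer, vocabulary, and training procedure are not reproduced.

```python
import numpy as np

def motion_features(frames, grid=4):
    """Accumulated image-difference features: absolute frame differences,
    mean-pooled over a grid x grid layout (a simplified spatio-temporal descriptor)."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))            # (T-1, H, W)
    t, h, w = diffs.shape
    ch, cw = h // grid, w // grid
    trimmed = diffs[:, : grid * ch, : grid * cw]
    cells = trimmed.reshape(t, grid, ch, grid, cw).mean(axis=(2, 4))
    return cells.reshape(t, grid * grid)                              # (T-1, grid*grid)

def quantize(feats, n_symbols=8):
    """Map each frame's mean motion energy to a discrete HMM observation symbol."""
    energy = feats.mean(axis=1)
    edges = np.quantile(energy, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(energy, edges)

def log_forward(obs, log_pi, log_A, log_B):
    """Log-domain forward algorithm: log P(obs | word HMM)."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

# Toy example: two hypothetical word HMMs scored against one stand-in video clip.
rng = np.random.default_rng(1)
frames = rng.integers(0, 256, size=(30, 64, 64))
obs = quantize(motion_features(frames))

def random_hmm(n_states=3, n_symbols=8):
    pi = rng.dirichlet(np.ones(n_states))
    A = rng.dirichlet(np.ones(n_states), size=n_states)
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)
    return np.log(pi), np.log(A), np.log(B)

word_models = {"book": random_hmm(), "house": random_hmm()}
scores = {w: log_forward(obs, *m) for w, m in word_models.items()}
print(max(scores, key=scores.get), scores)
```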
Funding: Supported by the Science and Technology Department of Sichuan Province (No. 2021YFG0156).
Abstract: Generating diverse and factual text is challenging and is receiving increasing attention. By sampling from the latent space, variational autoencoder-based models have recently enhanced the diversity of generated text. However, existing research predominantly depends on summarization models to provide paragraph-level semantic information for enhancing factual correctness; the challenge lies in effectively generating factual text with sentence-level variational autoencoder-based models. In this paper, a novel model called the fact-aware conditional variational autoencoder is proposed to balance the factual correctness and diversity of generated text. Specifically, our model encodes the input sentences and uses them as facts to build a conditional variational autoencoder network. By training this network, the model is enabled to generate text based on the input facts. Building upon this foundation, the input text is passed to a discriminator along with the generated text. Through adversarial training, the model is encouraged to generate text that is indistinguishable to the discriminator, thereby enhancing the quality of the generated text. To further improve factual correctness, inspired by natural language inference systems, an entailment recognition task is introduced and trained together with the discriminator via multi-task learning. Moreover, based on the entailment recognition results, a penalty term is added to the loss of our model, forcing the generator to produce text consistent with the facts. Experimental results demonstrate that, compared with competitive models, our model achieves substantial improvements in both the quality and factual correctness of the generated text, while sacrificing only a small amount of diversity. Furthermore, under a comprehensive evaluation of diversity and quality metrics, our model also demonstrates the best performance.
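The conditioning-on-facts idea can be sketched as a minimal conditional VAE in which the input-sentence ("fact") embedding conditions both the posterior and the decoder. All dimensions and the reconstruction target are illustrative assumptions; the adversarial discriminator and the entailment penalty from the abstract are only indicated by a comment, not implemented.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactConditionedVAE(nn.Module):
    """Minimal conditional VAE: the 'fact' (input sentence) embedding conditions
    both the approximate posterior and the decoder, CVAE-style."""

    def __init__(self, emb_dim=256, latent_dim=32, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(emb_dim * 2, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim + emb_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, emb_dim))

    def forward(self, target_emb, fact_emb):
        h = self.enc(torch.cat([target_emb, fact_emb], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        recon = self.dec(torch.cat([z, fact_emb], dim=-1))
        return recon, mu, logvar

def cvae_loss(recon, target_emb, mu, logvar, kl_weight=0.1):
    """Reconstruction + KL; an entailment-based penalty term could be added here."""
    rec = F.mse_loss(recon, target_emb)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl_weight * kl

# Toy usage with random vectors standing in for sentence-encoder outputs.
model = FactConditionedVAE()
facts = torch.randn(8, 256)
targets = torch.randn(8, 256)
recon, mu, logvar = model(targets, facts)
loss = cvae_loss(recon, targets, mu, logvar)
loss.backward()
print(float(loss))
```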
Abstract: Given the increasing volume of text documents, automatic summarization is an important tool for the quick and optimal utilization of such sources. Automatic summarization is a text compression process that produces a shorter document in order to give quick access to the important goals and main features of the input document. In this study, a novel method is introduced for selective text summarization using a genetic algorithm and the generation of repetitive patterns. One of the important features of the proposed summarization is that it identifies and extracts the relationships between the main features of the input text and creates repetitive patterns, in order to produce and optimize the feature vector of the main document for the summary, in contrast to previous methods. Attempts were made to encompass all the main parameters of the summary text, including an unambiguous summary with the highest precision, continuity, and consistency. To investigate the efficiency of the proposed algorithm, the results of the study were evaluated with respect to the precision and recall criteria. The evaluation showed that the method optimizes the dimensions of the features and generates a sequence of summary sentences that is most consistent with the main goals and features of the input document.
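A toy genetic algorithm for extractive summarization, in the spirit of the abstract, is sketched below: a bit-mask over sentences is evolved to maximise coverage of frequent terms while penalising over-long summaries. The fitness function, crossover/mutation operators, and the miniature document are all invented for illustration and do not reflect the paper's actual features or repetitive-pattern mechanism.

```python
import random
import re
from collections import Counter

random.seed(0)

DOC = ("Automatic summarization compresses a document. "
       "Summarization selects the most informative sentences. "
       "Genetic algorithms evolve candidate summaries. "
       "The weather today is unrelated to the topic. "
       "Fitness rewards coverage of frequent terms and penalises length.")
SENTS = [s.strip() for s in re.split(r"(?<=\.)\s+", DOC) if s.strip()]
TERMS = Counter(w.lower() for s in SENTS for w in re.findall(r"\w+", s))

def fitness(mask, max_sents=2):
    """Coverage of frequent document terms minus a penalty for over-long summaries."""
    chosen = [s for s, m in zip(SENTS, mask) if m]
    covered = {w.lower() for s in chosen for w in re.findall(r"\w+", s)}
    coverage = sum(TERMS[w] for w in covered)
    return coverage - 5 * max(0, len(chosen) - max_sents)

def evolve(pop_size=30, generations=40, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in SENTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(SENTS))
            child = a[:cut] + b[cut:]                                       # one-point crossover
            child = [1 - g if random.random() < p_mut else g for g in child]  # mutation
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return [s for s, m in zip(SENTS, best) if m]

print(" ".join(evolve()))
```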
Funding: Supported by the National Natural Science Foundation of China (Nos. 60435020, 60575042 and 60503072).
Abstract: This paper proposes a new way to improve the performance of a dependency parser: subdividing verbs according to their grammatical functions and integrating the information about verb subclasses into a lexicalized parsing model. Firstly, the scheme of verb subdivision is described. Secondly, a maximum entropy model is presented to distinguish verb subclasses. Finally, a statistical parser is developed to evaluate the verb subdivision. Experimental results indicate that the use of verb subclasses has a positive influence on parsing performance.
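Since a maximum entropy classifier is equivalent to multinomial logistic regression over indicator features, the verb-subclass step can be sketched as follows. The feature templates, the subclass labels, and the training instances are made up for illustration and are not the paper's feature set.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training instances: contextual features of a verb token and the
# grammatical-function subclass it should receive (labels invented for this sketch).
train = [
    ({"word": "eat",   "prev_pos": "PN", "next_pos": "NN", "has_object": True},  "V-pred"),
    ({"word": "think", "prev_pos": "PN", "next_pos": "IN", "has_object": False}, "V-comp"),
    ({"word": "run",   "prev_pos": "PN", "next_pos": ".",  "has_object": False}, "V-pred"),
    ({"word": "say",   "prev_pos": "NN", "next_pos": "CS", "has_object": False}, "V-comp"),
]
X_dicts, y = zip(*train)

vec = DictVectorizer()
X = vec.fit_transform(X_dicts)

# Multinomial logistic regression is the standard ML realisation of a MaxEnt model.
clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)

test = {"word": "claim", "prev_pos": "NN", "next_pos": "CS", "has_object": False}
print(clf.predict(vec.transform([test]))[0])
```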
Funding: This research was partially supported by Zhejiang Laboratory (2020AA3AB05) and the Fundamental Research Funds for the Provincial Universities of Zhejiang (RF-A2020007).
Abstract: As a representative technique in natural language processing (NLP), named entity recognition is used in many tasks, such as dialogue systems, machine translation and information extraction. In dialogue systems, there is a common case for named entity recognition in which many entities are composed of numbers and are segmented so that their parts appear in different places. For example, in a multi-round dialogue system, a phone number is likely to be divided into several parts, because the phone number is long and is often emphasized. In this paper, an entity consisting of numbers is called a number entity. The discontinuous positions of number entities arise for several reasons; we identify two from real-world dialogue systems. The first is the repetitive confirmation of different components of a number entity, and the second is the interjection of mood (filler) words. The extraction of number entities is quite useful in many tasks, such as user information completion and service request correction. However, existing entity extraction methods cannot extract entities consisting of discontinuous entity blocks. To address these problems, we propose a comprehensive method for number entity recognition that is capable of extracting number entities in multi-round dialogue systems. We conduct extensive experiments on a real-world dataset, and the experimental results demonstrate the high performance of our method.
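The two phenomena named in the abstract (repeated confirmation and interjected filler words) can be illustrated with a purely rule-based fragment merger. This is only a sketch under assumed dialogue data and a hand-picked filler list; it is not the paper's recognition model.

```python
import re

# Hypothetical multi-turn dialogue in which a phone number is split across turns
# and interrupted by confirmations and filler ("mood") words.
TURNS = [
    "my number is 138",
    "uh let me see",
    "1234",
    "sorry, 1234 yes",
    "5678",
]
FILLERS = {"uh", "um", "well", "yes", "okay", "sorry", "let", "me", "see"}

def extract_number_entity(turns, min_len=7):
    """Concatenate digit fragments from consecutive turns, ignoring filler-only turns
    and dropping a fragment repeated for confirmation, until the number looks complete."""
    digits, seen = [], set()
    for turn in turns:
        words = re.findall(r"[a-z]+", turn.lower())
        frags = re.findall(r"\d+", turn)
        if not frags and all(w in FILLERS for w in words):
            continue                      # pure filler turn, keep collecting
        for frag in frags:
            if frag in seen:
                continue                  # repeated confirmation of an earlier block
            seen.add(frag)
            digits.append(frag)
    number = "".join(digits)
    return number if len(number) >= min_len else None

print(extract_number_entity(TURNS))       # -> 13812345678
```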
Funding: Supported by the Special Research Foundation for Young Teachers of Sun Yat-sen University (Grant No. 2000-3161101) and the Humanity and Social Science Youth Foundation of the Ministry of Education of China (Grant No. 08JC870013).
Abstract: Purpose: The purpose of the study is to explore the potential of natural language processing (NLP) and machine learning (ML) techniques and to find a feasible strategy and effective approach to fulfill the NER task for Web-oriented person-specific information extraction. Design/methodology/approach: An SVM-based multi-classification approach, combined with a set of rich NLP features derived from state-of-the-art NLP techniques, is proposed to fulfill the NER task. A group of experiments was designed to investigate the influence of various NLP-based features, especially the semantic features, on the performance of the system. Optimal parameter settings for the SVM models, including the kernel function, the margin parameter and the context window size, were also explored through experiments. Findings: The SVM-based multi-classification approach proved effective for the NER task. This work shows that NLP-based features are of great importance in data-driven NE recognition, particularly the semantic features. The study indicates that a higher-order kernel function may not be desirable for this specific classification problem in practical applications; the simple linear-kernel SVM model performed better in this case. Moreover, the modified SVM models with uneven margin parameters are more general and flexible, and proved to handle the imbalanced-data problem better. Research limitations/implications: The SVM-based approach to the NER problem has only been proved effective on limited experimental data. Further research needs to be conducted on large batches of real Web data. In addition, the performance of the NER system needs to be tested when incorporated into a complete IE framework. Originality/value: The specially designed experiments make it feasible to fully explore the characteristics of the data and obtain the optimal parameter settings for the NER task, leading to preferable recall, precision and F1 measures. The overall system performance (F1 value) for all types of named entities reaches above 88.6%, which meets the requirements of practical applications.
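A compact sketch of token-level NER as SVM multi-classification, using a linear kernel in line with the abstract's finding, is given below. The feature templates and the toy labelled sentence are invented, and class_weight="balanced" is only a common stand-in for the uneven-margin SVM variant the paper describes.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

def token_features(tokens, i):
    """Simple word-level NLP features; the paper's richer semantic features are omitted."""
    w = tokens[i]
    return {
        "word": w.lower(),
        "is_title": w.istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "<EOS>",
        "suffix2": w[-2:].lower(),
    }

# Toy labelled sentence: person / organisation / other.
sentence = ["John", "Smith", "joined", "Google", "last", "year"]
labels   = ["PER",  "PER",   "O",      "ORG",    "O",    "O"]

X_dicts = [token_features(sentence, i) for i in range(len(sentence))]
vec = DictVectorizer()
X = vec.fit_transform(X_dicts)

# Linear-kernel SVM, one-vs-rest multi-class; balanced class weights approximate
# uneven-margin handling of imbalanced named-entity classes.
clf = LinearSVC(class_weight="balanced", C=1.0)
clf.fit(X, labels)

test = ["Mary", "works", "at", "Microsoft"]
pred = clf.predict(vec.transform([token_features(test, i) for i in range(len(test))]))
print(list(zip(test, pred)))
```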
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R281), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would also like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work under Grant Code 22UQU4331004DSR10.
Abstract: Computational linguistics is an engineering-based scientific discipline. It deals with understanding written and spoken language from a computational viewpoint. The domain also helps construct artefacts that are useful for processing and producing a language, either in bulk or in a dialogue setting. Named Entity Recognition (NER) is a fundamental task in the data extraction process. It concentrates on identifying and labelling the atomic components of texts and grouping them under different entities, such as organizations, people, places, and times; the NER mechanism can also identify and extract further entity types as per the requirements. The significance of the NER mechanism has been well established in Natural Language Processing (NLP) tasks, and various research investigations have been conducted to develop novel NER methods. The conventional ways of managing the task range from rule-based and hand-crafted feature-based Machine Learning (ML) techniques to Deep Learning (DL) techniques. In this context, the current study introduces a novel Dart Games Optimizer with Hybrid Deep Learning-Driven Computational Linguistics (DGOHDL-CL) model for NER. The presented DGOHDL-CL technique aims to determine and label the atomic components of texts as a collection of named entities. In the presented DGOHDL-CL technique, the word embedding process is executed in the initial stage with the help of the word2vec model. For the NER mechanism, the Convolutional Gated Recurrent Unit (CGRU) model is employed. At last, the DGO technique is used as a hyperparameter tuning strategy for the CGRU algorithm to boost the NER outcomes. No earlier studies have integrated the DGO mechanism with the CGRU model for NER. To exhibit the superiority of the proposed DGOHDL-CL technique, a widespread simulation analysis was executed on two datasets, CoNLL-2003 and OntoNotes 5.0. The experimental outcomes establish the promising performance of the DGOHDL-CL technique over other models.
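A compact convolution-plus-GRU token tagger in the spirit of the CGRU component is sketched below. The vocabulary size, tag set, and the random batch are illustrative assumptions; the word2vec initialisation is only indicated by a comment, and the Dart Games Optimizer tuning step is not reproduced.

```python
import torch
import torch.nn as nn

class ConvGRUTagger(nn.Module):
    """Token tagger: embeddings -> 1D convolution (local context) -> bidirectional GRU
    (sequence context) -> per-token tag scores."""

    def __init__(self, vocab_size=5000, emb_dim=100, conv_dim=128, hidden=128, n_tags=9):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)     # could be initialised from word2vec
        self.conv = nn.Conv1d(emb_dim, conv_dim, kernel_size=3, padding=1)
        self.gru = nn.GRU(conv_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids):                        # (batch, seq_len)
        x = self.emb(token_ids)                          # (batch, seq_len, emb_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))     # (batch, conv_dim, seq_len)
        x, _ = self.gru(x.transpose(1, 2))               # (batch, seq_len, 2*hidden)
        return self.out(x)                               # (batch, seq_len, n_tags)

# One toy training step with random data standing in for a CoNLL-style batch.
model = ConvGRUTagger()
tokens = torch.randint(0, 5000, (4, 20))
tags = torch.randint(0, 9, (4, 20))
logits = model(tokens)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 9), tags.reshape(-1))
loss.backward()
print(float(loss))
```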
Abstract: To address the problem that most current named entity recognition (NER) models only use character-level information for encoding and lack the extraction of hierarchical information from text, a Chinese NER (CNER) model (CMH) that fuses multi-granularity linguistic knowledge with hierarchical information is proposed. First, the text is encoded with a model pre-trained with multi-granularity linguistic knowledge, so that the model can capture both fine-grained and coarse-grained linguistic information and thus better represent the corpus. Second, the ON-LSTM (Ordered Neurons Long Short-Term Memory network) model is used to extract hierarchical information, exploiting the hierarchical structure of the text itself to strengthen the temporal relations between encodings. Finally, at the decoding end, the word segmentation information of the text is incorporated and the entity recognition problem is transformed into a table-filling problem, in order to better resolve entity overlap and obtain more accurate recognition results. In addition, to address the poor transferability of current models across domains, the idea of universal entity recognition is proposed: by filtering universal entity types from multiple domains, a universal NER dataset, MDNER (Multi-Domain NER dataset), is constructed to improve the generalization ability of models across domains. To verify the effectiveness of the proposed model, experiments were conducted on the Resume, Weibo and MSRA datasets; compared with the MECT (Multi-metadata Embedding based Cross-Transformer) model, the F1 score is improved by 0.94, 4.95 and 1.58 percentage points, respectively. To verify the entity recognition performance of the proposed model across domains, experiments were conducted on MDNER, where the F1 score reaches 95.29%. The experimental results show that multi-granularity linguistic knowledge pre-training, the extraction of hierarchical structure information from text, and the efficient pointer decoder are crucial to improving the performance of the model.
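The table-filling formulation mentioned in the abstract scores every (start, end) span for every entity type, so overlapping entities can be recovered independently. The sketch below shows that decoding shape only; a plain bidirectional GRU stands in for the multi-granularity pretrained encoder and the ON-LSTM layer, and all dimensions and the word-segmentation features are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpanTableDecoder(nn.Module):
    """Table filling for NER: every (start, end) cell gets a score per entity type,
    so nested or overlapping entities can coexist in the output table."""

    def __init__(self, vocab_size=5000, emb_dim=100, hidden=128, n_types=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.start_proj = nn.Linear(2 * hidden, hidden)
        self.end_proj = nn.Linear(2 * hidden, hidden)
        self.type_scorer = nn.Linear(2 * hidden, n_types)

    def forward(self, token_ids):                         # (batch, seq_len)
        h, _ = self.encoder(self.emb(token_ids))          # (batch, seq_len, 2*hidden)
        starts = self.start_proj(h)                       # (batch, seq_len, hidden)
        ends = self.end_proj(h)
        # Pairwise span representation: concatenate start-token and end-token views.
        b, n, d = starts.shape
        span = torch.cat([starts.unsqueeze(2).expand(b, n, n, d),
                          ends.unsqueeze(1).expand(b, n, n, d)], dim=-1)
        return self.type_scorer(span)                     # (batch, seq_len, seq_len, n_types)

model = SpanTableDecoder()
tokens = torch.randint(0, 5000, (2, 12))
table = model(tokens)
print(table.shape)            # torch.Size([2, 12, 12, 4]) -- one score table per entity type
```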