Arabic dialect identification is essential in Natural Language Processing (NLP) and forms a critical component of applications such as machine translation, sentiment analysis, and cross-language text generation. The difficulties in differentiating between Arabic dialects have garnered more attention in the last ten years, particularly on social media. These difficulties result from the overlapping vocabulary of the dialects, the fluidity of online language use, and the closeness of related dialects. Managing low-resource dialects and adapting to the ever-changing linguistic trends on social media platforms present additional challenges. A robust dialect-recognition technique is essential to improving communication technology and cross-cultural understanding in light of the growth of social media usage. To distinguish Arabic dialects on social media, this research proposes a hybrid Deep Learning (DL) approach that combines the Long Short-Term Memory (LSTM) and Bidirectional Long Short-Term Memory (BiLSTM) architectures. A new textual dataset focusing on three main dialects, i.e., Levantine, Saudi, and Egyptian, is also released. The dataset contains approximately 11,000 user-generated comments from Twitter, carefully annotated to guarantee accuracy in dialect classification. Several experiments compare the proposed model against Transformers, DL models, and basic machine-learning classifiers, using methodologies including TF-IDF, word embedding, and self-attention mechanisms. The proposed model outperforms the others, reaching a remarkable 96.54% accuracy. This study advances the discipline by presenting a new dataset and a practical model for Arabic dialect identification, which may prove valuable for future work in sociolinguistic studies and NLP.
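Among the feature-extraction methods the abstract lists, TF-IDF is the simplest to illustrate. The sketch below is a minimal pure-Python version with smoothed inverse document frequency, not the authors' implementation; the romanized dialect comments and their dialect labels are invented for the example.

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Compute smoothed TF-IDF vectors for a list of tokenized documents."""
    n_docs = len(corpus)
    # document frequency: in how many documents each term appears
    df = Counter()
    for doc in corpus:
        df.update(set(doc))
    vectors = []
    for doc in corpus:
        tf = Counter(doc)
        total = len(doc)
        vectors.append({
            term: (count / total) * math.log((1 + n_docs) / (1 + df[term]))
            for term, count in tf.items()
        })
    return vectors

corpus = [
    "ana mabsut awi".split(),   # invented romanized Egyptian-style comment
    "ana mabsut ktir".split(),  # invented romanized Levantine-style comment
    "wsh akhbark".split(),      # invented romanized Saudi-style comment
]
vecs = tf_idf(corpus)
```

Terms shared across dialects ("ana", "mabsut") receive lower weights than dialect-specific markers such as "awi" or "ktir", which is exactly why TF-IDF is a reasonable baseline feature for dialect classification.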
Natural language parsing is a task of great importance and extreme difficulty. In this paper, we present a full Chinese parsing system based on a two-stage approach. Rather than identifying all phrases with a uniform model, we utilize a divide-and-conquer strategy. We propose an effective and fast method based on a Markov model to identify base phrases. Then we make the first attempt to extend one of the best English parsing models, i.e., the head-driven model, to recognize Chinese complex phrases. Our two-stage approach is superior to the uniform approach in two respects. First, it creates synergy between the Markov model and the head-driven model. Second, it reduces the complexity of full Chinese parsing and makes the parsing system space- and time-efficient. Evaluated with PARSEVAL measures on the open test set, the parsing system achieves 87.53% precision and 87.95% recall.
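The abstract does not detail its Markov-model base-phrase identifier, but decoding such a model typically uses the Viterbi algorithm over phrase-boundary tags. The sketch below is a generic first-order Viterbi decoder with invented BIO-style states and probabilities, not the paper's model; a real system would estimate these from a treebank.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely tag sequence under a first-order (bigram) Markov model."""
    # V[t][s]: best log-probability of any tag path ending in state s at step t
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s].get(obs[0], 1e-12))
          for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = (V[t - 1][prev] + math.log(trans_p[prev][s])
                       + math.log(emit_p[s].get(obs[t], 1e-12)))
            back[t][s] = prev
    # trace the best path backwards through the backpointers
    state = max(states, key=lambda s: V[-1][s])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = back[t][state]
        path.append(state)
    return path[::-1]

# Toy example: B = begins a base phrase, I = inside one.
# All probabilities below are invented for illustration.
states = ["B", "I"]
start_p = {"B": 0.9, "I": 0.1}
trans_p = {"B": {"B": 0.4, "I": 0.6}, "I": {"B": 0.5, "I": 0.5}}
emit_p = {"B": {"the": 0.9, "cat": 0.05}, "I": {"the": 0.05, "cat": 0.9}}
tags = viterbi(["the", "cat"], states, start_p, trans_p, emit_p)
```

Decoding runs in O(T·|states|²) time, which is the efficiency property that makes a Markov-model first stage attractive before the more expensive head-driven second stage.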
At Peking University Computer Research Institute (PUCRI), a method of inputting Chinese sentences based on words has been developed. To reduce the difficulty of choosing one word from among others sharing the same feature, a grammatical parsing technique is applied to the method, and good results have been achieved. This article describes the outline of the method, the principle of applying grammatical formulas, and the branch-cutting algorithm used to speed up the grammatical parsing.
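The branch-cutting algorithm is described only at a high level above. In candidate-word selection it amounts to pruning low-scoring branches of the search tree, which can be sketched as a beam search; the candidate lists, scores, and grammatical bonus below are all invented for illustration, not taken from the article.

```python
def beam_search(candidates, bigram_bonus, beam_width=2):
    """Expand candidate word sequences position by position, keeping only
    the top-`beam_width` partial sequences (cutting the other branches)."""
    beams = [([], 0.0)]  # (sequence so far, accumulated score)
    for options in candidates:
        expanded = []
        for seq, total in beams:
            prev = seq[-1] if seq else None
            for word, score in options:
                expanded.append((seq + [word],
                                 total + score + bigram_bonus(prev, word)))
        expanded.sort(key=lambda b: b[1], reverse=True)
        beams = expanded[:beam_width]  # branch cutting: drop low-scoring paths
    return beams[0][0]

# Invented homophone candidates for two syllables, with an invented bonus
# rewarding a grammatically compatible pair; a real input method would
# derive both from its dictionary and grammatical formulas.
candidates = [[("我", 0.9), ("卧", 0.1)], [("是", 0.8), ("事", 0.5)]]
bonus = lambda prev, w: 0.3 if (prev == "我" and w == "是") else 0.0
best = beam_search(candidates, bonus)
```

Pruning to a fixed beam width keeps the work per input syllable bounded, which is the speed-up the branch-cutting algorithm is meant to provide over exhaustive parsing of every candidate combination.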
Funding: the Deanship of Graduate Studies and Scientific Research at Qassim University (QU-APC-2024-9/1).
Funding: the National High Technology Research and Development Program of China (863 Program) and the National Natural Science Foundation of China.