Funding: Funded by the Scientific Research Deanship at the University of Ha'il, Saudi Arabia, through Project Number RG-23092.
Abstract: Cyberbullying, a critical concern for digital safety, necessitates effective linguistic analysis tools that can navigate the complexities of language use in online spaces. To tackle this challenge, our study introduces a new approach employing the Bidirectional Encoder Representations from Transformers (BERT) base model (cased), originally pretrained on English. This model is uniquely adapted to recognize the intricate nuances of Arabic online communication, a key aspect often overlooked by conventional cyberbullying detection methods. Our model is an end-to-end solution fine-tuned on a diverse dataset of Arabic social media (SM) tweets collected from the X platform, and experimental results on this dataset demonstrate a notable increase in detection accuracy and sensitivity compared to existing methods. The proposed model, E-BERT, achieves an accuracy of 98.45%, a precision of 99.17%, a recall of 99.10%, and an F1 score of 99.14%. E-BERT not only addresses a critical gap in cyberbullying detection in Arabic online forums but also sets a precedent for applying cross-lingual pretrained models to regional language applications, offering a scalable and effective framework for enhancing online safety across Arabic-speaking communities.
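The abstract does not give the E-BERT training configuration, so the following is only a minimal sketch of how a cased BERT base model might be fine-tuned for binary cyberbullying classification with the Hugging Face libraries. The toy tweets, the two-class label scheme, and all hyperparameters are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch: fine-tune bert-base-cased as a two-class tweet classifier.
# Data, labels, and hyperparameters are placeholders for the Arabic X-platform corpus.
import torch
from torch.utils.data import Dataset
from transformers import (BertTokenizerFast, BertForSequenceClassification,
                          Trainer, TrainingArguments)

class TweetDataset(Dataset):
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True,
                             padding="max_length", max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForSequenceClassification.from_pretrained("bert-base-cased",
                                                      num_labels=2)

# Hypothetical examples; replace with the labeled Arabic tweets.
train_texts = ["example tweet one", "example tweet two"]
train_labels = [0, 1]   # 0 = benign, 1 = cyberbullying (assumed scheme)
train_ds = TweetDataset(train_texts, train_labels, tokenizer)

args = TrainingArguments(output_dir="ebert-out", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```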
Funding: Supported by the National Natural Science Foundation of China (61771051, 61675025).
Abstract: Two learning models based on the Zolu function, Zolu-continuous bag of words (ZL-CBOW) and Zolu-skip-gram (ZL-SG), are proposed. In these models, the Zolu function changes the slope of the ReLU used in word2vec. The proposed models can process extremely large datasets just as word2vec does, without increasing complexity, and they outperform several word embedding methods in both word similarity and syntactic accuracy. ZL-CBOW outperforms CBOW in accuracy by 8.43% on the capital-world training set and by 1.24% on the plural-verbs training set. Moreover, experimental simulations on word similarity and syntactic accuracy show that ZL-CBOW and ZL-SG are superior to LL-CBOW and LL-SG, respectively.
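The abstract does not define the Zolu function itself, so the sketch below only shows where such a nonlinearity slots into a CBOW-style model: the activation applied to the averaged context embedding is made pluggable, and the `zolu` placeholder (a smooth ReLU-like curve) merely marks that slot; it is not the authors' definition.

```python
# Hedged sketch: CBOW-style model with a replaceable activation, showing
# where a Zolu-type function would substitute for ReLU.
import torch
import torch.nn as nn

def zolu(x):
    # Placeholder smooth ReLU-like activation (illustrative only);
    # substitute the actual Zolu function from the paper here.
    return x * torch.sigmoid(x)

class CBOW(nn.Module):
    def __init__(self, vocab_size, dim=100, activation=zolu):
        super().__init__()
        self.in_embed = nn.Embedding(vocab_size, dim)
        self.out_proj = nn.Linear(dim, vocab_size)
        self.activation = activation

    def forward(self, context_ids):                   # (batch, window) word indices
        h = self.in_embed(context_ids).mean(dim=1)    # average the context vectors
        h = self.activation(h)                        # ZL-CBOW: Zolu in place of ReLU
        return self.out_proj(h)                       # scores over the vocabulary

# Usage: predict the centre word of each context window.
model = CBOW(vocab_size=10000)
logits = model(torch.randint(0, 10000, (32, 4)))
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10000, (32,)))
```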
Funding: Supported by the Guangdong Province Key Research and Development Plan (No. 2019B010137004), the National Natural Science Foundation of China (Nos. 61402149 and 61871140), the Scientific and Technological Project of Henan Province (Nos. 182102110065, 182102210238, and 202102310340), the Natural Science Foundation of Henan Educational Committee (No. 17B520006), the Guangdong Province Universities and Colleges Pearl River Scholar Funded Scheme (2019), and the Foundation of University Young Key Teacher of Henan Province (No. 2019GGJS040).
Abstract: In recent years, e-sports has developed rapidly, and the industry produces large amounts of well-structured data that are easy to obtain. Because of these characteristics, data mining and deep learning methods can be used to guide players and develop appropriate strategies to win games. As one of the world's most famous e-sports titles, Dota2 has a large audience base and a well-designed game system. Victory in a game is often associated with the hero match-up, yet players are often unable to pick the best lineup. To solve this problem, this paper presents an improved bidirectional Long Short-Term Memory (LSTM) neural network model for Dota2 lineup recommendation. The model uses the Continuous Bag Of Words (CBOW) model from Word2vec to generate hero vectors; CBOW predicts a word from its surrounding context in a sentence. Accordingly, a word is mapped to a hero, a sentence to a lineup, and a word vector to a hero vector, and the model recommends the final hero based on the four heroes selected first, thereby solving a series of recommendation problems.
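As a rough illustration of this two-stage idea, the sketch below learns hero vectors with CBOW (heroes as "words", lineups as "sentences") and then feeds the first four picks into a bidirectional LSTM that scores candidates for the fifth pick. The hero names, toy lineups, network sizes, and the untrained recommender are all illustrative assumptions, not the authors' architecture or data.

```python
# Hedged sketch: CBOW hero embeddings + BiLSTM fifth-hero recommender.
import numpy as np
import torch
import torch.nn as nn
from gensim.models import Word2Vec

# Toy lineups standing in for real Dota2 match data.
lineups = [["axe", "lina", "sven", "lion", "mirana"],
           ["pudge", "lina", "zeus", "sven", "lion"]]

w2v = Word2Vec(sentences=lineups, vector_size=32, window=4,
               min_count=1, sg=0)                    # sg=0 selects CBOW
heroes = list(w2v.wv.index_to_key)

class LineupRecommender(nn.Module):
    def __init__(self, dim, n_heroes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_heroes)

    def forward(self, x):                 # x: (batch, 4, dim) hero vectors
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # scores for the fifth hero

model = LineupRecommender(dim=32, n_heroes=len(heroes))
first_four = torch.from_numpy(
    np.stack([w2v.wv[h] for h in lineups[0][:4]]))[None]   # (1, 4, 32)
scores = model(first_four)
print("recommended fifth hero:", heroes[scores.argmax(dim=1).item()])
```

In practice the recommender would be trained with a cross-entropy loss over the actual fifth pick of each historical lineup before its scores are meaningful.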