Abstract: In order to convey complete meanings, Chinese commonly strings together multiple clauses in running sentences. Xu Jingning (2023, p. 66) states, "In communication, a complete expression of meaning often requires more than one clause, which is common in human languages." Domestic research on running sentences covers the definition and structural features of the running sentence, its sentential properties, sentence-pattern classifications and their criteria, and issues in translating running sentences into English. This article focuses on Chinese scholarship on the English translation of running sentences, highlighting recent achievements and identifying open issues in this line of research. A review of the literature shows, however, that current research on non-core running sentences remains limited; this paper therefore proposes strategies to address that gap.
Abstract: Sentence classification is the process of categorizing a sentence based on its context. It requires more semantic cues than tasks such as dependency parsing, which rely more on syntactic elements. Most existing approaches focus on the general semantics of a conversation without involving sentence context, recognizing conversational progress, or comparing impacts. Here, an ensemble of pre-trained language models is used to classify sentences from a conversation corpus into four categories: information, question, directive, and commission. These label sequences are used to analyze the progress of a conversation and to predict its pecking order. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, with hyperparameter tuning applied for better classification performance. The resulting Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset; it outperforms the base BERT, GPT, DistilBERT, and XLNet transformer models, and with fine-tuned parameters achieves an F1 score of 0.88.
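The abstract above does not specify how the ensemble combines its members, so the following is only a minimal sketch of one common scheme, soft voting: average each model's class-probability distribution and take the argmax. The probability rows below are made-up numbers standing in for the outputs of fine-tuned BERT, RoBERTa, and XLNet classifiers.

```python
import numpy as np

# The four labels used in the abstract.
LABELS = ["information", "question", "directive", "commission"]

def soft_vote(prob_rows):
    """Average class-probability rows from several models and pick the argmax."""
    avg = np.mean(np.asarray(prob_rows), axis=0)
    return LABELS[int(np.argmax(avg))], avg

# Illustrative per-model probabilities for one sentence (made-up numbers).
bert_probs    = [0.10, 0.70, 0.15, 0.05]
roberta_probs = [0.20, 0.55, 0.20, 0.05]
xlnet_probs   = [0.15, 0.60, 0.15, 0.10]

label, avg = soft_vote([bert_probs, roberta_probs, xlnet_probs])
print(label)  # the ensemble's consensus label
```

Soft voting is one of several plausible combination rules; hard (majority) voting over predicted labels is an equally simple alternative.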
Abstract: We use many devices in daily life to communicate with others; in the modern world, people exchange information through email, Facebook, Twitter, and many other social networks. Users lose valuable time to misspelling and retyping, and some are reluctant to type long sentences because of unnecessary words or grammatical issues. Word-prediction systems therefore help people exchange textual information more quickly, easily, and comfortably: they predict the most probable next words and let the user choose the needed word from the suggestions, helping complete the sentence correctly. This research aims to forecast the most suitable next word for any given context, working on the Bangla language. We present a process that predicts the most probable and appropriate next words and suggests a complete sentence from them. A GRU-based RNN trained on an N-gram dataset is used to develop the proposed model. We collected a large Bangla dataset from multiple sources and compared the model with other approaches such as LSTM and Naive Bayes; the proposed approach achieves better accuracy. The Unigram model reaches 88.22%, the Bi-gram model 99.24%, the Tri-gram model 97.69%, and the 4-gram and 5-gram models 99.43% and 99.78% average accuracy, respectively. We believe the proposed method can make a strong contribution to Bangla search engines.
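The paper's model is a GRU-based RNN, but the n-gram idea underlying its dataset can be sketched with a plain counting baseline: for each word, count which words follow it and suggest the most frequent continuations. The toy English corpus below is purely illustrative (the actual work uses Bangla text).

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; the real dataset is large-scale Bangla text.
corpus = "we use devices to communicate . we use email to communicate ."

def build_bigram_model(text):
    """Count, for each word, which words follow it and how often."""
    model = defaultdict(Counter)
    tokens = text.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word, k=3):
    """Return up to k most frequent continuations of `word`."""
    return [w for w, _ in model[word].most_common(k)]

model = build_bigram_model(corpus)
print(predict_next(model, "use"))  # candidate next words after "use"
```

Extending the key from one word to tuples of two, three, or four words gives the tri-gram through 5-gram variants the abstract benchmarks; a GRU replaces the counts with a learned conditional distribution.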
Abstract: [Purpose/Significance] Against the backdrop of the rapid development and profound transformation of artificial intelligence technologies and applications, new research topics and methods keep emerging in machine learning, and deep learning and reinforcement learning continue to advance. It is therefore necessary to explore how machine learning research topics evolve in different fields and to identify hot and emerging topics. [Method/Process] Taking machine learning papers in library and information science from the Web of Science database (2011-2022) as an example, this paper combines LDA and Word2vec for topic modeling and topic evolution analysis, and introduces indicators of topic strength, topic influence, topic attention, and topic novelty to identify hot topics and emerging hot topics. [Results/Conclusions] The results show that: (1) combining the semantic processing capability of Word2vec with the topic evolution capability of LDA identifies research topics more accurately and visualizes their stage-by-stage evolution; (2) machine learning research topics in library and information science fall into three broad categories, natural language processing and text analysis, data mining and analysis, and information and knowledge services, which are strongly interrelated and exhibit correlated topic evolution; (3) the designed indicators of topic strength, topic influence, and topic attention, together with the composite indicator, effectively identify the hot topics of the three periods 2011-2014, 2015-2018, and 2019-2022.
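The abstract does not give formulas for its indicators, so the following is only a minimal sketch of one natural reading of "topic strength": the share of papers in a period whose dominant LDA topic is a given topic. The topic assignments below are made-up; real ones would come from a fitted LDA model.

```python
import numpy as np

# Made-up dominant-topic assignments per paper, grouped by period.
periods = {
    "2011-2014": [0, 1, 0, 2, 0],
    "2015-2018": [1, 1, 2, 1, 0],
}

def topic_strength(assignments, topic):
    """Fraction of documents in a period whose dominant topic is `topic`."""
    a = np.asarray(assignments)
    return float(np.mean(a == topic))

for period, assignments in periods.items():
    print(period, topic_strength(assignments, 0))  # strength of topic 0
```

Comparing such per-period shares across the three windows is one simple way to surface the "hot topics" the paper's composite indicator identifies; the influence, attention, and novelty indicators would layer citation and recency signals on top.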
Abstract: Cognitive grammar, a linguistic theory that emphasizes the relationship between language and thought, offers a comprehensive way to understand the structure, semantics, and cognitive processing of noun-predicate sentences. Within this framework, this paper analyzes the semantic connections and cognitive processes in noun-predicate sentences from a semantic perspective and through exemplar theory, and discusses the motivation behind the formation of this construction, so as to provide a reference for in-depth analysis of the cognitive regularities underlying noun-predicate sentences.
Abstract: Safety is the core concern of the civil aviation industry. To address the heavy reliance on expert experience and the low efficiency of current analysis of unplanned civil aviation events, this paper proposes an analysis method combining Word2vec with a bidirectional long short-term memory (BiLSTM) neural network. First, a Word2vec model is trained on the event text corpus to produce word vectors, reducing the dimensionality of the vector space; then a BiLSTM model automatically extracts features, capturing the complete sequence information and contextual feature vectors of the event text; finally, a softmax function classifies the unplanned events. Experimental results show that the proposed method classifies more effectively, achieving better accuracy and F1 scores, and maintains stable classification performance on imbalanced data samples, demonstrating its applicability and effectiveness for analyzing unplanned civil aviation events.
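The final step of the pipeline above, the softmax classifier, can be sketched independently of the Word2vec and BiLSTM stages: given per-class scores that the BiLSTM encoder would output for one event report, softmax turns them into a probability distribution. The class names and scores below are purely illustrative, not taken from the paper.

```python
import numpy as np

def softmax(scores):
    """Numerically stable softmax: shift by the max before exponentiating."""
    z = np.asarray(scores, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical event categories and BiLSTM output scores for one report.
event_classes = ["bird strike", "runway excursion", "go-around"]
scores = [2.0, 0.5, -1.0]

probs = softmax(scores)
print(event_classes[int(np.argmax(probs))])  # predicted event category
```

Subtracting the maximum before exponentiating leaves the result unchanged mathematically but avoids overflow for large scores, which is why it is the standard formulation.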
Abstract: As a popular social platform, Weibo contains a large volume of highly subjective user comments. To mine the latent information in Weibo comment text, and to address the semantic loss and heavy dependence on manual annotation in traditional sentiment analysis models, this paper proposes a deep learning sentiment analysis model based on LSTM and Word2vec. The continuous bag-of-words (CBOW) model in Word2vec maps each word into a vector space using the contextual structure and semantic relations of its surroundings, increasing the density of the word vectors; a long short-term memory network then captures the sequential context of the text and outputs the classification prediction. The model achieves an accuracy of 95.9% in experiments; in controlled comparisons, a sentiment lexicon, RNN, and SVM achieve 52.3%, 92.7%, and 85.7% respectively, showing that the LSTM+Word2vec model is more accurate and exhibits a degree of robustness and generalizability, which is significant for personalized recommendation and online public-opinion monitoring.
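The core CBOW idea mentioned above can be sketched in a few lines: the representation of a target word's context is the average of its neighbours' embeddings, which CBOW training then uses to predict the target. The tiny vocabulary and random embedding table below are purely for illustration; a real model would learn the embeddings from the comment corpus.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"this": 0, "movie": 1, "is": 2, "great": 3}
embeddings = rng.normal(size=(len(vocab), 8))  # 8-dim toy word vectors

def cbow_context_vector(context_words):
    """Average the embeddings of the words surrounding a target word."""
    idx = [vocab[w] for w in context_words]
    return embeddings[idx].mean(axis=0)

# Context around the (held-out) target "is" in "this movie is great".
vec = cbow_context_vector(["this", "movie", "great"])
print(vec.shape)  # an 8-dimensional context representation
```

In the paper's pipeline, vectors learned this way are fed as the input sequence to the LSTM, which then handles the sentence-level ordering that a bag-of-words average discards.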