Sentence classification is the task of categorizing a sentence based on its context. It requires richer semantic cues than other tasks, such as dependency parsing, which relies more on syntactic elements. Most existing strategies focus on the general semantics of a conversation without considering the context of each sentence, tracking the conversation's progress, or comparing impacts. An ensemble of pre-trained language models is adopted here to classify sentences from a conversation corpus into four categories: information, question, directive, and commission. These label sequences are used to analyze the progress of a conversation and predict its pecking order. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, and hyperparameter tuning is carried out to improve classification performance. The resulting Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset and outperforms the base BERT, GPT, DistilBERT, and XLNet transformer models, achieving an F1 score of 0.88 with fine-tuned parameters.
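The abstract does not specify how the models' predictions are combined. A common way to ensemble several fine-tuned classifiers is soft voting over their class probabilities; the sketch below illustrates this idea with made-up probability values for the four dialogue-act labels (the function name and numbers are illustrative assumptions, not the paper's implementation).

```python
import numpy as np

LABELS = ["information", "question", "directive", "commission"]

def ensemble_predict(prob_matrix, weights=None):
    """Average (optionally weighted) class probabilities from several
    models and return the winning label plus the averaged distribution."""
    probs = np.asarray(prob_matrix, dtype=float)  # shape: (n_models, n_classes)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)
    avg = np.average(probs, axis=0, weights=weights)
    return LABELS[int(np.argmax(avg))], avg

# Simulated softmax outputs from three models (e.g. BERT, RoBERTa, XLNet)
# for one input sentence; the values are invented for demonstration.
outputs = [
    [0.10, 0.70, 0.15, 0.05],
    [0.05, 0.55, 0.30, 0.10],
    [0.20, 0.60, 0.15, 0.05],
]
label, avg = ensemble_predict(outputs)
print(label)  # question
```

Per-model weights could themselves be treated as hyperparameters and tuned on a validation split, which fits the paper's hyperparameter-tuning theme.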
Purpose: To uncover evaluation information on the academic contribution of research papers cited by peers, based on the content of citing papers, and to provide an evidence-based tool for evaluating the academic value of cited papers. Design/methodology/approach: CiteOpinion uses a deep learning model to automatically extract citing sentences from representative citing papers. Starting from an analysis of these citing sentences, it identifies the major academic contribution points of the cited paper, positive/negative evaluations by citing authors, and shifts in the subjects of subsequent citing authors, by means of recognizing categories of moves (problems, methods, conclusions, etc.), sentiment analysis, and topic clustering. Findings: Citing sentences in a citing paper contain substantial evidence useful for academic evaluation. They can be used to objectively and authentically reveal the nature and degree of the cited paper's contribution, beyond simple citation statistics. Practical implications: The evidence-based evaluation tool CiteOpinion can provide an objective and in-depth basis for evaluating the academic value of the representative papers of researchers, research teams, and institutions. Originality/value: No similar practical tool was found among the papers retrieved. Research limitations: Acquiring the full text of citing papers is difficult; the calculation based on the sentiment scores of citing sentences needs refinement; and the tool is currently used only for academic contribution evaluation, while its value in policy studies, technical application, and the promotion of science has not yet been tested.
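CiteOpinion's sentiment scoring of citing sentences is done with a deep learning model; as a minimal sketch of the underlying idea only, the toy function below scores invented citing sentences against small hand-picked polarity word lists. The lexicons, sentences, and function name are all assumptions for illustration, not the tool's actual method.

```python
# Toy lexicon-based polarity scoring of citing sentences (illustrative only).
POSITIVE = {"novel", "effective", "important", "significant", "robust"}
NEGATIVE = {"limited", "fails", "inconsistent", "weak"}

def polarity(citing_sentence):
    """Count positive minus negative cue words in one citing sentence."""
    words = {w.strip(".,;").lower() for w in citing_sentence.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

sentences = [
    "Smith et al. propose a novel and effective sampling method.",
    "However, their evaluation is limited to a single corpus.",
]
print([polarity(s) for s in sentences])  # [2, -1]
```

Aggregating such per-sentence scores across all citing papers is one simple way to summarize how a cited paper is received, though the abstract notes the real score calculation still needs refinement.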
We use many devices in our daily lives to communicate with others. In the modern world, people use email, Facebook, Twitter, and many other social networking sites to exchange information. People lose valuable time misspelling and retyping, and some are reluctant to type long sentences because they run into unnecessary words or grammatical issues. Word prediction systems therefore help people exchange textual information more quickly, easily, and comfortably: they predict the most probable next words and let users choose the needed word from the suggestions, helping the writer complete the sentence correctly. This research aims to forecast the most suitable next word to complete a sentence for any given context, working on the Bangla language. We present a process that predicts the most probable proper next words and suggests a complete sentence using the predicted words. A GRU-based RNN is trained on an n-gram dataset to develop the proposed model. We collected a large Bangla dataset from multiple sources and compared our approach against others such as LSTM and Naive Bayes; the proposed approach achieves better accuracy. The unigram model achieves 88.22%, the bigram model 99.24%, the trigram model 97.69%, and the 4-gram and 5-gram models 99.43% and 99.78% average accuracy, respectively. We believe our proposed method can have a profound impact on Bangla search engines.
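The n-gram idea underlying the paper's GRU model can be shown with a minimal count-based bigram predictor. This is a sketch of the concept only, using invented English stand-in sentences rather than the Bangla corpus, and it does not reproduce the paper's neural model.

```python
from collections import Counter, defaultdict

# Tiny stand-in training corpus (illustrative, not the paper's data).
corpus = [
    "i love natural language processing",
    "i love deep learning",
    "i study natural language processing",
]

# Count bigram transitions: previous word -> Counter of next words.
bigrams = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[prev][nxt] += 1

def predict_next(word, k=2):
    """Return up to k most frequent words observed after `word`."""
    return [w for w, _ in bigrams[word].most_common(k)]

print(predict_next("natural"))  # ['language']
```

A GRU replaces these raw counts with a learned hidden state, which lets the model generalize to word sequences never seen verbatim in training, something a pure count-based n-gram model cannot do.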
Cognitive grammar, as a linguistic theory that emphasizes the relationship between language and thinking, provides a more comprehensive way to understand the structure, semantics, and cognitive processing of noun-predicate sentences. Within this framework, this paper analyzes the semantic connections and cognitive processes in noun-predicate sentences from a semantic perspective using exemplar-based analysis, and discusses the motivation behind the formation of this construction, so as to provide a reference for in-depth analysis of the cognitive laws underlying noun-predicate sentences.
Correction to: Nano-Micro Lett. (2023) 15:223, https://doi.org/10.1007/s40820-023-01189-0. In this article, the author's name "Hao-Chung Kuo" was incorrectly written as "Hao-Chung Guo". In addition, in the last sentence of the first paragraph of the Introduction, the text '(20-20)' should have read '(20-21)'. The original article has been corrected.
This paper analyzes the six types of English imperative sentences proposed by Chen (1984) from the perspective of causal-chain windowing. It concludes that Talmy's causal-chain windowing approach, along with the cognitive underpinnings of causal windowing and gapping, is applicable to English imperative structures, and that, generally speaking, the final portion of an imperative sentence is always windowed while the intermediate portions are gapped.