Abstract: Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic cues than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without considering sentence-level context, conversation progress, or comparative impact. An ensemble of pre-trained language models is adopted here to classify sentences from a conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used to analyze conversation progress and predict the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, and hyperparameter tuning is carried out to improve sentence-classification performance. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models; with fine-tuned parameters, the ensemble achieved an F1 score of 0.88.
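The abstract does not specify how the ensemble combines its member models. A common choice for this kind of system is soft voting, where each model's predicted class probabilities are averaged and the argmax label is chosen. The sketch below illustrates that aggregation over the paper's four labels; the probability values and model order are illustrative assumptions, not the paper's data (the real EPLM-HT system would obtain them from the fine-tuned transformers).

```python
import numpy as np

# The four sentence categories from the abstract.
LABELS = ["information", "question", "directive", "commission"]

# Hypothetical per-model class probabilities for one sentence.
# Rows stand in for BERT, RoBERTa, GPT, DistilBERT and XLNet outputs.
model_probs = np.array([
    [0.70, 0.10, 0.15, 0.05],
    [0.60, 0.20, 0.10, 0.10],
    [0.55, 0.25, 0.10, 0.10],
    [0.65, 0.15, 0.10, 0.10],
    [0.50, 0.30, 0.10, 0.10],
])

def soft_vote(probs: np.ndarray) -> str:
    """Average class probabilities across models and return the argmax label."""
    mean_probs = probs.mean(axis=0)
    return LABELS[int(mean_probs.argmax())]

print(soft_vote(model_probs))  # -> "information"
```

Soft voting is only one plausible combination rule; majority voting over hard labels or a learned meta-classifier would fit the same interface.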
Abstract: To fully mine the existing solutions and technical knowledge in patent texts, a method based on pre-trained language models is proposed, grounded in the theory of inventive problem solving (TRIZ), and applied to the classification of Chinese patents by TRIZ inventive principles. Using whole-word masking, the Chinese RoBERTa model is further pre-trained on patent datasets (titles and abstracts) of different sizes, producing two patent-domain models, RoBERTa_patent1.0 and RoBERTa_patent2.0. A fully connected layer is added on top of each, yielding three patent classification models based on RoBERTa, RoBERTa_patent1.0, and RoBERTa_patent2.0. The three classifiers are then trained and tested on a patent dataset constructed around the TRIZ inventive principles. Experimental results show that RoBERTa_patent2.0_IP achieves the highest accuracy, macro precision, macro recall, and macro F1, reaching 96%, 95.69%, 94%, and 94.84% respectively. The method realizes automatic classification of Chinese patent texts by TRIZ inventive principles and can help designers understand and apply those principles in innovative product design.
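This abstract reports macro-averaged precision, recall, and F1 alongside accuracy. A minimal sketch of how those macro metrics are computed is shown below: per-class scores are calculated first and then averaged with equal weight per class (one common convention; macro F1 here is the mean of per-class F1 values). The toy labels are invented for illustration, not the paper's patent data.

```python
from collections import defaultdict

def macro_scores(y_true, y_pred):
    """Return (macro precision, macro recall, macro F1) over all observed classes."""
    classes = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but true class was t
            fn[t] += 1  # true class t was missed
    precs, recs, f1s = [], [], []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec)
        recs.append(rec)
        f1s.append(f1)
    n = len(classes)
    return sum(precs) / n, sum(recs) / n, sum(f1s) / n

# Toy example with two classes.
p, r, f1 = macro_scores(["a", "a", "b", "b"], ["a", "b", "b", "b"])
```

Because every class contributes equally regardless of its frequency, macro averaging is a sensible choice for patent classification, where inventive-principle classes are typically imbalanced.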