Funding: Supported by the Sichuan Science and Technology Program (2021YFQ0003, 2023YFSY0026, 2023YFH0004).
Abstract: In the field of natural language processing (NLP), various pre-trained language models have emerged in recent years, with question answering systems gaining significant attention. However, as algorithms, data, and computing power advance, models have grown ever larger, with an ever-increasing number of parameters. Consequently, model training has become more costly and less efficient. To improve the efficiency and accuracy of training while reducing model size, this paper proposes PAL-BERT, a first-order pruning model built on the ALBERT model and tailored to the characteristics of question-answering (QA) systems and language models. First, a first-order network pruning method based on the ALBERT model is designed, producing the PAL-BERT model. Then, a parameter optimization strategy for PAL-BERT is formulated, and the Mish function is used as the activation function in place of ReLU to improve performance. Finally, comparative experiments against the traditional deep learning models TextCNN and BiLSTM confirm that PAL-BERT is a pruning-based model compression method that significantly reduces training time and improves training efficiency. Compared with the traditional models, PAL-BERT significantly improves performance on the NLP task.
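For illustration, here is a minimal PyTorch sketch of the two ingredients the abstract names: first-order (gradient-based) weight pruning and the Mish activation used in place of ReLU. The exact pruning criterion used in PAL-BERT is not specified here, and all identifiers (`prune_linear_first_order`, `sparsity`) are assumptions for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish(x) = x * tanh(softplus(x)), used here in place of ReLU."""
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))

def prune_linear_first_order(layer: nn.Linear, sparsity: float = 0.3):
    """Zero out the weights with the smallest first-order importance |w * dL/dw|.

    Assumes a backward pass on the task loss has already populated layer.weight.grad.
    """
    importance = (layer.weight * layer.weight.grad).abs()
    k = max(1, int(importance.numel() * sparsity))
    threshold = importance.flatten().kthvalue(k).values
    mask = (importance > threshold).float()
    with torch.no_grad():
        layer.weight.mul_(mask)   # apply the pruning mask in place
    return mask                   # keep the mask to re-apply after optimizer steps
```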
Abstract: Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic cues than tasks such as dependency parsing, which rely more on syntactic elements. Most existing strategies focus on the general semantics of a conversation without considering the context of each sentence, tracking the conversation's progress, or comparing impacts. An ensemble of pre-trained language models is used here to classify sentences from a conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These label sequences are used to analyze the conversation's progress and predict its pecking order. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, and hyperparameter tuning is carried out to improve sentence-classification performance. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperforms the base BERT, GPT, DistilBERT, and XLNet transformer models, and the ensemble with fine-tuned parameters achieves an F1 score of 0.88.
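A minimal soft-voting sketch of the ensemble idea follows: average class probabilities from several fine-tuned transformer classifiers and take the argmax. The checkpoint paths, the averaging scheme, and the label order are illustrative assumptions, not the paper's exact EPLM-HT configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["information", "question", "directive", "commission"]
# Hypothetical paths to locally fine-tuned member models of the ensemble.
CHECKPOINTS = ["./ft-bert", "./ft-roberta", "./ft-distilbert", "./ft-xlnet"]

def ensemble_predict(sentence: str) -> str:
    probs = []
    for ckpt in CHECKPOINTS:
        tok = AutoTokenizer.from_pretrained(ckpt)
        model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=len(LABELS))
        model.eval()
        inputs = tok(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        probs.append(logits.softmax(dim=-1))
    avg = torch.stack(probs).mean(dim=0)          # soft voting across the ensemble
    return LABELS[int(avg.argmax(dim=-1))]

print(ensemble_predict("Could you send me the report by Friday?"))
```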
Funding: Supported by the Outstanding Youth Team Project of Central Universities (QNTD202308) and the Ant Group through the CCF-Ant Research Fund (CCF-AFSG 769498 RF20220214).
Abstract: Named Entity Recognition (NER) is a fundamental task in biomedical text mining, aiming to extract specific types of entities such as genes, proteins, and diseases from complex biomedical texts and categorize them into predefined entity types. This process provides basic support for the automatic construction of knowledge bases. In contrast to general texts, biomedical texts frequently contain numerous nested entities and local dependencies among those entities, which pose significant challenges to prevailing NER models. To address these issues, we propose RoBGP, a novel Chinese nested biomedical NER model based on RoBERTa and the Global Pointer. Our model first uses the RoBERTa-wwm-ext-large pretrained language model to dynamically generate word-level initial vectors. It then adds a Bidirectional Long Short-Term Memory network to capture bidirectional semantic information, effectively addressing long-distance dependencies. Finally, the Global Pointer model is employed to comprehensively recognize all nested entities in the text. We conduct extensive experiments on the Chinese medical dataset CMeEE, and the results demonstrate that RoBGP outperforms several baseline models. This research confirms the effectiveness of RoBGP in Chinese biomedical NER, providing reliable technical support for biomedical information extraction and knowledge base construction.
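The pipeline described above (pretrained encoder, BiLSTM, span-scoring head) can be sketched as below. This is a simplified assumption of the architecture: rotary position embeddings and other Global Pointer details are omitted, and the hidden sizes, head dimension, and class names are illustrative.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class RoBGPSketch(nn.Module):
    def __init__(self, encoder_name="hfl/chinese-roberta-wwm-ext-large",
                 num_types=9, head_dim=64, lstm_hidden=512):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        enc_dim = self.encoder.config.hidden_size
        self.bilstm = nn.LSTM(enc_dim, lstm_hidden, batch_first=True, bidirectional=True)
        # One (start, end) projection pair per entity type (CMeEE defines 9 categories).
        self.span_proj = nn.Linear(2 * lstm_hidden, num_types * 2 * head_dim)
        self.num_types, self.head_dim = num_types, head_dim

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.bilstm(h)                                   # bidirectional context
        qk = self.span_proj(h)                                  # [B, L, T*2*D]
        B, L, _ = qk.shape
        qk = qk.view(B, L, self.num_types, 2, self.head_dim)
        q, k = qk[..., 0, :], qk[..., 1, :]                     # start / end representations
        # Score every (type, start, end) triple; nested spans are scored independently.
        scores = torch.einsum("bltd,bmtd->btlm", q, k) / self.head_dim ** 0.5
        return scores                                           # [B, T, L, L]; upper triangle holds valid spans
```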
Funding: Partially supported by the National Natural Science Foundation of China (11590770-4, U1536117), the National Key Research and Development Program of China (2016YFB0801203, 2016YFB0801200), the Key Science and Technology Project of the Xinjiang Uygur Autonomous Region (2016A03007-1), and the Pre-research Project for Equipment of General Information System (JZX2017-0994/Y306).
Abstract: It is well known that automatic speech recognition (ASR) is a resource-intensive task: training a state-of-the-art deep neural network acoustic model requires a sufficient amount of data. For low-resource languages where scripted speech is difficult to obtain, data sparsity is the main factor limiting the performance of a speech recognition system. In this paper, several knowledge-transfer methods are investigated to overcome the data sparsity problem with the help of high-resource languages. The first is a pre-training and fine-tuning (PT/FT) method, in which the parameters of the hidden layers are initialized from a well-trained neural network. Second, progressive neural networks (Prognets) are investigated; thanks to the lateral connections in their architecture, Prognets are immune to the forgetting effect and are superior at knowledge transfer. Finally, bottleneck features (BNF) are extracted using cross-lingual deep neural networks and serve as enhanced features to improve the performance of the ASR system. Experiments are conducted on a low-resource Vietnamese dataset. The results show that all three methods yield significant gains over the baseline system, with the Prognets acoustic model performing best. Further improvements are obtained by combining the Prognets model and bottleneck features.
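As a concrete illustration of the PT/FT transfer described above, the sketch below copies the hidden layers of a well-trained high-resource acoustic model into the target model and replaces only the output layer for the low-resource target set. The layer sizes, target counts, and function names are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

def make_dnn(input_dim=440, hidden_dim=1024, num_hidden=6, num_targets=3000):
    """Simple feed-forward DNN acoustic model: stacked hidden layers plus an output layer."""
    layers, dim = [], input_dim
    for _ in range(num_hidden):
        layers += [nn.Linear(dim, hidden_dim), nn.ReLU()]
        dim = hidden_dim
    return nn.Sequential(*layers, nn.Linear(dim, num_targets))

source = make_dnn(num_targets=6000)   # high-resource acoustic model (assumed already trained)
target = make_dnn(num_targets=1500)   # low-resource (e.g. Vietnamese) model to be fine-tuned

# Initialize every hidden layer from the source network; the final output layer
# is left randomly initialized because the target senone set differs.
with torch.no_grad():
    for src_layer, tgt_layer in zip(list(source)[:-1], list(target)[:-1]):
        if isinstance(src_layer, nn.Linear):
            tgt_layer.weight.copy_(src_layer.weight)
            tgt_layer.bias.copy_(src_layer.bias)
```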
Abstract: To address the problems of under-utilized pre-trained language models, poor use of injected external knowledge, and low recognition rates for nested named entities in named entity recognition for the agricultural disease domain, this paper proposes CP-MRC (Continuous prompts for machine reading comprehension), a named entity recognition model based on continuous prompt injection and a pointer network. The model introduces the BERT (Bidirectional Encoder Representations from Transformers) pretrained model and freezes its original parameters, preserving the text-representation ability acquired during pretraining. To improve the model's adaptability to domain data, continuous trainable prompt vectors are inserted into each Transformer layer; to improve the accuracy of nested named entity recognition, a pointer network is used to extract entity spans. Comparative experiments were conducted on a self-built agricultural disease dataset containing 2,933 text samples, 8 entity types, and 10,414 entities in total. The results show that CP-MRC achieves a precision of 83.55%, a recall of 81.4%, and an F1 score of 82.4%, outperforming the other models; on the two nested entity types, pathogen and crop, its F1 score exceeds the other models by 3 and 13 percentage points respectively, a clear improvement in nested entity recognition. The proposed model maintains good recognition performance with only a small number of trainable parameters, offering an approach for applying larger pretrained models to information extraction tasks.
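A minimal sketch of this idea is given below: freeze BERT, learn continuous prompt vectors, and read out entity spans with start/end pointer heads. As a simplification the trainable prompts are prepended only at the embedding layer (the model described above inserts them into every Transformer layer), and all sizes and identifiers are assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class CPMRCSketch(nn.Module):
    def __init__(self, backbone="bert-base-chinese", prompt_len=16):
        super().__init__()
        self.bert = AutoModel.from_pretrained(backbone)
        for p in self.bert.parameters():
            p.requires_grad = False                      # freeze the pretrained backbone
        hidden = self.bert.config.hidden_size
        self.prompts = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)
        self.start_head = nn.Linear(hidden, 1)           # pointer network: start positions
        self.end_head = nn.Linear(hidden, 1)             # pointer network: end positions

    def forward(self, input_ids, attention_mask):
        tok_emb = self.bert.embeddings(input_ids)        # [B, L, H]
        B = tok_emb.size(0)
        prompt = self.prompts.unsqueeze(0).expand(B, -1, -1)
        x = torch.cat([prompt, tok_emb], dim=1)
        mask = torch.cat([torch.ones(B, prompt.size(1),
                                     device=attention_mask.device,
                                     dtype=attention_mask.dtype),
                          attention_mask], dim=1)
        ext_mask = self.bert.get_extended_attention_mask(mask, mask.shape)
        h = self.bert.encoder(x, attention_mask=ext_mask).last_hidden_state
        h = h[:, prompt.size(1):]                        # drop the prompt positions
        return self.start_head(h).squeeze(-1), self.end_head(h).squeeze(-1)
```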
Abstract: Identifying enterprise risks from news reports makes it possible to quickly locate the risk categories an enterprise is involved in, helping the enterprise respond in time. In general, news public opinion risk identification is a multi-class classification task over risk labels. Deep learning methods represented by BERT, which follow a pretrain-then-fine-tune paradigm, perform well on text classification. However, labeled data in the news public opinion domain is scarce, making this a few-shot learning problem. The new paradigm represented by prompt learning offers a new way to improve few-shot classification performance, and existing studies show that it outperforms the pretrain-then-fine-tune approach on many tasks. Inspired by this work, a prompt-learning-based method for news public opinion risk identification is proposed: on top of a pretrained BERT model, risk prompt templates are designed following the idea of prompt learning, and after training with the MLM (masked language model) objective, the predicted label words are mapped to the existing risk labels through answer engineering. Experimental results show that on few-shot subsets of different sizes drawn from the news public opinion dataset, the prompt-learning training method outperforms fine-tuning.
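The sketch below illustrates the prompt-learning setup described above: wrap the news text in a risk prompt template, let a masked language model fill the [MASK] slot, and map the predicted word to a risk label through a verbalizer. The template text and the label-word mapping are illustrative assumptions, not the paper's actual answer engineering.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-chinese")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")

# Hypothetical verbalizer: single-character label words mapped to risk labels.
VERBALIZER = {"债": "债务风险", "法": "法律风险", "信": "信用风险"}

def classify_risk(news_text: str) -> str:
    prompt = f"{news_text}该新闻涉及的风险类型是{tok.mask_token}。"
    inputs = tok(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_pos]       # distribution over the vocabulary
    # Restrict the prediction to the verbalizer's label words (answer engineering).
    word_ids = tok.convert_tokens_to_ids(list(VERBALIZER))
    best = list(VERBALIZER)[int(logits[word_ids].argmax())]
    return VERBALIZER[best]

print(classify_risk("某公司因未能按期兑付债券本息被多家机构下调评级。"))
```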