Abstract: In traditional sentiment analysis, word vectors generated by the Word2Vec (word to vector) model cannot effectively represent polysemous words, and classic neural network models cannot fully extract semantic features. To address this, a Chinese text sentiment analysis model, BERT-BGRU-Att, is proposed, based on BERT (bidirectional encoder representations from transformers), a bidirectional gated recurrent unit (BGRU), and an attention mechanism (Att). First, a pretrained BERT model converts Chinese text into a matrix-vector representation; next, a BGRU network fused with an attention mechanism extracts features from the text; finally, the weighted features are fed into a Softmax classifier for prediction. Experiments on an online review dataset show that the model predicts better than related network models, demonstrating its effectiveness for Chinese text sentiment analysis.
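As a rough PyTorch illustration of the BGRU-plus-attention stage (the BERT encoder is elided, and the layer sizes, class count, and additive-attention design are this sketch's assumptions rather than the paper's):

```python
import torch
import torch.nn as nn

class BGRUAttClassifier(nn.Module):
    """Bidirectional GRU with additive attention pooling and a Softmax output."""
    def __init__(self, embed_dim: int = 768, hidden: int = 128, num_classes: int = 2):
        super().__init__()
        self.bgru = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)    # per-token attention scorer (assumed design)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                        # x: (batch, seq_len, embed_dim), e.g. BERT output
        h, _ = self.bgru(x)                      # (batch, seq_len, 2*hidden)
        w = torch.softmax(self.score(h), dim=1)  # attention weights over tokens
        pooled = (w * h).sum(dim=1)              # weighted sum over the sequence
        return torch.log_softmax(self.fc(pooled), dim=-1)

model = BGRUAttClassifier()
dummy = torch.randn(4, 32, 768)                  # stand-in for BERT character vectors
print(model(dummy).shape)                        # torch.Size([4, 2])
```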
Abstract: Corpora in the civil aviation air-ground communication domain are hard to obtain and their entities are unevenly distributed, and intent information extraction suffers from insufficiently standardized entities and accuracy that needs improvement. To better extract air-ground communication intent, an ontology-fused intent information mining method based on bidirectional encoder representations from transformers (BERT) and generative adversarial networks (GAN) is proposed, with flight-pool information introduced to verify and correct part of the extracted information, yielding structured information that the air traffic control (ATC) system can interpret. First, an improved GAN model generates air-ground communication text, effectively augmenting the data, balancing the distribution of entity types, and enlarging the dataset. Next, intents are classified and annotated according to the ontology rules defined by the European Single Sky air traffic management project. Then, a pretrained BERT model produces character vectors and resolves polysemy; a bidirectional long short-term memory (BiLSTM) network encodes the text in both directions to extract contextual semantic features, which are fed into a conditional random field (CRF) model for inference, learning and constraining label dependencies to obtain the globally optimal result. Finally, an edit distance (ED) algorithm checks the plausibility of the extracted intent information and corrects it. Comparative experiments show that the proposed method reaches a macro-averaged F1 of 98.75% and outperforms other mainstream models at intent mining on the civil aviation air-ground communication dataset, laying a foundation for its integration into the digitalization process.
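As a minimal sketch of the final verification step, the snippet below corrects a recognized callsign against a flight pool by minimum edit distance; the pool entries, distance threshold, and function names are illustrative assumptions, not details from the paper.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming (one rolling row)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # deletion, insertion, substitution/match
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def correct_callsign(recognized: str, flight_pool: list[str], max_dist: int = 2) -> str:
    """Replace a recognized callsign with its nearest flight-pool entry if close enough."""
    best = min(flight_pool, key=lambda c: edit_distance(recognized, c))
    return best if edit_distance(recognized, best) <= max_dist else recognized

pool = ["CCA1234", "CSN5678", "CES2101"]   # hypothetical flight pool
print(correct_callsign("CCA1224", pool))   # -> CCA1234 (one substitution away)
```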
Abstract: Research on Chinese electronic medical record named entity recognition has focused mainly on progress-note datasets; this paper proposes named entity recognition on surgical anesthesia records. By fine-tuning the A Lite Bidirectional Encoder Representations from Transformers (ALBERT) pretrained model on the dataset and training with the Trainer class of the Transformers library, the method identifies anesthesia-event named entities in surgical anesthesia records and derives values of complex anesthesia quality-control indicators. The paper offers a reusable approach to named entity recognition on surgical anesthesia records and a new solution for computing complex anesthesia quality-control indicator values.
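A minimal, self-contained sketch of fine-tuning ALBERT for token classification with the Transformers Trainer; the abstract names neither the exact checkpoint nor the tag scheme, so the albert-base-v2 checkpoint, toy corpus, and label set below are placeholders.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AlbertForTokenClassification,
                          DataCollatorForTokenClassification,
                          Trainer, TrainingArguments)

labels = ["O", "B-ANES_EVENT", "I-ANES_EVENT"]   # hypothetical anesthesia-event tags
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AlbertForTokenClassification.from_pretrained("albert-base-v2",
                                                     num_labels=len(labels))

raw = Dataset.from_dict({                        # toy stand-in corpus
    "tokens": [["induction", "started", "at", "nine"],
               ["intubation", "completed"]],
    "ner_tags": [[1, 2, 0, 0], [1, 2]],
})

def encode(batch):
    """Tokenize pre-split words and align word-level tags to subword tokens."""
    enc = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    enc["labels"] = [
        [-100 if wid is None else tags[wid] for wid in enc.word_ids(i)]
        for i, tags in enumerate(batch["ner_tags"])
    ]
    return enc

train = raw.map(encode, batched=True, remove_columns=raw.column_names)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="albert-anesthesia-ner",
                           num_train_epochs=1, per_device_train_batch_size=2),
    train_dataset=train,
    data_collator=DataCollatorForTokenClassification(tokenizer),  # pads labels with -100
)
trainer.train()
```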
Abstract: First, the strong contextual understanding of the bidirectional encoder representations from transformers (BERT) model is used to extract deep semantic features from data-law texts; next, a fine-grained feature extraction layer based on an attention mechanism is introduced to focus on the parts of the text most relevant to data-law question answering; finally, the model is trained and evaluated on the collected legal question answering dataset. The results show that, compared with several traditional single models, the proposed model improves on key metrics including accuracy, precision, recall, and F1 score, indicating that the system understands and answers complex data-law questions more effectively and provides higher-quality question answering for both data-law professionals and the general public.
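As a rough illustration of a fine-grained attention layer on top of BERT, the sketch below pools token features by learned relevance weights; the abstract does not specify the paper's exact layer, so the pooling design, bert-base-chinese checkpoint, and class count here are assumptions.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertAttnQA(nn.Module):
    """BERT encoder followed by an attention layer that re-weights token features."""
    def __init__(self, num_classes: int, name: str = "bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        dim = self.bert.config.hidden_size
        self.query = nn.Parameter(torch.randn(dim))   # learned "what matters" query
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        scores = h @ self.query                       # (batch, seq_len) relevance scores
        scores = scores.masked_fill(attention_mask == 0, -1e9)   # ignore padding
        w = torch.softmax(scores, dim=-1).unsqueeze(-1)
        return self.fc((w * h).sum(dim=1))            # attention-pooled representation

tok = BertTokenizer.from_pretrained("bert-base-chinese")
batch = tok(["数据出境需要什么条件?"], return_tensors="pt", padding=True)  # sample legal query
logits = BertAttnQA(num_classes=10)(batch["input_ids"], batch["attention_mask"])
```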
Abstract: To address the inaccurate entity feature extraction and low recognition efficiency of traditional named entity recognition methods used in building knowledge graphs for the oil and gas domain, a named entity recognition method based on the BERT-BiLSTM-CRF model is proposed. The method first uses a pretrained BERT (bidirectional encoder representations from transformers) model to obtain semantic word vectors for the input sequence; the trained vectors are then fed into a bi-directional long short-term memory (BiLSTM) network to capture further contextual features; finally, a conditional random field (CRF) layer applies its labeling rules and sequence decoding to output the maximum-probability label sequence, forming the named entity recognition framework for the oil and gas domain. The BERT-BiLSTM-CRF model was compared with two other named entity recognition models (BiLSTM-CRF and BiLSTM-Attention-CRF) on a self-built dataset of more than 30,000 text records covering four entity types. Experimental results show that the BERT-BiLSTM-CRF model reaches a precision (P) of 91.3%, recall (R) of 94.5%, and F1 of 92.9%, outperforming the other two models.
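A condensed sketch of the BERT-BiLSTM-CRF stack described above, using the third-party pytorch-crf package for the CRF layer; the checkpoint, hidden size, and tag count are placeholder assumptions.

```python
import torch.nn as nn
from torchcrf import CRF                      # pip install pytorch-crf
from transformers import BertModel

class BertBiLSTMCRF(nn.Module):
    """BERT word vectors -> BiLSTM context features -> CRF sequence decoding."""
    def __init__(self, num_tags: int, lstm_hidden: int = 128):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")  # assumed checkpoint
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * lstm_hidden, num_tags)  # emission scores per tag
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(h)
        emissions = self.fc(h)
        mask = attention_mask.bool()
        if tags is not None:                  # training: negative log-likelihood loss
            return -self.crf(emissions, tags, mask=mask)
        return self.crf.decode(emissions, mask=mask)    # inference: best tag path
```

During training the model returns the negative log-likelihood of the gold tag sequence; at inference, crf.decode performs Viterbi decoding to return the maximum-probability label sequence, mirroring the decoding step the abstract describes.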
Funding: Funded by the Scientific Research Deanship at the University of Ha'il, Saudi Arabia, through Project Number RG-23092.
Abstract: Cyberbullying, a critical concern for digital safety, necessitates effective linguistic analysis tools that can navigate the complexities of language use in online spaces. To tackle this challenge, our study introduces a new approach employing the Bidirectional Encoder Representations from Transformers (BERT) base model (cased), originally pretrained in English. This model is uniquely adapted to recognize the intricate nuances of Arabic online communication, a key aspect often overlooked in conventional cyberbullying detection methods. Our model is an end-to-end solution fine-tuned on a diverse dataset of Arabic social media (SM) tweets collected from the 'X platform'. Experimental results demonstrate a notable increase in detection accuracy and sensitivity compared to existing methods: E-BERT achieves an accuracy of 98.45%, precision of 99.17%, recall of 99.10%, and an F1 score of 99.14%. The proposed E-BERT not only addresses a critical gap in cyberbullying detection in Arabic online forums but also sets a precedent for applying cross-lingual pretrained models in regional language applications, offering a scalable and effective framework for enhancing online safety across Arabic-speaking communities.
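A minimal sketch of the fine-tuning setup, starting from the bert-base-cased checkpoint named above; the toy data, hyperparameters, and metric wiring are placeholders rather than the paper's configuration.

```python
import numpy as np
from datasets import Dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased",
                                                           num_labels=2)

data = Dataset.from_dict({                    # toy stand-in for the Arabic tweet data
    "text": ["you are awful", "have a nice day"],
    "label": [1, 0],                          # 1 = cyberbullying, 0 = benign
}).map(lambda b: tok(b["text"], truncation=True, padding="max_length",
                     max_length=64), batched=True)

def metrics(pred):
    """Accuracy / precision / recall / F1, the metrics reported in the paper."""
    y, yhat = pred.label_ids, np.argmax(pred.predictions, axis=1)
    p, r, f1, _ = precision_recall_fscore_support(y, yhat, average="binary")
    return {"accuracy": accuracy_score(y, yhat),
            "precision": p, "recall": r, "f1": f1}

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="ebert-cb", num_train_epochs=1,
                                         per_device_train_batch_size=2),
                  train_dataset=data, eval_dataset=data,
                  compute_metrics=metrics)
trainer.train()
print(trainer.evaluate())
```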
Funding: This study was supported by the National Key R&D Program of China (2017YFC1700303).
Abstract: Background: The medical records of traditional Chinese medicine (TCM) contain numerous synonymous terms with different descriptions, which is not conducive to computer-aided data mining of TCM. However, there is a lack of models available to normalize synonymous TCM terms. Therefore, construction of a synonymous term conversion (STC) model for normalizing synonymous TCM terms is necessary. Methods: Based on the neural networks of bidirectional encoder representations from transformers (BERT), four types of TCM STC models were designed: models based on BERT and text classification, text sequence generation, named entity recognition, and text matching. The superior STC model was selected on the basis of its performance in converting synonymous terms. Moreover, three inconsistency-based misjudgment inspection methods for the STC model's conversion results were proposed to find incorrect term conversions: neuron random deactivation, output comparison of multiple isomorphic models, and output comparison of multiple heterogeneous models (OCMH). Results: The classification-based STC model outperformed the other STC task models, achieving F1 scores of 0.91, 0.91, and 0.83 on the symptom, pattern, and treatment STC tasks, respectively. The OCMH method performed best in misjudgment inspection, detecting incorrect conversions in the symptom, pattern, and treatment results at rates of 0.80, 0.84, and 0.90, respectively. Conclusion: The classification-based TCM STC model achieved superior performance in converting synonymous terms for symptoms, patterns, and treatments, and the OCMH-based misjudgment inspection method showed superior performance in identifying incorrect outputs.
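A schematic sketch of the OCMH idea: run several heterogeneous models on the same term and flag the conversion for inspection when their outputs disagree. The stand-in "models" and the example terms below are illustrative assumptions, not the paper's networks.

```python
from collections import Counter

def ocmh_check(term: str, models: list) -> tuple[str, bool]:
    """Run heterogeneous STC models on one term; return the majority output
    and whether the conversion should be flagged for manual review."""
    outputs = [m(term) for m in models]
    winner, votes = Counter(outputs).most_common(1)[0]
    suspect = votes < len(models)          # any disagreement -> inspect
    return winner, suspect

# Stand-in "models": in practice these would be distinct fine-tuned networks.
models = [lambda t: {"寐差": "失眠"}.get(t, t),
          lambda t: {"寐差": "失眠"}.get(t, t),
          lambda t: {"寐差": "不寐"}.get(t, t)]
print(ocmh_check("寐差", models))           # ('失眠', True) -> flagged as inconsistent
```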
Abstract: To address the low accuracy of the bi-directional long short-term memory-conditional random field (BiLSTM-CRF) model and the inability of its vectors to represent context, an improved Chinese named entity recognition model is proposed. A pruned bidirectional encoder representations from transformers (BERT) model produces semantic vectors that incorporate contextual information; these are fed into a bidirectional gated recurrent unit (BiGRU) network and a multi-head self-attention layer to capture the global and local features of the sequence; a conditional random field (CRF) layer then decodes and labels the sequence to extract named entities. Experiments on the People's Daily and Microsoft Research Asia (MSRA) datasets show that the improved model gains in both recognition quality and speed; analysis of BERT's internal mechanism indicates that the model relies mainly on phrase and syntax information learned in its lower and middle layers to perform the named entity recognition (NER) task.
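A sketch of the BiGRU plus multi-head self-attention block that sits between BERT and the CRF; the dimensions and head count are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class BiGRUSelfAtt(nn.Module):
    """BiGRU for local sequential features, multi-head self-attention for global ones."""
    def __init__(self, in_dim: int = 768, hidden: int = 128, heads: int = 4):
        super().__init__()
        self.bigru = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        self.att = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)

    def forward(self, x, pad_mask=None):   # x: (batch, seq, in_dim) from BERT
        h, _ = self.bigru(x)               # local, order-sensitive features
        g, _ = self.att(h, h, h, key_padding_mask=pad_mask)  # global token mixing
        return g                           # emissions for a downstream CRF layer

layer = BiGRUSelfAtt()
out = layer(torch.randn(2, 20, 768))
print(out.shape)                           # torch.Size([2, 20, 256])
```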
Abstract: Existing aspect-level sentiment analysis models based on BERT (bidirectional encoder representations from transformers) use only the output of BERT's final hidden layer and ignore the semantic information in its intermediate hidden layers, leaving the available information underused. An aspect-level sentiment analysis model that fuses BERT's intermediate hidden layers is therefore proposed. First, the review and the aspect are concatenated into a sentence pair and fed into BERT, whose self-attention mechanism links the review to the aspect. Second, a gated convolutional neural network (GCNN) extracts features from the word-vector matrices output by all of BERT's hidden layers; the extracted features are max-pooled and concatenated into a feature sequence. Third, a bidirectional gated recurrent unit (BiGRU) network fuses the feature sequence, encoding information from BERT's different hidden layers. Finally, an attention mechanism weights the features by their relevance to the aspect. Experiments on the public SemEval2014 Task 4 review dataset show that the proposed model outperforms BERT, CapsBERT (capsule BERT), BERT-PT (BERT post train), BERT-LSTM (BERT long short-term memory), and other baselines in both accuracy and F1, demonstrating good sentiment classification performance.
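A sketch of the first stage of the fusion: collecting every BERT hidden layer with output_hidden_states=True and max-pooling each layer over tokens. The GCNN, BiGRU, and attention stages the abstract describes are omitted, and the checkpoint and sentence pair are assumptions.

```python
import torch
from transformers import BertModel, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

# Sentence pair: review + aspect, linked through BERT's self-attention.
batch = tok("the pizza was great but service was slow", "service",
            return_tensors="pt")
out = bert(**batch, output_hidden_states=True)   # all layers, not just the last

layers = torch.stack(out.hidden_states[1:], dim=0)  # 12 layers: (12, 1, seq, 768)
pooled = layers.max(dim=2).values                   # max-pool over tokens per layer
features = pooled.transpose(0, 1).flatten(1)        # concatenate layer features
print(features.shape)                               # torch.Size([1, 9216])
```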