Journal Articles
182 articles found
1. Research on the Application of the ALBERT Pre-trained Model to Named Entity Recognition in Medical Documents
Authors: 庞秋奔, 李银. 《信息与电脑》, 2024, Issue 6, pp. 152-156 (5 pages)
Chinese electronic medical record (EMR) named entity recognition has mainly studied datasets of EMR progress notes; this article instead proposes named entity recognition on a dataset of surgical anesthesia documents. By fine-tuning the A Lite Bidirectional Encoder Representations from Transformers (ALBERT) pre-trained model on the dataset and training with the Trainer API in the Transformers library, the method recognizes named entities for surgical anesthesia events in these documents and obtains the values of complex anesthesia quality-control indicators. The article offers a reusable approach to named entity recognition in surgical anesthesia documents and provides a new solution for computing complex anesthesia quality-control indicator values.
Keywords: named entity recognition; A Lite Bidirectional Encoder Representations from Transformers (ALBERT) model; Transformers; anesthesia quality-control indicators; surgical anesthesia documents
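The training recipe the abstract names (fine-tuning ALBERT for token classification with the Trainer API from the Transformers library) can be sketched as follows; the checkpoint name, the tag set, and the one-sentence toy dataset are illustrative assumptions, not details from the paper:

```python
# Minimal sketch: fine-tune a Chinese ALBERT checkpoint for NER with the HF Trainer.
import torch
from transformers import (BertTokenizerFast, AlbertForTokenClassification,
                          TrainingArguments, Trainer)

labels = ["O", "B-EVENT", "I-EVENT"]            # hypothetical anesthesia-event tags
name = "voidful/albert_chinese_base"            # assumed checkpoint; Chinese ALBERT
tok = BertTokenizerFast.from_pretrained(name)   # models typically ship a BERT vocab
model = AlbertForTokenClassification.from_pretrained(name, num_labels=len(labels))

# One toy training sentence, character-tagged ("anesthesia induction begins").
text = ["麻", "醉", "诱", "导", "开", "始"]
tags = [1, 2, 2, 2, 0, 0]
enc = tok(text, is_split_into_words=True, truncation=True)
enc["labels"] = [tags[w] if w is not None else -100 for w in enc.word_ids()]

class ToyDs(torch.utils.data.Dataset):
    def __len__(self): return 1
    def __getitem__(self, i): return {k: torch.tensor(v) for k, v in enc.items()}

trainer = Trainer(model=model,
                  args=TrainingArguments("albert-anesthesia-ner",
                                         num_train_epochs=1,
                                         per_device_train_batch_size=1),
                  train_dataset=ToyDs())
trainer.train()
```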
2. ALBERT-Based Named Entity Recognition for Cyber Threat Intelligence (Cited by: 1)
Authors: 周景贤, 王曾琪. 《陕西科技大学学报》 (北大核心), 2023, Issue 1, pp. 187-195 (9 pages)
Entity recognition is key to cyber threat intelligence analysis. Traditional word embeddings cannot represent polysemy and therefore struggle to identify the key entity information in threat intelligence, while the exponential growth of intelligence data urgently demands more efficient recognition models. To address these problems, an ALBERT-based named entity recognition model for cyber threat intelligence is proposed. The model first uses ALBERT to extract dynamic feature word vectors from threat intelligence, then feeds the feature vectors into a bidirectional long short-term memory (BiLSTM) layer to obtain a label for each word in the sentence, and finally corrects the results in a conditional random field (CRF) layer, which outputs the maximum-probability label sequence. Comparative experiments show that the proposed model achieves an F1 value of 92.21%, clearly outperforming the other models. At the same recognition accuracy, its time and resource costs are also lower, making it suitable for large-scale, efficient entity recognition in the cyber threat intelligence domain.
Keywords: cyber threat intelligence; named entity recognition; BERT; ALBERT; bidirectional long short-term memory network; conditional random field
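A minimal sketch (not the authors' code) of the ALBERT to BiLSTM to CRF stack, assuming the `pytorch-crf` package and a generic ALBERT checkpoint:

```python
import torch
import torch.nn as nn
from torchcrf import CRF                      # pip install pytorch-crf
from transformers import AlbertModel

class AlbertBiLstmCrf(nn.Module):
    def __init__(self, num_tags, hidden=128, name="albert-base-v2"):
        super().__init__()
        self.encoder = AlbertModel.from_pretrained(name)   # checkpoint is an assumption
        dim = self.encoder.config.hidden_size
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_tags)          # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        x = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.fc(self.lstm(x)[0])
        mask = attention_mask.bool()
        if tags is not None:                               # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        return self.crf.decode(emissions, mask=mask)       # inference: best tag sequence
```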
3. A Machine Reading Comprehension Model Based on MacBERT and Adversarial Training
Authors: 周昭辰, 方清茂, 吴晓红, 胡平, 何小海. 《计算机工程》 (CAS, CSCD, 北大核心), 2024, Issue 5, pp. 41-50 (10 pages)
Machine reading comprehension aims to let machines understand natural language text the way humans do and perform question answering based on it. In recent years, with the development of deep learning and large-scale datasets, machine reading comprehension has attracted wide attention, but in practical applications the input questions usually contain various kinds of noise and interference that affect the model's predictions. To improve the model's generalization and robustness, a machine reading comprehension model based on MacBERT (BERT with masked language model as correction) and adversarial training (AT) is proposed. First, MacBERT performs word embedding on the input question and text, converting them into vector representations; then, based on the gradient changes from backpropagation of the original samples, small perturbations are added to the original word vectors to generate adversarial samples; finally, the original and adversarial samples are fed into a bidirectional long short-term memory (BiLSTM) network to further extract contextual features and output the predicted answer. Experimental results show that the model improves the F1 and exact match (EM) values over the baseline by 1.39 and 3.85 percentage points on the simplified-Chinese dataset CMRC2018, by 1.22 and 1.71 percentage points on the traditional-Chinese dataset DRCD, and by 2.86 and 1.85 percentage points on the English dataset SQuADv1.1, outperforming most existing machine reading comprehension models. Comparisons with the baseline on real question-answering results further verify that the model has stronger robustness and generalization and performs better when the input questions contain noise.
Keywords: machine reading comprehension; adversarial training; pre-trained model; BERT with masked language model as correction (MacBERT); bidirectional long short-term memory network
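The gradient-based perturbation step the abstract describes matches the well-known FGM scheme; here is a minimal sketch, with the epsilon value and the embedding-layer name as assumptions:

```python
# Minimal FGM-style adversarial training on word embeddings.
import torch

class FGM:
    """Adds an L2-normalized gradient perturbation to the embedding weights."""
    def __init__(self, model, eps=1.0, emb_name="word_embeddings"):
        self.model, self.eps, self.emb_name = model, eps, emb_name
        self.backup = {}

    def attack(self):
        for name, p in self.model.named_parameters():
            if p.requires_grad and self.emb_name in name and p.grad is not None:
                self.backup[name] = p.data.clone()
                norm = torch.norm(p.grad)
                if norm and not torch.isnan(norm):
                    p.data.add_(self.eps * p.grad / norm)   # perturb along the gradient

    def restore(self):
        for name, p in self.model.named_parameters():
            if name in self.backup:
                p.data = self.backup[name]
        self.backup = {}

# Usage inside one training step:
#   loss = model(**batch).loss; loss.backward()       # gradients on clean samples
#   fgm.attack(); model(**batch).loss.backward()      # accumulate adversarial gradients
#   fgm.restore(); optimizer.step(); optimizer.zero_grad()
```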
4. BSTFNet: An Encrypted Malicious Traffic Classification Method Integrating Global Semantic and Spatiotemporal Features (Cited by: 1)
Authors: Hong Huang, Xingxing Zhang, Ye Lu, Ze Li, Shaohua Zhou. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 3, pp. 3929-3951 (23 pages)
While encryption technology safeguards the security of network communications, malicious traffic also uses encryption protocols to obscure its malicious behavior. To address the issues of traditional machine learning methods relying on expert experience and the insufficient representation capabilities of existing deep learning methods for encrypted malicious traffic, we propose an encrypted malicious traffic classification method that integrates global semantic features with local spatiotemporal features, called BERT-based Spatio-Temporal Features Network (BSTFNet). At the packet-level granularity, the model captures the global semantic features of packets through the attention mechanism of the Bidirectional Encoder Representations from Transformers (BERT) model. At the byte-level granularity, we initially employ the Bidirectional Gated Recurrent Unit (BiGRU) model to extract temporal features from bytes, followed by the utilization of the Text Convolutional Neural Network (TextCNN) model with multi-sized convolution kernels to extract local multi-receptive-field spatial features. The fusion of features from both granularities serves as the ultimate multidimensional representation of malicious traffic. Our approach achieves accuracy and F1-score of 99.39% and 99.40%, respectively, on the publicly available USTC-TFC2016 dataset, and effectively reduces sample confusion within the Neris and Virut categories. The experimental results demonstrate that our method has outstanding representation and classification capabilities for encrypted malicious traffic.
Keywords: encrypted malicious traffic classification; bidirectional encoder representations from transformers; text convolutional neural network; bidirectional gated recurrent unit
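The byte-level branch described above (a BiGRU for temporal features followed by multi-kernel TextCNN convolutions for local spatial features) can be sketched as follows; the embedding size, hidden size, and kernel widths are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ByteBranch(nn.Module):
    def __init__(self, vocab=256, emb=64, hidden=64, kernels=(3, 5, 7), ch=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)                  # one embedding per byte value
        self.gru = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.convs = nn.ModuleList(
            nn.Conv1d(2 * hidden, ch, k, padding=k // 2) for k in kernels)

    def forward(self, byte_ids):                             # (batch, seq_len) of 0..255
        t, _ = self.gru(self.emb(byte_ids))                  # temporal features
        t = t.transpose(1, 2)                                # -> (batch, feat, seq)
        spatial = [torch.relu(c(t)).max(dim=2).values for c in self.convs]
        return torch.cat(spatial, dim=1)                     # multi-receptive-field vector

x = torch.randint(0, 256, (8, 784))                          # 8 flows of 784 bytes each
print(ByteBranch()(x).shape)                                 # torch.Size([8, 192])
```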
5. Traditional Chinese Medicine Synonymous Term Conversion: A Bidirectional Encoder Representations from Transformers-Based Model for Converting Synonymous Terms in Traditional Chinese Medicine
Authors: Lu Zhou, Chao-Yong Wu, Xi-Ting Wang, Shuang-Qiao Liu, Yi-Zhuo Zhang, Yue-Meng Sun, Jian Cui, Cai-Yan Li, Hui-Min Yuan, Yan Sun, Feng-Jie Zheng, Feng-Qin Xu, Yu-Hang Li. 《World Journal of Traditional Chinese Medicine》 (CAS, CSCD), 2023, Issue 2, pp. 224-233 (10 pages)
Background: The medical records of traditional Chinese medicine (TCM) contain numerous synonymous terms with different descriptions, which is not conducive to computer-aided data mining of TCM. However, there is a lack of models available to normalize synonymous TCM terms. Therefore, construction of a synonymous term conversion (STC) model for normalizing synonymous TCM terms is necessary. Methods: Based on the neural networks of bidirectional encoder representations from transformers (BERT), four types of TCM STC models were designed: models based on BERT and text classification, text sequence generation, named entity recognition, and text matching. The superior STC model was selected on the basis of its performance in converting synonymous terms. Moreover, three misjudgment inspection methods for the conversion results of the STC model, based on inconsistency, were proposed to find incorrect term conversions: neuron random deactivation, output comparison of multiple isomorphic models, and output comparison of multiple heterogeneous models (OCMH). Results: The classification-based STC model outperformed the other STC task models. It achieved F1 scores of 0.91, 0.91, and 0.83 on the symptoms, patterns, and treatments STC tasks, respectively. The OCMH method showed the best performance in misjudgment inspection, with wrong detection rates of 0.80, 0.84, and 0.90 in the term conversion results for symptoms, patterns, and treatments, respectively. Conclusion: The TCM STC model based on classification achieved superior performance in converting synonymous terms for symptoms, patterns, and treatments. The misjudgment inspection method based on OCMH showed superior performance in identifying incorrect outputs.
Keywords: bidirectional encoder representations from transformers; misjudgment inspection; synonymous term conversion; traditional Chinese medicine
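The OCMH idea (flagging a conversion when heterogeneous models disagree with the deployed output) can be illustrated with a minimal sketch; the predictor functions and the TCM terms below are toy stand-ins, not the paper's models or data:

```python
from collections import Counter

def flag_misjudgment(term, deployed_predict, reference_predicts):
    """Flag a term conversion when heterogeneous models disagree with the deployed one."""
    deployed = deployed_predict(term)
    votes = [predict(term) for predict in reference_predicts]
    majority, _ = Counter(votes).most_common(1)[0]
    # Inconsistency between the deployed output and the heterogeneous majority
    # marks this conversion as a candidate misjudgment for manual review.
    return majority != deployed, deployed, votes

# Toy stand-ins for three heterogeneous fine-tuned classifiers:
models = [lambda t: "痞满", lambda t: "痞满", lambda t: "胃痞"]
print(flag_misjudgment("心下痞", lambda t: "胃痞", models))
# -> (True, '胃痞', ['痞满', '痞满', '胃痞'])  disagreement flags the output
```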
6. Text Augmentation-Based Model for Emotion Recognition Using Transformers
Authors: Fida Mohammad, Mukhtaj Khan, Safdar Nawaz Khan Marwat, Naveed Jan, Neelam Gohar, Muhammad Bilal, Amal Al-Rasheed. 《Computers, Materials & Continua》 (SCIE, EI), 2023, Issue 9, pp. 3523-3547 (25 pages)
Emotion Recognition in Conversations (ERC) is fundamental in creating emotionally intelligent machines. Graph-Based Network (GBN) models have gained popularity in detecting conversational contexts for ERC tasks. However, their limited ability to collect and acquire contextual information hinders their effectiveness. We propose a Text Augmentation-based computational model for recognizing emotions using transformers (TA-MERT) to address this. The proposed model uses the Multimodal Emotion Lines Dataset (MELD), which ensures a balanced representation for recognizing human emotions. The model uses text augmentation techniques to produce more training data, improving the proposed model's accuracy. Transformer encoders train the deep neural network (DNN) model, especially Bidirectional Encoder (BE) representations that capture both forward and backward contextual information. This integration improves the accuracy and robustness of the proposed model. Furthermore, we present a method for balancing the training dataset by creating enhanced samples from the original dataset. By balancing the dataset across all emotion categories, we can lessen the adverse effects of data imbalance on the accuracy of the proposed model. Experimental results on the MELD dataset show that TA-MERT outperforms earlier methods, achieving a weighted F1 score of 62.60% and an accuracy of 64.36%. Overall, the proposed TA-MERT model solves the GBN models' weaknesses in obtaining contextual data for ERC. The TA-MERT model recognizes human emotions more accurately by employing text augmentation and transformer-based encoding. The balanced dataset and the additional training samples also enhance its resilience. These findings highlight the significance of transformer-based approaches for emotion recognition in conversations.
Keywords: emotion recognition in conversation; graph-based network; text augmentation-based model; multimodal emotion lines dataset; bidirectional encoder representation for transformer
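A minimal sketch of balancing a dataset by generating augmented copies of minority-class texts; the random-swap augmenter is an illustrative stand-in, since the abstract does not specify the exact augmentation operators:

```python
import random
from collections import Counter

def random_swap(text, n_swaps=1, seed=0):
    """Return a lightly perturbed copy of `text` with word positions swapped."""
    rng = random.Random(seed)
    words = text.split()
    for _ in range(n_swaps):
        if len(words) < 2:
            break
        i, j = rng.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return " ".join(words)

def balance(samples):
    """samples: list of (text, label). Upsample minority classes with augmented copies."""
    by_label = {}
    for text, label in samples:
        by_label.setdefault(label, []).append(text)
    target = max(len(texts) for texts in by_label.values())
    out = list(samples)
    for label, texts in by_label.items():
        for i in range(target - len(texts)):
            out.append((random_swap(texts[i % len(texts)], seed=i), label))
    return out

data = [("i am so happy today", "joy"),
        ("what a great surprise", "joy"),
        ("this is awful", "anger")]
print(Counter(label for _, label in balance(data)))  # Counter({'joy': 2, 'anger': 2})
```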
7. Best Answerer Recommendation in Q&A Communities Based on BERT-GAT Representation Learning
Authors: 夏文宗, 赵海燕, 曹健, 陈庆奎. 《小型微型计算机系统》 (CSCD, 北大核心), 2024, Issue 7, pp. 1656-1662 (7 pages)
Large numbers of new questions appear in Q&A communities every day; recommending suitable answerers for new questions helps them get resolved faster and promotes community development. However, most current best-answerer recommendation is based on users' historical reply records or text matching, while whether a user answers a given question depends on multiple factors, in particular whether the question matches the knowledge domains the user is good at. This paper therefore builds a community knowledge corpus from users' answer texts and fine-tunes a BERT model on it, then combines auxiliary information such as users' community behavior records and answer upvote counts to recommend best answerers with a LightGBM model. In the experiments, prediction results are analyzed with the Precision, MRR, and Hit metrics, and the results show that the proposed BERT-GAT representation learning-based LightGBM best-answerer recommendation model achieves good performance on three popular StackExchange communities.
Keywords: Q&A community; graph attention network; BERT; comment network; LightGBM
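The ranking stage can be sketched with LightGBM over features that pair question-answerer text similarity with behavioral signals; the feature set and the synthetic data below are illustrative assumptions, not the paper's exact inputs:

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.random(n),            # cosine similarity of BERT(question) and user profile
    rng.poisson(20, n),       # user's historical answer count
    rng.poisson(5, n),        # user's total upvotes in the question's tag
])
# Synthetic label: users similar to the question and well-upvoted tend to answer.
y = (X[:, 0] + 0.02 * X[:, 2] + rng.normal(0, 0.3, n) > 0.8).astype(int)

model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X, y)

# Rank candidate answerers for one question by predicted answering probability.
scores = model.predict_proba(X[:10])[:, 1]
print(np.argsort(scores)[::-1])
```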
8. Enhanced Topic-Aware Summarization Using Statistical Graph Neural Networks
Authors: Ayesha Khaliq, Salman Afsar Awan, Fahad Ahmad, Muhammad Azam Zia, Muhammad Zafar Iqbal. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 8, pp. 3221-3242 (22 pages)
The rapid expansion of online content and big data has precipitated an urgent need for efficient summarization techniques to swiftly comprehend vast textual documents without compromising their original integrity. Current approaches in Extractive Text Summarization (ETS) leverage the modeling of inter-sentence relationships, a task of paramount importance in producing coherent summaries. This study introduces an innovative model that integrates Graph Attention Networks (GATs) with Transformer-based Bidirectional Encoder Representations from Transformers (BERT) and Latent Dirichlet Allocation (LDA), further enhanced by Term Frequency-Inverse Document Frequency (TF-IDF) values, to improve sentence selection by capturing comprehensive topical information. Our approach constructs a graph with nodes representing sentences, words, and topics, thereby elevating the interconnectivity and enabling a more refined understanding of text structures. The model is extended from Single-Document Summarization to Multi-Document Summarization (MDS), offering significant improvements over existing models such as THGS-GMM and Topic-GraphSum, as demonstrated by empirical evaluations on benchmark news datasets like Cable News Network (CNN)/Daily Mail (DM) and Multi-News. The results consistently demonstrate superior performance, showcasing the model's robustness in handling complex summarization tasks across single- and multi-document contexts. This research not only advances the integration of BERT and LDA within GATs but also emphasizes our model's capacity to effectively manage global information and adapt to diverse summarization challenges.
Keywords: summarization; graph attention network; bidirectional encoder representations from transformers; latent Dirichlet allocation; term frequency-inverse document frequency
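A minimal sketch of the topic-aware scoring intuition, using scikit-learn's LDA and TF-IDF on toy sentences: sentences aligned with the document's dominant topic and carrying high TF-IDF mass score highest. The paper's actual model is a GAT over a sentence-word-topic graph, which this deliberately simplifies:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

sentences = ["The central bank raised interest rates again.",
             "Inflation pressure keeps mortgage costs climbing.",
             "The home team celebrated a dramatic late win."]

counts = CountVectorizer(stop_words="english")
X = counts.fit_transform(sentences)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topic = lda.transform(X)                       # per-sentence topic mixture

tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
salience = np.asarray(tfidf.sum(axis=1)).ravel()   # TF-IDF mass per sentence

main_topic = doc_topic.mean(axis=0).argmax()       # document's dominant topic
scores = doc_topic[:, main_topic] * salience       # topic alignment x term salience
print([sentences[i] for i in scores.argsort()[::-1][:2]])   # 2-sentence summary
```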
9. Enhancing Arabic Cyberbullying Detection with End-to-End Transformer Model
Authors: Mohamed A. Mahdi, Suliman Mohamed Fati, Mohamed A. G. Hazber, Shahanawaj Ahamad, Sawsan A. Saad. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, Issue 11, pp. 1651-1671 (21 pages)
Cyberbullying, a critical concern for digital safety, necessitates effective linguistic analysis tools that can navigate the complexities of language use in online spaces. To tackle this challenge, our study introduces a new approach employing the Bidirectional Encoder Representations from Transformers (BERT) base model (cased), originally pretrained in English. This model is uniquely adapted to recognize the intricate nuances of Arabic online communication, a key aspect often overlooked in conventional cyberbullying detection methods. Our model is an end-to-end solution that has been fine-tuned on a diverse dataset of Arabic social media (SM) tweets. Experimental results on a diverse Arabic dataset collected from the 'X platform' demonstrate a notable increase in detection accuracy and sensitivity compared to existing methods. E-BERT shows a substantial improvement in performance, evidenced by an accuracy of 98.45%, precision of 99.17%, recall of 99.10%, and an F1 score of 99.14%. The proposed E-BERT not only addresses a critical gap in cyberbullying detection in Arabic online forums but also sets a precedent for applying cross-lingual pretrained models in regional language applications, offering a scalable and effective framework for enhancing online safety across Arabic-speaking communities.
Keywords: cyberbullying; offensive detection; bidirectional encoder representations from transformers (BERT); continuous bag of words; social media; natural language processing
10. Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
Authors: R. Sujatha, K. Nimala. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 2, pp. 1669-1686 (18 pages)
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder for Representation of Transformer (BERT), Robustly Optimized BERT pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus with tuned hyperparameters. The hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models. The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
Keywords: bidirectional encoder for representation of transformer; conversation; ensemble model; fine-tuning; generalized autoregressive pretraining for language understanding; generative pre-trained transformer; hyperparameter tuning; natural language processing; robustly optimized BERT pretraining approach; sentence classification; transformer models
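A minimal sketch of soft-voting over several fine-tuned transformer classifiers; the local checkpoint paths and the four-label set are assumptions based on the abstract, and in practice the models would be loaded once rather than per call:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["information", "question", "directive", "commission"]
CHECKPOINTS = ["./bert-conv", "./roberta-conv", "./xlnet-conv"]  # assumed local fine-tunes

@torch.no_grad()
def ensemble_predict(sentence):
    probs = None
    for ckpt in CHECKPOINTS:
        tok = AutoTokenizer.from_pretrained(ckpt)
        model = AutoModelForSequenceClassification.from_pretrained(ckpt).eval()
        logits = model(**tok(sentence, return_tensors="pt")).logits
        p = logits.softmax(dim=-1)
        probs = p if probs is None else probs + p    # soft voting: average probabilities
    return LABELS[int((probs / len(CHECKPOINTS)).argmax())]
```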
11. Chinese Text Sentiment Analysis Based on the BERT-BGRU-Att Model (Cited by: 1)
Authors: 胡俊玮, 于青. 《天津理工大学学报》, 2024, Issue 3, pp. 85-90 (6 pages)
In traditional sentiment analysis tasks, word vectors generated with the Word2Vec (word to vector) model cannot effectively represent polysemous words, and classic neural network models cannot fully extract semantic features. To address this, a BERT-BGRU-Att Chinese text sentiment analysis model is proposed, based on BERT (bidirectional encoder representations from transformers), a bidirectional gated recurrent unit (BGRU), and an attention mechanism (Att). First, the BERT pre-trained model converts the Chinese text into matrix-vector representations; then a BGRU neural network fused with the attention mechanism extracts features from the text; finally, the weighted feature information is fed into a Softmax classifier for prediction. Experiments on an online review dataset show that the model's predictions outperform related network models, demonstrating the model's effectiveness for Chinese text sentiment analysis.
Keywords: sentiment analysis; BERT; bidirectional gated recurrent unit; attention mechanism
12. Classical Chinese Named Entity Recognition Fusing the SikuBERT Model with MHA (Cited by: 1)
Authors: 陈雪松, 詹子依, 王浩畅. 《吉林大学学报(信息科学版)》 (CAS), 2023, Issue 5, pp. 866-875 (10 pages)
Traditional named entity recognition methods cannot fully learn the complex sentence-structure information of Classical Chinese and are prone to information loss when extracting features from long sequences. To address these problems, a Classical Chinese named entity recognition method fusing the SikuBERT (Siku Bidirectional Encoder Representation from Transformers) model with MHA (Multi-Head Attention) is proposed. First, the SikuBERT model is pre-trained on a Classical Chinese corpus, and the resulting information vectors are fed into a BiLSTM (Bidirectional Long Short-Term Memory) network to extract features; the output features of the BiLSTM layer are then assigned different weights through MHA to reduce the information loss of long sequences; finally, CRF (Conditional Random Field) decoding yields the predicted sequence labels. Experiments show that, compared with commonly used models such as BiLSTM-CRF and BERT-BiLSTM-CRF, the method significantly improves the F1 value, demonstrating that it can effectively improve Classical Chinese named entity recognition.
Keywords: Classical Chinese; named entity recognition; SikuBERT model; multi-head attention mechanism
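The MHA-over-BiLSTM stage can be sketched with PyTorch's built-in multi-head attention; the dimensions are assumptions, and the CRF decoding layer is omitted here:

```python
import torch
import torch.nn as nn

class BiLstmMha(nn.Module):
    def __init__(self, emb_dim=768, hidden=128, heads=4):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.mha = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)

    def forward(self, x, pad_mask=None):          # x: (batch, seq, emb) from SikuBERT
        h, _ = self.lstm(x)
        # Self-attention re-weights the whole sequence, counteracting the
        # long-range information loss the abstract describes.
        out, _ = self.mha(h, h, h, key_padding_mask=pad_mask)
        return out

x = torch.randn(2, 50, 768)                       # stand-in for SikuBERT token vectors
print(BiLstmMha()(x).shape)                       # torch.Size([2, 50, 256])
```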
13. A Mathematical Text Classification Method Based on MCA-BERT (Cited by: 2)
Authors: 杨先凤, 龚睿, 李自强. 《计算机工程与设计》 (北大核心), 2023, Issue 8, pp. 2312-2319 (8 pages)
To maximize the effectiveness of mathematical text classification, a mathematical text dataset is constructed and analyzed, and a Multi-Channel Attention BERT (MCA-BERT) model that enhances textual entity information is proposed. Sentence-level entity information is obtained by average-pooling Word2vec word vectors, and word-level entity information is obtained by assigning different weights to different words through an attention mechanism. The two kinds of entity information are concatenated with the contextual information output by BERT, and a Softmax layer produces the classification result. On the mathematical text dataset, the method improves the F1 value by 2.1 percentage points over the single-channel BERT method. The experimental results show that the method effectively enhances textual entity information and achieves better classification results.
Keywords: mathematical text classification; entity information; attention mechanism; multi-channel; bidirectional encoder representations; word vector; classifier
14. An Extractive Summarization Method Based on Deep Q-Learning
Authors: 王灿宇, 孙晓海, 吴叶辉, 季荣彪, 李亚东, 张少如, 杨士豪. 《吉林大学学报(信息科学版)》 (CAS), 2023, Issue 2, pp. 306-314 (9 pages)
To remove the need for sentence-level labels during training, a label-free extractive summarization method based on deep reinforcement learning is proposed: text summarization is cast as a Q-learning problem, and a DQN (Deep Q-Network) learns the Q function. To represent documents effectively, BERT (Bidirectional Encoder Representations from Transformers) serves as the sentence encoder and a Transformer as the document encoder. The decoder fully considers each sentence's information richness, salience, positional importance, and redundancy with respect to the current summary. Because the method needs no sentence-level labels when extracting summaries, annotation effort is significantly reduced. Experimental results show that the method achieves the highest Rouge-L (38.35) and comparable Rouge-1 (42.07) and Rouge-2 (18.32) scores on the CNN (Cable News Network)/DailyMail dataset.
Keywords: extractive text summarization; BERT model; encoder; deep reinforcement learning
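A minimal sketch of the DQN ingredient: a Q-network scores a candidate sentence given the current summary state and is trained with the standard temporal-difference target. The state/action encodings and the reward below are toy assumptions, not the paper's BERT/Transformer encoders:

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim=768, action_dim=768, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))
    def forward(self, state, sent):               # Q(summary state, candidate sentence)
        return self.net(torch.cat([state, sent], dim=-1)).squeeze(-1)

q, q_target = QNet(), QNet()
q_target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-4)
gamma = 0.99

# One TD update on a toy transition (s, a, r, s') with 5 next candidate sentences:
s, a = torch.randn(768), torch.randn(768)
r = torch.tensor(0.4)                             # e.g., ROUGE gain from adding the sentence
s_next, candidates = torch.randn(768), torch.randn(5, 768)

with torch.no_grad():
    best_next = q_target(s_next.expand(5, -1), candidates).max()
loss = (q(s, a) - (r + gamma * best_next)) ** 2
opt.zero_grad(); loss.backward(); opt.step()
```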
15. Geological Named Entity Recognition Combining BERT with a BiGRU-Attention-CRF Model (Cited by: 11)
Authors: 谢雪景, 谢忠, 马凯, 陈建国, 邱芹军, 李虎, 潘声勇, 陶留锋. 《地质通报》 (CAS, CSCD, 北大核心), 2023, Issue 5, pp. 846-855 (10 pages)
Extracting geological named entities from geological texts is of great significance for the deep mining and application of geological big data. This paper defines the concept of geological named entities, formulates annotation guidelines, and designs an object-based representation model for geological entities. Geological texts contain many long entities and complex nested entities, which makes geological named entity recognition more challenging. To address these problems, (1) a BERT model is introduced to generate high-quality, context-aware word vector representations; (2) a BiGRU-Attention-CRF stack (bidirectional gated recurrent unit, attention mechanism, conditional random field) performs sequence labeling and decoding on the semantic encodings output by the previous layer. Compared with mainstream deep learning models, the model achieves an F1 value of 84.02%, outperforming all of them, and achieves good recognition results on a small-scale geological corpus.
Keywords: named entity recognition; geological named entities; BERT; attention mechanism; BiGRU
16. A Text Sentiment Analysis Model Based on BERT and Loc-Attention (Cited by: 1)
Authors: 何传鹏, 黄勃, 周科亮, 尹玲, 王明胜, 李佩佩. 《传感器与微系统》 (CSCD, 北大核心), 2023, Issue 12, pp. 146-150 (5 pages)
Traditional sentiment analysis methods do not attend to the positional (Loc) relationship between the text and topic words, so their classification results are unsatisfactory. A text sentiment analysis method based on BERT, LDA, location attention (Loc-Attention), and a bidirectional long short-term memory (Bi-LSTM) model, called BL-LABL, is proposed. The LDA topic model first obtains each review's topics and their word distributions; the selected topic words are concatenated with the original text and fed into the BERT model for word-vector training, yielding text word vectors that contain topic information and topic word vectors that contain text information. A Bi-LSTM network then incorporates the text's position weights, and the final text feature representation is the weighted sum of the two, combined with attention weights. Finally, a SoftMax classifier determines the text's sentiment class. Experiments on two datasets show that, compared with traditional attention-based sentiment classification models, the model effectively improves classification performance.
Keywords: sentiment analysis; topic model; BERT model; text features; position weight; attention
17. A Novel Named Entity Recognition Scheme for Steel E-Commerce Platforms Using a Lite BERT (Cited by: 1)
Authors: Maojian Chen, Xiong Luo, Hailun Shen, Ziyang Huang, Qiaojuan Peng. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2021, Issue 10, pp. 47-63 (17 pages)
In the era of big data, E-commerce plays an increasingly important role, and steel E-commerce certainly occupies a positive position. However, it is very difficult for purchasing staff to choose satisfactory steel raw materials from the diverse steel commodities available online on steel E-commerce platforms. In order to improve the efficiency of purchasers searching for commodities on steel E-commerce platforms, we propose a novel deep learning-based loss function for named entity recognition (NER). Considering the impacts of small samples and imbalanced data, our NER scheme incorporates the focal loss, label smoothing, and cross entropy into a lite bidirectional encoder representations from transformers (BERT) model to avoid over-fitting. Moreover, through the analysis of different classic annotation techniques used to tag data, an ideal one is chosen for training the model in our proposed scheme. Experiments are conducted on Chinese steel E-commerce datasets. The experimental results show that the training time of the A Lite BERT (ALBERT)-based method is much shorter than that of BERT-based models, while achieving similar computational performance in terms of precision, recall, and F1. Meanwhile, our proposed approach performs much better than the combination of Word2Vec, bidirectional long short-term memory (Bi-LSTM), and conditional random field (CRF) models, in consideration of training time and F1.
Keywords: named entity recognition; bidirectional encoder representations from transformers; steel E-commerce platform; annotation technique
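The loss design named in the abstract (focal loss plus label smoothing on top of cross entropy) can be sketched as follows; how the two terms are combined is an assumption, since the abstract does not give the exact formula:

```python
import torch
import torch.nn.functional as F

def focal_smoothed_loss(logits, targets, gamma=2.0, smoothing=0.1, ignore_index=-100):
    """logits: (N, C); targets: (N,). Down-weights easy tokens, smooths hard targets."""
    valid = targets != ignore_index
    logits, targets = logits[valid], targets[valid]
    logp = F.log_softmax(logits, dim=-1)
    p_true = logp.gather(1, targets.unsqueeze(1)).squeeze(1).exp()

    n_classes = logits.size(1)
    smooth = torch.full_like(logp, smoothing / (n_classes - 1))
    smooth.scatter_(1, targets.unsqueeze(1), 1.0 - smoothing)
    ce_smooth = -(smooth * logp).sum(dim=1)               # label-smoothed cross entropy

    return ((1.0 - p_true) ** gamma * ce_smooth).mean()   # focal modulation

logits = torch.randn(8, 5, requires_grad=True)
targets = torch.tensor([0, 1, 2, 3, 4, 0, -100, 2])       # -100 marks padding tokens
focal_smoothed_loss(logits, targets).backward()
```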
18. Deep-BERT: Transfer Learning for Classifying Multilingual Offensive Texts on Social Media (Cited by: 3)
Authors: Md. Anwar Hussen Wadud, M. F. Mridha, Jungpil Shin, Kamruddin Nur, Aloke Kumar Saha. 《Computer Systems Science & Engineering》 (SCIE, EI), 2023, Issue 2, pp. 1775-1791 (17 pages)
Offensive messages on social media have recently been frequently used to harass and criticize people. In recent studies, many promising algorithms have been developed to identify offensive texts. Most algorithms analyze text in a unidirectional manner, whereas a bidirectional method can maximize performance and capture semantic and contextual information in sentences. In addition, there are many separate models for identifying offensive texts in monolingual and multilingual settings, but few models can detect both monolingual and multilingual offensive texts. In this study, a detection system has been developed for both monolingual and multilingual offensive texts by combining a deep convolutional neural network with bidirectional encoder representations from transformers (Deep-BERT) to identify offensive posts on social media that are used to harass others. This paper explores a variety of ways to deal with multilingualism, including collaborative multilingual and translation-based approaches. Deep-BERT is then tested on the Bengali and English datasets, including the different bidirectional encoder representations from transformers (BERT) pre-trained word-embedding techniques, and the proposed Deep-BERT's efficacy is found to outperform all existing offensive text classification algorithms, reaching an accuracy of 91.83%. The proposed model is a state-of-the-art model that can classify both monolingual and multilingual offensive texts.
Keywords: offensive text classification; deep convolutional neural network (DCNN); bidirectional encoder representations from transformers (BERT); natural language processing (NLP)
19. A Multi-Label Chinese Short Text Classification Method Based on BERT-GAT-CorNet (Cited by: 2)
Authors: 刘新忠, 赵澳庆, 谢文武, 杨志和. 《计算机应用》 (CSCD, 北大核心), 2023, Issue S02, pp. 18-21 (4 pages)
Multi-label text classification is an important part of multi-label classification, yet traditional multi-label text classification algorithms attend only to the text itself, failing to understand deep semantic information and ignoring the relationships among labels. To solve these problems, a multi-label text classification model fusing BERT (Bidirectional Encoder Representations from Transformers), GAT (Graph Attention Network), and CorNet (Correlation Network) is proposed. First, the pre-trained BERT model produces feature vectors for the text, from which graph-structured data is built; a GAT then assigns different weights to different nodes; finally, Softmax-CorNet learns label correlations to enhance the predictions and performs the classification. The model achieves accuracies of 93.3% on the Toutiao sub-dataset (TNEWS) and 83.2% on the KUAKE-QIC dataset, and comparative experiments show that it effectively improves performance on multi-label text classification tasks.
Keywords: multi-label text classification; pre-trained model; graph-structured data; label correlation; BERT; graph attention network; CorNet
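A minimal sketch of a CorNet-style correlation block: the raw label logits are re-estimated through a bottleneck so that correlated labels reinforce each other, with a residual connection preserving the original predictions. The dimensions are illustrative:

```python
import torch
import torch.nn as nn

class CorNetBlock(nn.Module):
    def __init__(self, n_labels, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(n_labels, bottleneck)
        self.up = nn.Linear(bottleneck, n_labels)
        self.act = nn.ELU()

    def forward(self, logits):
        # Squash logits to label probabilities, mix them through the bottleneck
        # (learned label co-occurrence structure), and add back as a residual.
        z = self.up(self.act(self.down(torch.sigmoid(logits))))
        return logits + z

logits = torch.randn(4, 15)          # raw scores for 15 labels from the BERT-GAT stage
enhanced = CorNetBlock(15)(logits)   # label-correlation-aware scores, same shape
print(enhanced.shape)                # torch.Size([4, 15])
```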
20. End-to-end aspect category sentiment analysis based on type graph convolutional networks
Authors: 邵清, ZHANG Wenshuang, WANG Shaojun. 《High Technology Letters》 (EI, CAS), 2023, Issue 3, pp. 325-334 (10 pages)
In existing aspect category sentiment analysis research, the aspects are mostly given in advance for sentiment extraction; this pipeline method is prone to error accumulation, and existing uses of graph convolutional neural networks for aspect category sentiment analysis do not fully utilize the dependency-type information between words, so they cannot enhance feature extraction. This paper proposes an end-to-end aspect category sentiment analysis (ETESA) model based on type graph convolutional networks. The model uses the bidirectional encoder representation from transformers (BERT) pretraining model to obtain aspect categories and word vectors containing contextual dynamic semantic information, which can solve the problem of polysemy; when using a graph convolutional network (GCN) for feature extraction, fusing the word vectors with an initialization tensor of dependency types yields importance values for the different dependency types and enhances the text feature representation; by transforming aspect category and sentiment pair extraction into multiple single-label classification problems, aspect categories and sentiments can be extracted simultaneously in an end-to-end way, solving the problem of error accumulation. Experiments on three public datasets show that the ETESA model achieves higher Precision, Recall, and F1 values, proving the effectiveness of the model.
Keywords: aspect-based sentiment analysis (ABSA); bidirectional encoder representation from transformers (BERT); type graph convolutional network (TGCN); aspect category and sentiment pair extraction