Journal Articles
75 articles found
Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
1
Authors: R. Sujatha, K. Nimala. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 1669-1686 (18 pages)
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than tasks such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress, and comparing impacts. An ensemble pre-trained language model is taken up here to classify sentences from a conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, and hyperparameter tuning is carried out for better sentence-classification performance. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models, and the ensemble model with fine-tuned parameters achieved an F1 score of 0.88.
Keywords: bidirectional encoder representations from transformers; conversation; ensemble model; fine-tuning; generalized autoregressive pretraining for language understanding; generative pre-trained transformer; hyperparameter tuning; natural language processing; robustly optimized BERT pretraining approach; sentence classification; transformer models
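The ensembling step this abstract describes can be sketched as soft voting over per-model class distributions. The label set follows the four categories in the abstract; the per-model probabilities below are invented numbers for illustration, not outputs of the paper's models.

```python
# Hypothetical soft-voting step for an ensemble of sentence classifiers.
LABELS = ["information", "question", "directive", "commission"]

def ensemble_vote(prob_lists):
    """Average per-model class probabilities and return the top label."""
    n = len(prob_lists)
    avg = [sum(p[i] for p in prob_lists) / n for i in range(len(LABELS))]
    return LABELS[max(range(len(LABELS)), key=avg.__getitem__)]

# Three hypothetical model outputs for one sentence:
probs = [
    [0.1, 0.7, 0.1, 0.1],  # e.g. BERT
    [0.2, 0.5, 0.2, 0.1],  # e.g. RoBERTa
    [0.1, 0.6, 0.2, 0.1],  # e.g. XLNet
]
print(ensemble_vote(probs))  # → question
```

Averaging probabilities (soft voting) uses each model's confidence, which is usually a stronger combiner than majority voting over hard labels.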
Adapter Based on Pre-Trained Language Models for Classification of Medical Text
2
Author: Quan Li. Journal of Electronic Research and Application, 2024, No. 3, pp. 129-134 (6 pages)
We present an approach to automatically classify medical text at the sentence level. Given the inherent complexity of medical text classification, we employ adapters based on pre-trained language models to extract information from medical text, facilitating more accurate classification while minimizing the number of trainable parameters. Extensive experiments conducted on various datasets demonstrate the effectiveness of our approach.
Keywords: classification of medical text; adapter; pre-trained language model
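The adapter idea above inserts small trainable bottleneck modules into a frozen pre-trained model. A minimal sketch of one such module (Houlsby-style down-projection, ReLU, up-projection, residual connection) follows; the dimensions and weights are illustrative, not taken from the paper.

```python
# Minimal bottleneck-adapter sketch: h + W_up @ relu(W_down @ h).
def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(x):
    return [max(0.0, v) for v in x]

def adapter(h, W_down, W_up):
    """Down-project, apply ReLU, up-project, then add the residual h."""
    z = relu(matvec(W_down, h))
    out = matvec(W_up, z)
    return [a + b for a, b in zip(h, out)]

h = [1.0, -2.0, 0.5, 3.0]            # hidden state (dim 4)
W_down = [[0.1, 0.0, 0.0, 0.1],      # 4 -> 2 bottleneck
          [0.0, 0.1, 0.1, 0.0]]
W_up = [[0.5, 0.0], [0.0, 0.5],      # 2 -> 4
        [0.5, 0.0], [0.0, 0.5]]
print(adapter(h, W_down, W_up))
```

Only `W_down` and `W_up` are trained; the surrounding pre-trained weights stay frozen, which is why adapters keep the trainable-parameter count small.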
Sequential Recommendation Method Based on RoBERTa and a Graph-Enhanced Transformer (Cited by: 2)
3
Authors: 王明虎, 石智奎, 苏佳, 张新生. 《计算机工程》 (Computer Engineering), CAS, CSCD, PKU Core, 2024, No. 4, pp. 121-131 (11 pages)
Since the emergence of recommender systems, limited data has constrained the further development of recommendation algorithms. To reduce the impact of data sparsity and improve the utilization of non-rating data, text-aware recommendation models based on neural networks have been proposed one after another, but the mainstream convolutional and recurrent neural networks have clear weaknesses in text semantic understanding and long-distance dependency capture. To better mine the deep latent features between users and items and further improve recommendation quality, a sequential recommendation model based on RoBERTa and a graph-enhanced Transformer (RGT) is proposed. The model introduces review text data. First, the pre-trained RoBERTa model captures word-level semantic features in the reviews to build an initial model of each user's personalized interests. Then, from the historical user-item interactions, a temporally aware item-association graph attention network is constructed, and the graph-enhanced Transformer feeds the item representations learned by the graph model as a sequence into Transformer encoder layers. Finally, the resulting output vectors, the previously captured semantic representations, and the computed whole-graph representation of the item-association graph are fed into a fully connected layer to capture the user's global preferences and predict the user's rating of an item. Experiments on three real public Amazon datasets show that, compared with classic text-aware recommendation models such as DeepFM and ConvMF, the RGT model improves significantly on both RMSE and MAE, with gains of up to 4.7% and 5.3% respectively over the best baseline.
Keywords: recommendation algorithm; review text; RoBERTa model; graph attention mechanism; Transformer mechanism
Construction and application of knowledge graph for grid dispatch fault handling based on pre-trained model
4
Authors: Zhixiang Ji, Xiaohui Wang, Jie Zhang, Di Wu. Global Energy Interconnection (EI, CSCD), 2023, No. 4, pp. 493-504 (12 pages)
With the construction of new power systems, the power grid has become extremely large, with an increasing proportion of new energy and AC/DC hybrid connections. The dynamic characteristics and fault patterns of the power grid are complex; additionally, power grid control is difficult, operation risks are high, and the task of fault handling is arduous. Traditional power-grid fault handling relies primarily on human experience, and differences and gaps in the knowledge reserves of control personnel restrict the accuracy and timeliness of fault handling; this mode of operation is therefore no longer suitable for the requirements of new systems. Based on the multi-source heterogeneous data of power grid dispatch, this paper proposes a joint entity-relationship extraction method for power-grid dispatch fault processing based on a pre-trained model, constructs a knowledge graph of power-grid dispatch fault processing, and designs and develops a knowledge-graph-based auxiliary decision-making system for fault processing. Applied in a provincial dispatch and control center, the system effectively improved fault-handling capability and the intelligence of accident management and control of the power grid.
Keywords: power-grid dispatch fault handling; knowledge graph; pre-trained model; auxiliary decision-making
Named Entity Recognition for Cotton Pests and Diseases Based on RoBERTa Multi-Feature Fusion
5
Authors: 李东亚, 白涛, 香慧敏, 戴硕, 王震鲁, 陈珍. 《河南农业科学》 (Journal of Henan Agricultural Sciences), PKU Core, 2024, No. 2, pp. 152-161 (10 pages)
To address the scarcity of cotton pest-and-disease text corpora, the lack of a Chinese named-entity-recognition corpus, and the complex, diverse, and unevenly distributed entities in this domain, a Chinese entity-recognition corpus for cotton pests and diseases, CDIPNER, covering 11 categories was constructed, and a named-entity-recognition model based on RoBERTa multi-feature fusion was proposed. The model uses the RoBERTa pre-trained model, which has stronger masked-learning capability, for character-level embedding; BiLSTM and IDCNN jointly extract feature vectors, capturing the temporal and spatial features of the text respectively; a multi-head self-attention mechanism fuses the extracted feature vectors; and a CRF layer generates the predicted sequence. Results show a precision of 96.60%, a recall of 95.76%, and an F1 of 96.18% on named entities in cotton pest-and-disease text, with good performance also on public datasets such as ResumeNER, indicating that the model can effectively recognize cotton pest-and-disease named entities and has a degree of generalization ability.
Keywords: cotton; pests and diseases; RoBERTa model; named entity recognition; multi-feature fusion; multi-head attention mechanism
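Pipelines like the one above end by decoding the CRF's BIO label sequence into entity spans. A small sketch of that decoding step follows; the tokens and tags are invented examples, not data from the CDIPNER corpus.

```python
# Decode a BIO tag sequence into (entity_text, entity_type) spans.
def bio_to_spans(tokens, tags):
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):          # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:
                spans.append(("".join(tokens[start:i]), etype))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return spans

tokens = list("棉蚜危害棉花叶片")
tags = ["B-PEST", "I-PEST", "O", "O", "B-CROP", "I-CROP", "O", "O"]
print(bio_to_spans(tokens, tags))  # → [('棉蚜', 'PEST'), ('棉花', 'CROP')]
```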
Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis
6
Authors: Jieyu An, Wan Mohd Nazmee Wan Zainon, Binfen Ding. Intelligent Automation & Soft Computing (SCIE), 2023, No. 8, pp. 1673-1689 (17 pages)
Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modalities, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation is attributed to their training on unimodal data and necessitates complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning by utilizing a vision-language pre-trained model to extract both visual and textual representations in a unified framework. We employ a Transformer architecture to integrate these representations, thereby enabling the capture of rich semantic information in image-text pairs. To further enhance the representation learning of these pairs, we introduce our proposed multimodal contrastive learning method, which leads to improved performance in sentiment analysis tasks. Our approach is evaluated through extensive experiments on two publicly accessible datasets, where we demonstrate its effectiveness and achieve a significant improvement in sentiment analysis accuracy, indicating the superiority of our approach over existing techniques. These results highlight the potential of multimodal sentiment analysis and underscore the importance of considering the intrinsic semantic connections between modalities for accurate sentiment assessment.
Keywords: multimodal sentiment analysis; vision-language pre-trained model; contrastive learning; sentiment classification
Research status and application of artificial intelligence large models in the oil and gas industry
7
Authors: LIU He, REN Yili, LI Xin, DENG Yue, WANG Yongtao, CAO Qianwen, DU Jinyang, LIN Zhiwei, WANG Wenjie. Petroleum Exploration and Development (SCIE), 2024, No. 4, pp. 1049-1065 (17 pages)
This article elucidates the concept of large model technology, summarizes the research status of large models both domestically and internationally, provides an overview of their application in vertical industries, outlines the challenges and issues confronted in applying large models in the oil and gas sector, and offers prospects for their application in the oil and gas industry. Existing large models can be broadly divided into three categories: large language models, visual large models, and multimodal large models. The application of large models in the oil and gas industry is still in its infancy. Based on open-source large language models, some oil and gas enterprises have released large language model products using methods such as fine-tuning and retrieval-augmented generation. Scholars have attempted to develop scenario-specific models for oil and gas operations by using visual/multimodal foundation models, and a few researchers have constructed pre-trained foundation models for seismic data processing and interpretation, as well as core analysis. The application of large models in the oil and gas industry faces several challenges: the current quantity and quality of data are insufficient to support the training of large models, research and development costs are high, and algorithmic autonomy and controllability are poor. Applications should be guided by the needs of the oil and gas business, taking large models as an opportunity to improve data lifecycle management, enhance data governance capabilities, promote the construction of computing power, strengthen the building of "artificial intelligence + energy" composite teams, and boost the autonomy and controllability of large model technology.
Keywords: foundation model; large language model; visual large model; multimodal large model; large model of oil and gas industry; pre-training; fine-tuning
Evaluating the role of large language models in inflammatory bowel disease patient information
8
Authors: Eun Jeong Gong, Chang Seok Bang. World Journal of Gastroenterology (SCIE, CAS), 2024, No. 29, pp. 3538-3540 (3 pages)
This letter evaluates the article by Gravina et al on ChatGPT's potential in providing medical information for inflammatory bowel disease patients. While promising, it highlights the need for advanced techniques like reasoning+action and retrieval-augmented generation to improve accuracy and reliability. Emphasizing that simple question-and-answer testing is insufficient, it calls for more nuanced evaluation methods to truly gauge large language models' capabilities in clinical applications.
Keywords: Crohn's disease; ulcerative colitis; inflammatory bowel disease; chat generative pre-trained transformer; large language model; artificial intelligence
Multi-Label Classification of Business-Environment Policies Based on RoBERTa and a Concentrated Attention Mechanism
9
Author: 陈昊飏. 《计算机应用》 (Journal of Computer Applications), CSCD, PKU Core, 2024, No. S01, pp. 44-48 (5 pages)
To meet the practical need for multi-label classification of business-environment policies, and to overcome the difficulty of classifying long texts with large pre-trained language models that excel at text classification but accept only limited input, a method based on the RoBERTa model and a concentrated attention mechanism is proposed to better extract representations from the semantically concentrated regions of a document. First, data cleaning and analysis yielded a piece of prior knowledge: the semantic representation of a policy text concentrates in its title and opening section. Second, in the input and embedding layers, a concentrated attention mechanism processes the text and its vectors, strengthening the model's attention to the semantically concentrated regions during training, improving its ability to extract informative representations, and optimizing long-text classification. Policy texts crawled from public government websites serve as the dataset. Experimental results show that accuracy on long policy texts reaches 0.95 and micro-F1 reaches 0.91, and comparison experiments show that combining RoBERTa with the concentrated attention mechanism outperforms other models on multi-label classification of long policy texts.
Keywords: multi-label classification; long text; business-environment policy; RoBERTa; pre-trained model; attention mechanism
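One simple way to act on the prior that semantics concentrate in the title and opening, related to (but much cruder than) the attention mechanism described above, is to budget the encoder input around the head of the document. The character-level splitting, the `[SEP]` separator, and the length cap below are illustrative assumptions, not the paper's method.

```python
# Hypothetical head-truncation preprocessing: keep the title plus as much
# of the opening body text as fits in the encoder's input budget.
def head_truncate(title, body, max_len=512, sep="[SEP]"):
    budget = max_len - len(title) - len(sep)
    return title + sep + body[:max(budget, 0)]

doc = head_truncate("营商政策标题", "正文" * 1000, max_len=16)
print(len(doc))  # → 16
```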
Named Entity Recognition in Chinese Electronic Medical Records Based on the RoBERTa-WWM Model
10
Authors: 刘慧敏, 黄霞, 熊菲, 王国庆. 《长江信息通信》 (Changjiang Information & Communications), 2024, No. 3, pp. 7-9 (3 pages)
Analyzing Chinese electronic medical record text faces challenges such as polysemy and incomplete recognition. To address them, a deep learning framework combining the RoBERTa-WWM model with a BiLSTM-CRF module was constructed. First, the pre-trained RoBERTa-WWM language model is deeply fused with the semantic features produced by its Transformer layers to capture the complex contextual information of the text. The fused semantic representation is then fed into the BiLSTM and CRF modules, further refining the scope and accuracy of entity recognition. Empirical analysis on the CCKS2019 dataset achieved an F1 of 82.94%, strongly confirming the superior performance of the RoBERTa-WWM-BiLSTM-CRF model on named entity recognition in Chinese electronic medical records.
Keywords: RoBERTa-WWM model; Chinese electronic medical records; entity recognition
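The CRF layer in BiLSTM-CRF stacks like the one above is decoded at inference time with the Viterbi algorithm. A compact sketch follows; the emission and transition scores are made-up numbers chosen so that the transition penalty (O must not be followed directly by I) changes the best path, not values from any trained model.

```python
# Viterbi decoding over per-token emission scores and tag-transition scores.
def viterbi(emissions, transitions, tags):
    n = len(tags)
    score = list(emissions[0])
    back = []
    for emit in emissions[1:]:
        ptr, new = [], []
        for j in range(n):
            best = max(range(n), key=lambda i: score[i] + transitions[i][j])
            ptr.append(best)
            new.append(score[best] + transitions[best][j] + emit[j])
        score, back = new, back + [ptr]
    path = [max(range(n), key=score.__getitem__)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return [tags[i] for i in reversed(path)]

tags = ["O", "B", "I"]
trans = [[0.0, 0.0, -9.0],   # O -> I strongly discouraged
         [0.0, 0.0, 1.0],
         [0.0, 0.0, 0.5]]
emis = [[0.1, 2.0, 0.0], [0.0, 0.1, 1.5], [2.0, 0.0, 0.1]]
print(viterbi(emis, trans, tags))  # → ['B', 'I', 'O']
```

The transition matrix is what lets a CRF forbid invalid label sequences (such as an I tag with no preceding B), which token-independent softmax decoding cannot do.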
A Classification–Detection Approach of COVID-19 Based on Chest X-ray and CT by Using Keras Pre-Trained Deep Learning Models (Cited by: 10)
11
Authors: Xing Deng, Haijian Shao, Liang Shi, Xia Wang, Tongling Xie. Computer Modeling in Engineering & Sciences (SCIE, EI), 2020, No. 11, pp. 579-596 (18 pages)
The Coronavirus Disease 2019 (COVID-19) is wreaking havoc around the world, putting enormous pressure on national health systems and medical staff. One of the most effective and critical steps in the fight against COVID-19 is to examine a patient's lungs using the chest X-rays and CTs generated by radiation imaging. In this paper, five Keras-related deep learning models (ResNet50, InceptionResNetV2, Xception, transfer learning, and pre-trained VGGNet16) are applied to formulate classification-detection approaches for COVID-19. Two benchmark methods, SVM (Support Vector Machine) and CNN (Convolutional Neural Networks), are provided for comparison with the classification-detection approaches on performance indicators, i.e., precision, recall, F1 scores, confusion matrix, classification accuracy, and three types of AUC (Area Under Curve). The highest classification accuracies derived by the classification-detection approaches, based on 5857 chest X-rays and 767 chest CTs, are 84% and 75% respectively, which shows that the Keras-related deep learning approaches facilitate accurate and effective COVID-19-assisted detection.
Keywords: COVID-19 detection; deep learning; transfer learning; pre-trained models
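The performance indicators listed in this abstract (precision, recall, F1, confusion matrix) reduce to counts of true/false positives and negatives. A minimal sketch for the binary case follows, computed on invented predictions rather than the paper's results.

```python
# Binary classification metrics from true labels and predictions.
def binary_metrics(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"confusion": [[tn, fp], [fn, tp]],
            "precision": precision, "recall": recall, "f1": f1}

m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(m["precision"], m["recall"])  # → 0.6666666666666666 0.6666666666666666
```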
Entity Recognition in Liver Cancer Electronic Medical Records Based on RoBERTa-CRF (Cited by: 3)
12
Authors: 邓嘉乐, 胡振生, 连万民, 华赟鹏, 周毅. 《医学信息学杂志》 (Journal of Medical Informatics), CAS, 2023, No. 6, pp. 42-47 (6 pages)
Purpose/Significance: Liver cancer electronic medical records contain a large amount of specialized medical knowledge, most of it in unstructured form that is difficult to extract automatically. Entity-recognition research on liver cancer records supports the construction of clinical decision-support systems and medical knowledge graphs in the liver cancer domain. Method/Process: A named-entity-recognition model combining the RoBERTa and CRF algorithms was constructed, and real, self-annotated liver cancer records were used for model training and testing. Result/Conclusion: The RoBERTa-CRF model outperformed the other baseline models and achieved good entity-recognition results.
Keywords: liver cancer electronic medical records; entity recognition; knowledge extraction; RoBERTa-CRF model; natural language processing
A Multi-Scale Semantic Collaboration Patent Text Classification Model Incorporating RoBERTa (Cited by: 2)
13
Authors: 梅侠峰, 吴晓鸰, 黄泽民, 凌捷. 《计算机工程与科学》 (Computer Engineering & Science), CSCD, PKU Core, 2023, No. 5, pp. 903-910 (8 pages)
To address the loss of contextual information in static word-vector tools such as word2vec and the limited feature-extraction capability of existing patent text classification models, a multi-scale semantic collaboration patent classification model incorporating RoBERTa (RoBERTa-MCNN-BiSRU++-AT) is proposed. RoBERTa learns dynamic, context-appropriate semantic representations of each word, solving the inability of static word vectors to represent polysemous words. The multi-scale semantic collaboration model uses convolutional layers to capture multi-scale local semantic features of the text, then models context at different levels with a bidirectional built-in-attention simple recurrent unit; the multi-scale output features are concatenated, and an attention mechanism assigns higher weights to the key features that contribute most to the classification result. Validated on the patent text dataset released by the State Information Center, RoBERTa-MCNN-BiSRU++-AT improves classification accuracy at the section level by 2.7% and 5.1%, and at the class level by 6.7% and 8.4%, over ALBERT-BiGRU and BiLSTM-ATT-CNN respectively, showing that it effectively improves classification accuracy for patents at different levels.
Keywords: patent text classification; semantic collaboration; simple recurrent unit; RoBERTa model
Sentiment Analysis of Chinese Movie Reviews Based on RoBERTa and Syntactic Information (Cited by: 3)
14
Authors: 陈钰佳, 郑更生, 肖伟. 《科学技术与工程》 (Science Technology and Engineering), PKU Core, 2023, No. 18, pp. 7844-7851 (8 pages)
Fine-grained sentiment analysis is one of the key tasks in natural language processing. Mainstream approaches to Chinese movie-review sentiment analysis generally use pre-trained models such as Word2Vec to generate static word vectors, which cannot handle polysemy well; extracting text features by CNN pooling can lose textual information and lead to insufficient learning; and the long-distance dependency information in the text and the syntactic information in sentences go unused. A new sentiment analysis model, RoBERTa-PWCN-GTRU, is therefore proposed. The model uses the RoBERTa pre-trained model to generate dynamic word vectors, solving the polysemy problem. To exploit the text fully, an improved DenseDPCNN network captures long-distance dependency information and is fused, in a dual-channel manner, with the global semantic information obtained by a Bi-LSTM; the sentence-level syntactic information obtained by a proximity-weighted convolutional network (PWCN) is then incorporated, and a gated Tanh-ReLU unit (GTRU) performs further feature selection. Experimental results on a constructed Chinese movie-review dataset show clear performance gains over mainstream models, with an accuracy of 89.67% and an F1 of 82.51%; ablation experiments further verify the model's effectiveness. The model can provide useful information for studios' future productions and consumers' ticket-purchase decisions, giving it practical value.
Keywords: Chinese movie reviews; sentiment analysis; RoBERTa pre-trained model; proximity-weighted convolution; gated Tanh-ReLU unit
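The gated Tanh-ReLU unit (GTRU) named above combines a tanh feature path with a ReLU gate path elementwise, so a negative gate activation zeroes out its channel. A minimal sketch follows; the two input vectors stand in for the outputs of two parallel projections and are invented numbers.

```python
import math

# GTRU: elementwise tanh(feature) * relu(gate).
def gtru(feature_path, gate_path):
    return [math.tanh(a) * max(0.0, g)
            for a, g in zip(feature_path, gate_path)]

out = gtru([0.5, -1.0, 2.0], [1.0, 2.0, -0.5])
print(out)  # third value is 0.0: the negative gate closes that channel
```

Compared with a plain sigmoid gate, the ReLU gate can pass values greater than 1 and shuts channels off exactly at zero, which is why it is often used for hard feature selection.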
A Chinese Event Extraction Method Based on RoBERTa and Multi-Level Features
15
Authors: 乐杨, 胡军国, 李耀. 《电子技术应用》 (Application of Electronic Technique), 2023, No. 11, pp. 49-54 (6 pages)
To address insufficient semantic representation and incomplete feature extraction in Chinese event extraction, a method based on RoBERTa and multi-level features is proposed. Character vectors are built with the RoBERTa pre-trained model and extended by incorporating part-of-speech tags and trigger-word semantic information. A bidirectional long short-term memory network and a convolutional neural network then extract global and local features respectively, and a self-attention mechanism captures the correlations among different features to strengthen the use of important ones. Finally, a conditional random field performs BIO sequence labeling to complete the extraction. On the DuEE1.0 dataset, trigger-word extraction and event-argument extraction reach F1 scores of 86.9% and 68.0%, outperforming commonly used event extraction models and validating the method's effectiveness.
Keywords: event extraction; RoBERTa pre-trained model; multi-level features; self-attention mechanism; sequence labeling
Logistics Order Classification Based on Graph Convolutional Networks and RoBERTa (Cited by: 1)
16
Authors: 王建兵, 杨超, 刘方方, 黄暕, 项勇. 《计算机技术与发展》 (Computer Technology and Development), 2023, No. 10, pp. 195-201 (7 pages)
Order information runs through every link of the logistics supply chain, and efficient order processing is key to guaranteeing service quality and operational efficiency. Facing ever-growing, differentiated customer orders, manual classification is slow and inefficient and cannot meet the efficiency standards modern logistics requires. To improve logistics order classification, this paper proposes a method based on graph convolutional networks (GCN) and the RoBERTa pre-trained language model. First, a global abstract meaning representation (AMR) graph is built from the AMR parses and keywords of the order texts, and a GCN extracts features from it to obtain a global AMR graph representation vector. Second, local AMR graph sets are built from the sentence-level AMR parses of each order, and stacked GCNs process these sets to obtain local AMR graph representation vectors. The RoBERTa model then processes the order text to obtain a semantic representation vector, and the three types of representation are fused to complete classification. Experimental results show the method outperforms other baselines on multiple metrics, and ablation experiments verify the effectiveness of each module.
Keywords: order classification; graph convolutional network; abstract meaning representation; RoBERTa model; feature extraction
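The GCN feature extraction described above stacks propagation steps of the form H' = Â H W, where Â is the adjacency matrix with self-loops, normalized. A sketch of one such step on a tiny 3-node path graph follows; the graph, features, and weights are illustrative (row normalization is used here for simplicity, where Kipf-style GCNs use symmetric normalization).

```python
# One graph-convolution propagation step: H' = A_hat @ H @ W.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Row-normalized adjacency with self-loops for the path graph 0-1-2.
A_hat = [[0.5, 0.5, 0.0],
         [1/3, 1/3, 1/3],
         [0.0, 0.5, 0.5]]
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # node feature matrix
W = [[1.0, 0.0], [0.0, 1.0]]               # identity weight, for clarity
print(matmul(matmul(A_hat, H), W))
```

Each step averages a node's features with its neighbors', so stacking k steps (as the stacked GCNs above do) mixes information from k-hop neighborhoods.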
Named Entity Recognition in Electronic Medical Records Based on RoBERTa and Character-Word Fusion (Cited by: 1)
17
Authors: 王卫东, 张志峰, 徐金慧, 杨习贝. 《江苏科技大学学报(自然科学版)》 (Journal of Jiangsu University of Science and Technology, Natural Science Edition), CAS, PKU Core, 2023, No. 2, pp. 47-52 (6 pages)
To improve the accuracy of the semantic information extracted from electronic medical record text, a named-entity-recognition algorithm based on RoBERTa and character-word fusion is proposed. The pre-trained RoBERTa model produces character vectors that fully account for context; the text is then segmented into words, and Word2Vec produces word vectors; the two are fused and fed into a bidirectional long short-term memory network (BiLSTM) for training, with a conditional random field (CRF) producing the predicted output. Comparative experiments on an electronic medical record dataset show that, on all three evaluation metrics, the algorithm clearly outperforms classic methods for named entity recognition in electronic medical records.
Keywords: electronic medical record named entity recognition; pre-trained RoBERTa model; bidirectional long short-term memory network; conditional random field; character-word fusion
Vulnerability Detection of Ethereum Smart Contract Based on SolBERT-BiGRU-Attention Hybrid Neural Model
18
Authors: Guangxia Xu, Lei Liu, Jingnan Dong. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 10, pp. 903-922 (20 pages)
In recent years, with the great success of pre-trained language models, the pre-trained BERT model has gradually been applied to the field of source code understanding. However, the time cost of training a language model from scratch is very high, and how to transfer a pre-trained language model to the field of smart contract vulnerability detection is currently a hot research direction. In this paper, we propose a hybrid model to detect common vulnerabilities in smart contracts, based on a lightweight pre-trained language model, BERT, connected to a bidirectional gated recurrent unit model. The downstream neural network adopts a bidirectional gated recurrent unit with a hierarchical attention mechanism to mine the semantic features contained in smart contract source code. Our experiments show that the proposed hybrid neural network model, SolBERT-BiGRU-Attention, fitted on a large number of data samples with smart contract vulnerabilities, reaches an accuracy of 93.85% and a Micro-F1 score of 94.02%, outperforming existing methods.
Keywords: smart contract; pre-trained language model; deep learning; recurrent neural network; blockchain security
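The hierarchical attention mechanism mentioned above pools a sequence of hidden states into one vector by scoring each timestep, softmax-normalizing the scores, and taking the weighted sum. A minimal sketch follows; the hidden states and scores are toy numbers, not outputs of the SolBERT-BiGRU model.

```python
import math

# Attention pooling: softmax the scores, then weight-sum the hidden states.
def attention_pool(hidden_states, scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

pooled = attention_pool([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
print(pooled)  # equal scores give the plain average → [0.5, 0.5]
```

With skewed scores the pooled vector moves toward the highest-scoring timestep, which is how attention highlights the code tokens most indicative of a vulnerability.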
Robust Deep Learning Model for Black Fungus Detection Based on Gabor Filter and Transfer Learning
19
Authors: Esraa Hassan, Fatma M. Talaat, Samah Adel, Samir Abdelrazek, Ahsan Aziz, Yunyoung Nam, Nora El-Rashidy. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 11, pp. 1507-1525 (19 pages)
Black fungus is a rare and dangerous mycosis that usually affects the brain and lungs and can be life-threatening in diabetic cases. Recently, some COVID-19 survivors, especially those with co-morbid diseases, have been susceptible to black fungus, so recovered COVID-19 patients should seek medical support when they notice mucormycosis symptoms. This paper proposes a novel ensemble deep-learning model that includes three pre-trained models: ResNet(50), VGG(19), and Inception. Our approach is medically intuitive and efficient compared to traditional deep learning models. An image dataset was aggregated from various resources and divided into two classes: a black fungus class and a skin infection class. To the best of our knowledge, our study is the first concerned with building black fungus detection models based on deep learning algorithms. The proposed approach significantly improves performance on this binary classification task and increases its generalization ability, empirically achieving a sensitivity of 0.9907, a specificity of 0.9938, a precision of 0.9938, and a negative predictive value of 0.9907.
Keywords: black fungus; COVID-19; transfer learning; pre-trained models; medical image
A PERT-BiLSTM-Att Model for Online Public Opinion Text Sentiment Analysis
20
Authors: Mingyong Li, Zheng Jiang, Zongwei Zhao, Longfei Ma. Intelligent Automation & Soft Computing (SCIE), 2023, No. 8, pp. 2387-2406 (20 pages)
As an essential category of public event management and control, sentiment analysis of online public opinion text plays a vital role in public opinion early warning, network rumor management, and netizens' personality portraits under massive public opinion data. Traditional sentiment analysis models are not sensitive to the positional information of words, struggle with polysemy, and differ greatly in their ability to represent long and short sentences, which leads to low sentiment-classification accuracy. This paper proposes PERT-BiLSTM-Att, a sentiment analysis model for public opinion text based on the pre-trained permuted (disordered) language model PERT, a bidirectional long short-term memory network, and an attention mechanism. The model first uses PERT, pre-trained on the lexical positional information of a large corpus, to process the text data and obtain a dynamic feature representation. The semantic features are then fed into a BiLSTM to learn contextual sequence information and enhance the model's ability to represent long sequences. Finally, the attention mechanism focuses on the words that contribute most to the overall emotional tendency, compensating for traditional models' weak representation of short texts, and the classification results are output through a fully connected network. Experimental results show classification accuracies of 88.56% and 97.05% on the NLPCC14 and weibo_senti_100k public datasets respectively, and 95.95% on MDC22, a dataset composed of Meituan, Dianping, and Ctrip comments. This demonstrates that the model works well for sentiment analysis of online public opinion texts from different platforms, verifies its effectiveness on different datasets, and shows strong generalization to datasets from different fields.
Keywords: natural language processing; PERT; pre-training model; emotional analysis; BiLSTM