Journal Articles
465 articles found
1. Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
Authors: R. Sujatha, K. Nimala. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 1669-1686.
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model is taken up here to classify conversation sentences from a conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used to analyze conversation progress and predict the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, and a hyperparameter tuning approach is carried out for better sentence-classification performance. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models, and the ensemble model with fine-tuned parameters achieved an F1 score of 0.88.
Keywords: Bidirectional Encoder Representations from Transformers; conversation; ensemble model; fine-tuning; generalized autoregressive pretraining for language understanding; generative pre-trained transformer; hyperparameter tuning; natural language processing; robustly optimized BERT pretraining approach; sentence classification; transformer models
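As a rough illustration of the ensemble strategy this abstract describes, the sketch below majority-votes five separately fine-tuned Hugging Face classifiers over the four conversational classes. The checkpoint paths are hypothetical placeholders, and hard voting is only one plausible combination rule; the paper's exact ensembling and hyperparameters are not specified here.

```python
# Illustrative majority-vote ensemble over five fine-tuned checkpoints.
# The ./*-ft paths are hypothetical placeholders for locally fine-tuned
# BERT / RoBERTa / GPT / DistilBERT / XLNet sentence classifiers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["information", "question", "directive", "commission"]
CHECKPOINTS = ["./bert-ft", "./roberta-ft", "./gpt-ft", "./distilbert-ft", "./xlnet-ft"]
MODELS = [(AutoTokenizer.from_pretrained(p),
           AutoModelForSequenceClassification.from_pretrained(p).eval())
          for p in CHECKPOINTS]

def ensemble_predict(sentence: str) -> str:
    votes = torch.zeros(len(LABELS))
    for tok, model in MODELS:
        inputs = tok(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            pred = model(**inputs).logits.argmax(dim=-1).item()
        votes[pred] += 1  # one hard vote per model
    return LABELS[votes.argmax().item()]

print(ensemble_predict("Could you send me the minutes of the meeting?"))
```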
2. Adapter Based on Pre-Trained Language Models for Classification of Medical Text
Authors: Quan Li. Journal of Electronic Research and Application, 2024, Issue 3, pp. 129-134.
We present an approach to classifying medical text automatically at the sentence level. Given the inherent complexity of medical text classification, we employ adapters based on pre-trained language models to extract information from medical text, facilitating more accurate classification while minimizing the number of trainable parameters. Extensive experiments conducted on various datasets demonstrate the effectiveness of our approach.
Keywords: classification of medical text; adapter; pre-trained language model
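The abstract does not spell out the adapter design, but bottleneck adapters inserted into a frozen encoder are the standard way to cut trainable parameters. A minimal sketch, assuming the usual down-project / activation / up-project layout with a residual connection:

```python
# Bottleneck adapter sketch: only these small weights are trained,
# while the pre-trained encoder stays frozen.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual connection

adapter = Adapter()
print(sum(p.numel() for p in adapter.parameters()))  # ~99K params vs. ~110M for BERT-base
```

Only the adapter's roughly 0.1M parameters would be updated during fine-tuning, which is the parameter saving the abstract alludes to.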
3. A Classification-Detection Approach of COVID-19 Based on Chest X-ray and CT by Using Keras Pre-Trained Deep Learning Models (Cited by 10)
Authors: Xing Deng, Haijian Shao, Liang Shi, Xia Wang, Tongling Xie. Computer Modeling in Engineering & Sciences (SCIE, EI), 2020, Issue 11, pp. 579-596.
The Coronavirus Disease 2019 (COVID-19) is wreaking havoc around the world, putting enormous pressure on national health systems and medical staff. One of the most effective and critical steps in the fight against COVID-19 is to examine the patient's lungs based on the Chest X-ray and CT scans generated by radiation imaging. In this paper, five Keras-related deep learning models (ResNet50, InceptionResNetV2, Xception, transfer learning, and pre-trained VGGNet16) are applied to formulate classification-detection approaches for COVID-19. Two benchmark methods, SVM (Support Vector Machine) and CNN (Convolutional Neural Network), are provided for comparison with the classification-detection approaches based on the performance indicators, i.e., precision, recall, F1 scores, confusion matrix, classification accuracy, and three types of AUC (Area Under Curve). The highest classification accuracies derived by the classification-detection approaches on 5857 Chest X-rays and 767 Chest CTs are 84% and 75%, respectively, which shows that the Keras-related deep learning approaches facilitate accurate and effective COVID-19-assisted detection.
Keywords: COVID-19 detection; deep learning; transfer learning; pre-trained models
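An illustrative sketch of the Keras transfer-learning setup named in the abstract: a frozen pre-trained VGG16 backbone with a new classification head. The input size, head layout, and binary output are assumptions rather than the paper's exact configuration.

```python
# Transfer learning with a Keras pre-trained VGG16 backbone;
# head layout and input shape are illustrative assumptions.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # COVID-19 vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```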
4. GeoNER: Geological Named Entity Recognition with Enriched Domain Pre-Training Model and Adversarial Training
Authors: MA Kai, HU Xinxin, TIAN Miao, TAN Yongjian, ZHENG Shuai, TAO Liufeng, QIU Qinjun. Acta Geologica Sinica (English Edition) (SCIE, CAS, CSCD), 2024, Issue 5, pp. 1404-1417.
As important geological data, a geological report contains rich expert and geological knowledge, but the challenge facing current research into geological knowledge extraction and mining is how to render accurate understanding of geological reports guided by domain knowledge. While generic named entity recognition models/tools can be utilized for processing geoscience reports/documents, their effectiveness is hampered by a dearth of domain-specific knowledge, which in turn leads to a pronounced decline in recognition accuracy. This study summarizes six types of typical geological entities, with reference to the ontological system of geological domains, and builds a high-quality corpus for the task of geological named entity recognition (GNER). In addition, GeoWoBERT-advBGP (Geological Word-base BERT, adversarial training, Bi-directional Long Short-Term Memory, Global Pointer) is proposed to address the issues of ambiguity, diversity, and nested entities for geological entities. The model first uses the fine-tuned word-granularity-based pre-training model GeoWoBERT (Geological Word-base BERT) and combines the text features extracted using BiLSTM (Bi-directional Long Short-Term Memory), followed by an adversarial training algorithm to improve the robustness of the model and enhance its resistance to interference, with the decoding finally performed using a global association pointer algorithm. The experimental results show that the proposed model achieves high performance on the constructed dataset and is capable of mining the rich geological information.
Keywords: geological named entity recognition; geological report; adversarial training; confrontation training; global pointer; pre-training model
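The abstract names adversarial training but not the specific algorithm; FGM-style perturbation of the embedding layer is a common choice for this kind of NER model and is sketched here purely as an assumption.

```python
# FGM-style adversarial training on the embedding layer (assumed
# variant; the paper may use a different perturbation scheme).
import torch

class FGM:
    def __init__(self, model, emb_name: str = "word_embeddings", eps: float = 1.0):
        self.model, self.emb_name, self.eps = model, emb_name, eps
        self.backup = {}

    def attack(self):
        for name, p in self.model.named_parameters():
            if p.requires_grad and self.emb_name in name and p.grad is not None:
                self.backup[name] = p.data.clone()
                norm = torch.norm(p.grad)
                if norm != 0:
                    p.data.add_(self.eps * p.grad / norm)  # step along the gradient

    def restore(self):
        for name, p in self.model.named_parameters():
            if name in self.backup:
                p.data = self.backup[name]
        self.backup = {}

# Usage inside one training step:
#   loss.backward(); fgm.attack()
#   model(**batch).loss.backward()   # gradient on the perturbed embeddings
#   fgm.restore(); optimizer.step(); optimizer.zero_grad()
```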
5. DPAL-BERT: A Faster and Lighter Question Answering Model
Authors: Lirong Yin, Lei Wang, Zhuohang Cai, Siyu Lu, Ruiyang Wang, Ahmed AlSanad, Salman A. AlQahtani, Xiaobing Chen, Zhengtong Yin, Xiaolu Li, Wenfeng Zheng. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 771-786.
Recent advancements in natural language processing have given rise to numerous pre-trained language models in question-answering systems. However, with the constant evolution of algorithms, data, and computing power, the increasing size and complexity of these models have led to increased training costs and reduced efficiency. This study aims to minimize the inference time of such models while maintaining computational performance. It proposes a novel distillation model for PAL-BERT (DPAL-BERT) that employs knowledge distillation, using the PAL-BERT model as the teacher to train two student models: DPAL-BERT-Bi and DPAL-BERT-C. This research enhances the dataset through techniques such as masking, replacement, and n-gram sampling to optimize knowledge transfer. The experimental results showed that the distilled models greatly outperform models trained from scratch. In addition, although the distilled models exhibit a slight decrease in performance compared to PAL-BERT, they reduce inference time to just 0.25% of the original, demonstrating the effectiveness of the proposed approach in balancing model performance and efficiency.
Keywords: DPAL-BERT; question answering systems; knowledge distillation; model compression; BERT; bi-directional long short-term memory (BiLSTM); knowledge information transfer; PAL-BERT; training efficiency; natural language processing
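A minimal sketch of the temperature-scaled distillation objective generally used to train a student against a teacher such as PAL-BERT; the temperature and mixing weight here are illustrative assumptions, not the paper's values.

```python
# Standard knowledge-distillation loss: soft teacher targets (KL term)
# mixed with the hard ground-truth labels (cross-entropy term).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T                                  # rescale gradients for temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s, t = torch.randn(4, 3), torch.randn(4, 3)
y = torch.tensor([0, 2, 1, 0])
print(distillation_loss(s, t, y).item())
```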
6. Construction and application of knowledge graph for grid dispatch fault handling based on pre-trained model
Authors: Zhixiang Ji, Xiaohui Wang, Jie Zhang, Di Wu. Global Energy Interconnection (EI, CSCD), 2023, Issue 4, pp. 493-504.
With the construction of new power systems, the power grid has become extremely large, with an increasing proportion of new energy and AC/DC hybrid connections. The dynamic characteristics and fault patterns of the power grid are complex; additionally, power grid control is difficult, operation risks are high, and the task of fault handling is arduous. Traditional power-grid fault handling relies primarily on human experience, and differences and gaps in the knowledge reserve of control personnel restrict the accuracy and timeliness of fault handling; this mode of operation is no longer suitable for the requirements of new systems. Based on the multi-source heterogeneous data of power grid dispatch, this paper proposes a joint entity-relationship extraction method for power-grid dispatch fault processing based on a pre-trained model, constructs a knowledge graph of power-grid dispatch fault processing, and designs and develops a fault-processing auxiliary decision-making system based on the knowledge graph. Applied in a provincial dispatch control center, the system effectively improved the grid's accident-handling ability and the intelligence level of accident management and control.
Keywords: power-grid dispatch fault handling; knowledge graph; pre-trained model; auxiliary decision-making
7. Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis
Authors: Jieyu An, Wan Mohd Nazmee Wan Zainon, Binfen Ding. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 8, pp. 1673-1689.
Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modes, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation is attributed to their training on unimodal data, and it necessitates the use of complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning by utilizing a vision-language pre-trained model to extract both visual and textual representations in a unified framework. We employ a Transformer architecture to integrate these representations, thereby enabling the capture of rich semantic information in image-text pairs. To further enhance the representation learning of these pairs, we introduce our proposed multimodal contrastive learning method, which leads to improved performance in sentiment analysis tasks. Our approach is evaluated through extensive experiments on two publicly accessible datasets, where we demonstrate its effectiveness. We achieve a significant improvement in sentiment analysis accuracy, indicating the superiority of our approach over existing techniques. These results highlight the potential of multimodal sentiment analysis and underscore the importance of considering the intrinsic semantic connections between modalities for accurate sentiment assessment.
Keywords: multimodal sentiment analysis; vision-language pre-trained model; contrastive learning; sentiment classification
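A minimal sketch of a symmetric image-text InfoNCE loss, the family that the proposed multimodal contrastive learning method belongs to; the embedding size and temperature are assumptions.

```python
# Symmetric image-text contrastive (InfoNCE) loss: matched pairs sit
# on the diagonal of the similarity matrix and act as positives.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature: float = 0.07):
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(len(img))            # i-th image matches i-th text
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

print(contrastive_loss(torch.randn(8, 256), torch.randn(8, 256)).item())
```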
8. Text Classification of Hidden Dangers in Oilfield Production Safety Based on BERT-BiLSTM
Authors: 陈晨, 石赫, 徐悦, 张新梅. 《科学技术与工程》 (PKU Core), 2024, Issue 29, pp. 12650-12657.
The classification of accident hidden dangers directly reflects the weak points of an enterprise's production-safety management and determines where safety-management work should be improved. In oilfield production, hidden dangers are of many kinds and the data volume is large; relying purely on manual classification and management is inefficient and can hardly uncover the latent patterns in the data. Based on the needs of oilfield production safety and the characteristics of accident hidden dangers, a BERT-BiLSTM classification model is proposed for automatic topic classification of oilfield hidden-danger texts. The Bidirectional Encoder Representations from Transformers (BERT) model extracts character-level features of the input text and generates a vector representation of the global text information; a bi-directional long short-term memory (BiLSTM) model then extracts local key information and deep contextual features; finally, a Softmax activation computes the class probabilities to give the classification result. Comparison with traditional classification methods shows that the BERT-BiLSTM model improves weighted average precision, weighted average recall, and weighted average F1. Integrating the model with oilfield enterprises' existing safety-management information systems will provide important technical support for more targeted hidden-danger management and for shifting enterprise safety management from reactive post-accident response to proactive prevention.
Keywords: hidden-danger management; oilfield production safety; text classification; BERT model; BiLSTM model
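A rough sketch of the BERT-BiLSTM architecture the abstract describes: BERT character-level vectors feed a BiLSTM, whose output feeds a linear classification layer (softmax is applied by the training loss). The bert-base-chinese checkpoint and layer sizes are assumptions.

```python
# BERT-BiLSTM classifier sketch; checkpoint and hidden sizes assumed.
import torch
import torch.nn as nn
from transformers import BertModel

class BertBiLSTM(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        seq, _ = self.lstm(out.last_hidden_state)  # contextual token features
        return self.fc(seq[:, 0])                  # classify from the first position
```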
9. Text Data Mining of Air Traffic Control Hazards Based on the BERT Model
Authors: 杨昌其, 姜美岑, 林灵. 《航空计算技术》, 2024, Issue 4, pp. 89-93.
Because hazards and hidden safety dangers are easily confused conceptually and recorded inconsistently in civil-aviation safety management, the dual-prevention-mechanism regulations require that the two be distinguished. The air traffic control hazard control checklists collected from the ASIS system are taken as the research object and subjected to text data mining. A text classification model is built according to the characteristics of hazards and hidden dangers: the checklists are first preprocessed by text cleaning, stopword removal, and Jieba word segmentation; word vectors are then generated with BERT, using the BERT-Base-Chinese pre-trained model with fine-tuned hyperparameters; finally, a Softmax classifier produces the classification results.
Keywords: text classification; data mining; BERT model; hazard; hidden safety danger
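A small sketch of the preprocessing steps listed above (text cleaning, stopword removal, Jieba segmentation); the cleaning regular expression and the tiny stopword set are illustrative assumptions.

```python
# Chinese text preprocessing: clean, segment with jieba, drop stopwords.
import re
import jieba

def preprocess(text: str, stopwords: set) -> list:
    text = re.sub(r"[^\u4e00-\u9fa5A-Za-z0-9]", "", text)  # keep CJK/alphanumerics
    return [w for w in jieba.cut(text) if w not in stopwords]

stops = {"的", "了", "在"}  # illustrative subset of a real stopword list
print(preprocess("跑道上存在未及时清除的外来物,构成安全隐患。", stops))
```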
10. Joint Entity-Relation Extraction Based on a BERT Pre-Trained Model for Ancient Chinese
Authors: 李智杰, 杨盛杰, 李昌华, 张颉, 董玮, 介军. 《计算机系统应用》, 2024, Issue 8, pp. 187-195.
Ancient Chinese texts carry rich historical and cultural information; extracting entity relations from such texts and building knowledge graphs plays an important role in cultural preservation. To address the many rare characters, semantic vagueness, and polysemy in ancient Chinese texts, an entity relation joint extraction model based on a BERT-ancient-Chinese pre-trained model (JEBAC) is proposed. First, BACBA (BERT-ancient-Chinese pre-trained model integrating a BiLSTM neural network and attention mechanism) identifies all subject and object entities in a sentence, providing the basis for joint extraction of relations and object entities. Next, the normalized encoding vector of the subject entity is added to the embedding vector of the whole sentence to better capture the subject's semantic features. Finally, combining the subject-aware sentence vector with prompt information about the object entity, BACBA jointly extracts the relation and object entity, yielding all triples (subject entity, relation, object entity) in the sentence. Performance is compared with existing methods on the Chinese entity-relation extraction dataset DuIE2.0 and on CCLUE, a small-sample classical-Chinese entity-relation extraction dataset from CCKS 2021. Experimental results show that the proposed method is more effective, reaching F1 values of 79.2% and 55.5%, respectively.
Keywords: ancient Chinese text; entity relation extraction; BERT pre-trained model for ancient Chinese; BiLSTM; attention; triple information
11. An Intelligent Extraction System for Basic Medical Terminology Based on BERT
Authors: 李冬梅, 朱朝阳, 李丽, 邹玲, 危晓莉, 陈张一, 彭慧琴. 《基础医学教育》, 2024, Issue 11, pp. 1002-1007.
Driven by generative artificial intelligence, personalized learning tailored to each student is an inevitable trend in modern education, and knowledge-graph-based personalized learning paths are the prevailing approach. In knowledge-graph construction, accurate extraction of professional terminology is the most basic step, but doing it manually is labor-intensive, prone to omissions, and hard to keep up to date. Using a self-designed annotated dataset, medBaseDt, the open-source pre-trained model BERT was fine-tuned to obtain the termBERT model, and an intelligent extraction system for basic medical terminology was designed and developed. Applied to textbooks such as Histology and Embryology and Pathology, the system achieved a terminology-extraction accuracy of 94.5 ± 1.16%, a very good result. With this system, teachers can quickly obtain the professional vocabulary of specified textbook content and complete knowledge-graph design rapidly. The work also lays a solid foundation for subsequent AI-based knowledge-graph construction, automatic test-item generation, and personalized learning.
Keywords: basic medicine; teaching reform; artificial intelligence; large language model; BERT; fine-tuning
12. Masked Sentence Model Based on BERT for Move Recognition in Medical Scientific Abstracts (Cited by 20)
Authors: Gaihong Yu, Zhixiong Zhang, Huan Liu, Liangping Ding. Journal of Data and Information Science (CSCD), 2019, Issue 4, pp. 42-55.
Purpose: Move recognition in scientific abstracts is an NLP task of classifying sentences of the abstracts into different types of language units. To improve the performance of move recognition in scientific abstracts, a novel model of move recognition is proposed that outperforms the BERT-based method. Design/methodology/approach: Prevalent BERT-based models for sentence classification often classify sentences without considering their context. In this paper, inspired by the BERT masked language model (MLM), we propose a novel model called the masked sentence model that integrates the content and contextual information of the sentences in move recognition. Experiments are conducted on the benchmark dataset PubMed 20K RCT in three steps, and we compare our model with HSLN-RNN, BERT-based, and SciBERT models using the same dataset. Findings: Compared with the BERT-based and SciBERT models, the F1 score of our model outperforms them by 4.96% and 4.34%, respectively, which shows the feasibility and effectiveness of the novel model; its result comes closest to the state-of-the-art results of HSLN-RNN at present. Research limitations: The sequential features of move labels are not considered, which might be one of the reasons why HSLN-RNN has better performance. Our model is restricted to biomedical English literature because we use a dataset from PubMed, a typical biomedical database, to fine-tune our model. Practical implications: The proposed model is better and simpler in identifying move structures in scientific abstracts and is worthy of text classification experiments for capturing contextual features of sentences. Originality/value: The study proposes a masked sentence model based on BERT that considers the contextual features of the sentences in abstracts in a new way. The performance of this classification model is significantly improved by rebuilding the input layer without changing the structure of neural networks.
Keywords: move recognition; BERT; masked sentence model; scientific abstracts
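A sketch of one plausible way to build a masked-sentence input in the spirit of this model: the target sentence is paired with a copy of the abstract in which that sentence has been masked out, so the classifier sees both content and context. The exact input construction in the paper may differ.

```python
# Build a BERT-style input pairing a target sentence with its masked
# context; this is an assumed reading of the masked sentence model.
def masked_input(sentences: list, target_idx: int) -> str:
    context = [s if i != target_idx else "[MASK]"
               for i, s in enumerate(sentences)]
    return sentences[target_idx] + " [SEP] " + " ".join(context)

abstract = ["We study move recognition.",
            "A new model is proposed.",
            "Results improve the F1 score."]
print(masked_input(abstract, 1))
```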
13. Research on Text Topic Evolution Based on an LDA-BERT Similarity Measurement Model (Cited by 2)
Authors: 海骏林峰, 严素梅, 陈荣, 李建霞. 《图书馆工作与研究》 (CSSCI, PKU Core), 2024, Issue 1, pp. 72-79.
To address the LDA topic model's neglect of semantic associations when extracting text topics, an LDA-BERT similarity measurement model is proposed. First, TF-IDF and TextRank are combined to extract text feature words, and the LDA topic model mines the text topics. Second, by embedding the BERT model and using the topic-term probability distributions built by LDA, topic vectors are represented at the word-granularity level. Finally, cosine similarity is used to compute the similarity between topics. On top of the similarity measurement model, a vector-similarity indicator is built to analyze the relations between research topics in the literature, and a topic-evolution knowledge graph is drawn. An empirical study in the smart-library field shows that topic similarities computed with the LDA-BERT model are more accurate than those of the plain LDA model and agree better with the actual situation.
Keywords: similarity measurement; LDA-BERT model; LDA model; BERT model; topic evolution
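A minimal sketch of the final measurement step: each topic becomes a probability-weighted sum of BERT word vectors for its top terms, and topics are compared by cosine similarity. The random vectors below stand in for real BERT embeddings.

```python
# Topic vectors as probability-weighted sums of word embeddings,
# compared with cosine similarity.
import numpy as np

def topic_vector(word_vecs: np.ndarray, word_probs: np.ndarray) -> np.ndarray:
    return (word_probs[:, None] * word_vecs).sum(axis=0)  # weighted mean

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

v1 = topic_vector(np.random.rand(10, 768), np.full(10, 0.1))
v2 = topic_vector(np.random.rand(10, 768), np.full(10, 0.1))
print(cosine(v1, v2))
```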
14. Application of a BTM-BERT Model to Automatic Classification of Hidden Safety Dangers in Civil Aviation Maintenance
Authors: 陈芳, 张亚博. 《安全与环境学报》 (CAS, CSCD, PKU Core), 2024, Issue 11, pp. 4366-4373.
To delineate the categories of hidden safety dangers in civil-aviation aircraft maintenance and classify the records automatically, a maintenance-specific stopword list was first built to preprocess the hidden-danger record corpus. The Biterm Topic Model (BTM) was then used to extract topics and keywords, identifying 12 hidden-danger categories such as "staff failed to supervise the work site as required". Finally, a Bidirectional Encoder Representations from Transformers (BERT) based automatic classification model for maintenance hidden-danger records was built by fine-tuning on the BTM-labeled dataset and compared with traditional classification algorithms. The results show that the model classifies hidden dangers automatically and far outperforms the traditional machine-learning support vector machine; compared with a text convolutional neural network, its precision, recall, and F1 improve by 0.12, 0.14, and 0.14, respectively, and the overall accuracy reaches 93%.
Keywords: safety engineering; aircraft maintenance; Biterm Topic Model (BTM); Bidirectional Encoder Representations from Transformers (BERT); hidden safety danger; text classification
15. A Chinese BERT Attack Method Based on the Masked Language Model (Cited by 1)
Authors: 张云婷, 叶麟, 唐浩林, 张宏莉, 李尚. 《软件学报》 (EI, CSCD, PKU Core), 2024, Issue 7, pp. 3392-3409.
Adversarial texts are malicious samples that cause deep-learning classifiers to make wrong decisions; an adversary crafts them by adding small, human-imperceptible perturbations to an original text to fool the target model. Research on adversarial text generation helps evaluate the robustness of deep neural networks and supports subsequent robustness improvement. Among existing attack methods designed for Chinese text, few take the comparatively robust Chinese BERT model as the attack target. For Chinese text classification tasks, an attack method against Chinese BERT, Chinese BERT Tricker, is proposed. It uses a character-level word-importance scoring method (important-character localization) and, based on the masked language model, designs a Chinese-oriented word-level perturbation method with two strategies to replace important words. Experiments show that, for text classification, the method drives the classification accuracy of the Chinese BERT model below 40% on two real datasets, and its attack performance is clearly stronger than that of other baseline methods.
Keywords: deep neural network; adversarial example; textual adversarial attack; Chinese BERT; masked language model
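A sketch of the masked-language-model substitution step such attacks build on: mask an important character or word and take BERT's top predictions as replacement candidates. The word-importance scoring and victim-model query loop from the paper are omitted, and bert-base-chinese is an assumed stand-in for the MLM.

```python
# Generate replacement candidates for a word via an MLM's predictions
# at the masked position.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained("bert-base-chinese")
mlm = BertForMaskedLM.from_pretrained("bert-base-chinese").eval()

def candidates(text: str, word: str, k: int = 5) -> list:
    masked = text.replace(word, tok.mask_token, 1)
    inputs = tok(masked, return_tensors="pt")
    pos = (inputs.input_ids[0] == tok.mask_token_id).nonzero(as_tuple=True)[0][0]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, pos]
    return tok.convert_ids_to_tokens(logits.topk(k).indices.tolist())

print(candidates("这部电影非常好看", "好"))
```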
16. A Smart Manufacturing Maturity Assessment Method Based on BERT and TextCNN (Cited by 1)
Authors: 张淦, 袁堂晓, 汪惠芬, 柳林燕. 《计算机集成制造系统》 (EI, CSCD, PKU Core), 2024, Issue 3, pp. 852-863.
As the goals of Intelligent Manufacturing 2025 draw near, enterprises are joining smart-manufacturing maturity assessments to understand their own capability level. However, because the maturity assessment standard is complex and enterprises lack knowledge of where the industry stands, many apply rashly, wasting their own time while occupying substantial assessment resources. A new assessment workflow is therefore designed in which text-processing algorithms reconstruct the entire assessment process: taking the smart-manufacturing maturity criteria in the national-standard documents as the training set, an intelligent assessment algorithm combining a pre-trained language model with a text convolutional neural network (BERT+TextCNN) replaces manual assessment. Validation on a real enterprise smart-manufacturing dataset shows that, with convolution kernels of [2,3,4], 6 training epochs, and a learning rate of 3e-5, the BERT+TextCNN model assesses maturity with an accuracy of 85.32%. This indicates that the designed method can help enterprises complete smart-manufacturing maturity self-assessment fairly accurately, understand their own capability level, and set the right development direction.
Keywords: smart manufacturing maturity model; BERT pre-trained language model; text convolutional neural network; assessment process reconstruction
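An illustrative sketch of a BERT+TextCNN classifier with the kernel sizes [2, 3, 4] reported above; the filter count and the Chinese checkpoint are assumptions.

```python
# BERT token embeddings feeding parallel 1-D convolutions with
# kernel sizes 2/3/4, max-pooled and concatenated for classification.
import torch
import torch.nn as nn
from transformers import BertModel

class BertTextCNN(nn.Module):
    def __init__(self, num_classes: int, n_filters: int = 128, kernels=(2, 3, 4)):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        d = self.bert.config.hidden_size
        self.convs = nn.ModuleList([nn.Conv1d(d, n_filters, k) for k in kernels])
        self.fc = nn.Linear(n_filters * len(kernels), num_classes)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).last_hidden_state
        h = h.transpose(1, 2)                                # (batch, hidden, seq)
        feats = [torch.relu(c(h)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(feats, dim=1))              # pooled n-gram features
```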
17. A Domain Sentiment Lexicon Construction Method Based on Improved TF-IDF and BERT (Cited by 1)
Authors: 蒋昊达, 赵春蕾, 陈瀚, 王春东. 《计算机科学》 (CSCD, PKU Core), 2024, Issue S01, pp. 150-158.
Building a domain sentiment lexicon is the basis of domain text sentiment analysis. Existing construction methods suffer from highly redundant candidate sentiment words, inaccurate polarity judgment, and strong domain dependence. To improve the domain specificity of the selected candidate words and the accuracy of polarity judgment, a construction method based on improved term frequency-inverse document frequency (TF-IDF) and BERT is proposed. In the candidate-word screening stage, the TF-IDF algorithm is improved and combined with Latent Dirichlet Allocation (LDA) for domain correction, raising the domain relevance of the selected candidates. In the polarity-judgment stage, the sentiment orientation pointwise mutual information algorithm (SO-PMI) is combined with BERT, and domain sentiment words are used to fine-tune the BERT classification model, improving the accuracy of polarity judgment for domain candidate words. Experiments on user-review datasets from different domains show that the method improves the quality of the constructed lexicon; lexica built with it achieve F1 values of 78.02% and 88.35% for text sentiment analysis in the automotive and mobile-phone domains, respectively.
Keywords: sentiment analysis; domain sentiment lexicon; term frequency-inverse document frequency; Latent Dirichlet Allocation; sentiment orientation pointwise mutual information; BERT model
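A small sketch of the SO-PMI score used in the polarity-judgment stage: a candidate word leans positive when it co-occurs more strongly with positive seed words than with negative ones. The toy counts below are illustrative only.

```python
# SO-PMI: sum of PMI with positive seeds minus sum of PMI with
# negative seeds; sign indicates the candidate word's polarity.
import math

def so_pmi(word, pos_seeds, neg_seeds, cooccur, count, total):
    def pmi(w1, w2):
        joint = cooccur.get((w1, w2), 0) or cooccur.get((w2, w1), 0)
        if joint == 0:
            return 0.0
        return math.log2(joint * total / (count[w1] * count[w2]))
    return (sum(pmi(word, s) for s in pos_seeds)
            - sum(pmi(word, s) for s in neg_seeds))

cooccur = {("流畅", "好"): 30, ("流畅", "差"): 2}
count = {"流畅": 50, "好": 400, "差": 300}
print(so_pmi("流畅", ["好"], ["差"], cooccur, count, 10000))  # > 0, so positive
```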
18. A Text Classification Model Based on BERT and Label Confusion (Cited by 1)
Authors: 韩博, 成卫青. 《南京邮电大学学报(自然科学版)》 (PKU Core), 2024, Issue 3, pp. 100-108.
Current text classification research focuses mainly on enhancing classification performance by optimizing the text classifier, while the relationship between labels and text is underused. Although BERT handles text features very well, there is still room to improve the feature extraction of text and labels. By incorporating the Label Confusion Model (LCM), a text classification model based on BERT and LCM (BLC) is proposed that further processes text and label features. It makes full use of the sentence vectors from every BERT layer and the word vectors from the last layer, combining them with a bi-directional long short-term memory network (Bi-LSTM) to obtain a text representation that replaces BERT's original one. Before the labels enter the LCM, a self-attention network and Bi-LSTM strengthen the interdependence among labels, improving the final classification performance. Experimental results on four text classification benchmark datasets demonstrate the effectiveness of the proposed model.
Keywords: text classification; BERT; label confusion model; bi-directional long short-term memory network; self-attention network
19. Embedding Extraction for Arabic Text Using the AraBERT Model
Authors: Amira Hamed Abo-Elghit, Taher Hamza, Aya Al-Zoghby. Computers, Materials & Continua (SCIE, EI), 2022, Issue 7, pp. 1967-1994.
Nowadays, we can use the multi-task learning approach to train a machine-learning algorithm to learn multiple related tasks instead of training it to solve a single task. In this work, we propose an algorithm for estimating textual similarity scores and then use these scores in multiple tasks such as text ranking, essay grading, and question answering systems. We used several vectorization schemes to represent the Arabic texts in the SemEval2017-task3-subtask-D dataset. The schemes include lexical-based similarity features, frequency-based features, and pre-trained model-based features. We also used contextual-based embedding models such as Arabic Bidirectional Encoder Representations from Transformers (AraBERT). The AraBERT model is used in two different variants: first, as a feature extractor in addition to the text vectorization schemes' features, feeding those features to various regression models to predict a relevancy score between Arabic text units; second, as a pre-trained model whose parameters are fine-tuned to estimate the relevancy scores between Arabic textual sentences. To evaluate the research results, we conducted several experiments to compare the two variants. In terms of Mean Absolute Percentage Error (MAPE), the results show minor variance between AraBERT v0.2 as a feature extractor (21.7723) and the fine-tuned AraBERT v2 (21.8211). On the other hand, AraBERT v0.2-Large as a feature extractor outperforms the fine-tuned AraBERT v2 model on the used dataset in terms of coefficient of determination (R²) values (0.014050 and -0.032861, respectively).
Keywords: semantic textual similarity; Arabic language; embeddings; AraBERT; pre-trained models; regression; contextual-based models; concurrency concept
20. Vulnerability Detection of Ethereum Smart Contract Based on SolBERT-BiGRU-Attention Hybrid Neural Model
Authors: Guangxia Xu, Lei Liu, Jingnan Dong. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 10, pp. 903-922.
In recent years, with the great success of pre-trained language models, the pre-trained BERT model has been gradually applied to the field of source code understanding. However, the time cost of training a language model from zero is very high, and how to transfer the pre-trained language model to the field of smart contract vulnerability detection is a hot research direction at present. In this paper, we propose a hybrid model to detect common vulnerabilities in smart contracts, based on a lightweight pre-trained language model, BERT, connected to a bidirectional gate recurrent unit model. The downstream neural network adopts a bidirectional gate recurrent unit with a hierarchical attention mechanism to mine more of the semantic features contained in the source code of smart contracts. Our experiments show that the proposed hybrid neural network model, SolBERT-BiGRU-Attention, fitted on a large number of data samples with smart contract vulnerabilities, reaches an accuracy of 93.85% and a Micro-F1 score of 94.02%, outperforming existing methods.
Keywords: smart contract; pre-trained language model; deep learning; recurrent neural network; blockchain security
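A minimal sketch of a BiGRU-with-attention downstream network of the kind described, taking encoder outputs (e.g., from a code-oriented BERT) as input; all sizes and the binary output are assumptions.

```python
# BiGRU over encoder outputs with a simple attention pooling layer.
import torch
import torch.nn as nn

class BiGRUAttention(nn.Module):
    def __init__(self, input_dim: int = 768, hidden: int = 128, num_classes: int = 2):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                       # x: (batch, seq, input_dim)
        h, _ = self.gru(x)
        w = torch.softmax(self.att(h), dim=1)   # attention weights over tokens
        ctx = (w * h).sum(dim=1)                # weighted sum = context vector
        return self.fc(ctx)

model = BiGRUAttention()
print(model(torch.randn(2, 64, 768)).shape)     # torch.Size([2, 2])
```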