期刊文献+
共找到33篇文章
< 1 2 >
每页显示 20 50 100
Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
1
作者 R.Sujatha K.Nimala 《Computers, Materials & Continua》 SCIE EI 2024年第2期1669-1686,共18页
Sentence classification is the process of categorizing a sentence based on the context of the sentence.Sentence categorization requires more semantic highlights than other tasks,such as dependence parsing,which requir... Sentence classification is the process of categorizing a sentence based on the context of the sentence.Sentence categorization requires more semantic highlights than other tasks,such as dependence parsing,which requires more syntactic elements.Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence,recognizing the progress and comparing impacts.An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus.The conversational sentences are classified into four categories:information,question,directive,and commission.These classification label sequences are for analyzing the conversation progress and predicting the pecking order of the conversation.Ensemble of Bidirectional Encoder for Representation of Transformer(BERT),Robustly Optimized BERT pretraining Approach(RoBERTa),Generative Pre-Trained Transformer(GPT),DistilBERT and Generalized Autoregressive Pretraining for Language Understanding(XLNet)models are trained on conversation corpus with hyperparameters.Hyperparameter tuning approach is carried out for better performance on sentence classification.This Ensemble of Pre-trained Language Models with a Hyperparameter Tuning(EPLM-HT)system is trained on an annotated conversation dataset.The proposed approach outperformed compared to the base BERT,GPT,DistilBERT and XLNet transformer models.The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88. 展开更多
关键词 Bidirectional encoder for representation of transformer conversation ensemble model fine-tuning generalized autoregressive pretraining for language understanding generative pre-trained transformer hyperparameter tuning natural language processing robustly optimized BERT pretraining approach sentence classification transformer models
下载PDF
Adapter Based on Pre-Trained Language Models for Classification of Medical Text
2
作者 Quan Li 《Journal of Electronic Research and Application》 2024年第3期129-134,共6页
We present an approach to classify medical text at a sentence level automatically.Given the inherent complexity of medical text classification,we employ adapters based on pre-trained language models to extract informa... We present an approach to classify medical text at a sentence level automatically.Given the inherent complexity of medical text classification,we employ adapters based on pre-trained language models to extract information from medical text,facilitating more accurate classification while minimizing the number of trainable parameters.Extensive experiments conducted on various datasets demonstrate the effectiveness of our approach. 展开更多
关键词 Classification of medical text ADAPTER pre-trained language model
下载PDF
Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis
3
作者 Jieyu An Wan Mohd Nazmee Wan Zainon Binfen Ding 《Intelligent Automation & Soft Computing》 SCIE 2023年第8期1673-1689,共17页
Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modes,such as text and image,to accurately assess sentiment.However,conventional approaches that rely on... Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modes,such as text and image,to accurately assess sentiment.However,conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities.This limitation is attributed to their training on unimodal data,and necessitates the use of complex fusion mechanisms for sentiment analysis.In this study,we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method.Our approach harnesses the power of transfer learning by utilizing a vision-language pre-trained model to extract both visual and textual representations in a unified framework.We employ a Transformer architecture to integrate these representations,thereby enabling the capture of rich semantic infor-mation in image-text pairs.To further enhance the representation learning of these pairs,we introduce our proposed multimodal contrastive learning method,which leads to improved performance in sentiment analysis tasks.Our approach is evaluated through extensive experiments on two publicly accessible datasets,where we demonstrate its effectiveness.We achieve a significant improvement in sentiment analysis accuracy,indicating the supe-riority of our approach over existing techniques.These results highlight the potential of multimodal sentiment analysis and underscore the importance of considering the intrinsic semantic connections between modalities for accurate sentiment assessment. 展开更多
关键词 Multimodal sentiment analysis vision–language pre-trained model contrastive learning sentiment classification
下载PDF
Evaluating the role of large language models in inflammatory bowel disease patient information
4
作者 Eun Jeong Gong Chang Seok Bang 《World Journal of Gastroenterology》 SCIE CAS 2024年第29期3538-3540,共3页
This letter evaluates the article by Gravina et al on ChatGPT’s potential in providing medical information for inflammatory bowel disease patients.While promising,it highlights the need for advanced techniques like r... This letter evaluates the article by Gravina et al on ChatGPT’s potential in providing medical information for inflammatory bowel disease patients.While promising,it highlights the need for advanced techniques like reasoning+action and retrieval-augmented generation to improve accuracy and reliability.Emphasizing that simple question and answer testing is insufficient,it calls for more nuanced evaluation methods to truly gauge large language models’capabilities in clinical applications. 展开更多
关键词 Crohn’s disease Ulcerative colitis Inflammatory bowel disease Chat generative pre-trained transformer Large language model Artificial intelligence
下载PDF
PAL-BERT:An Improved Question Answering Model
5
作者 Wenfeng Zheng Siyu Lu +3 位作者 Zhuohang Cai Ruiyang Wang Lei Wang Lirong Yin 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024年第6期2729-2745,共17页
In the field of natural language processing(NLP),there have been various pre-training language models in recent years,with question answering systems gaining significant attention.However,as algorithms,data,and comput... In the field of natural language processing(NLP),there have been various pre-training language models in recent years,with question answering systems gaining significant attention.However,as algorithms,data,and computing power advance,the issue of increasingly larger models and a growing number of parameters has surfaced.Consequently,model training has become more costly and less efficient.To enhance the efficiency and accuracy of the training process while reducing themodel volume,this paper proposes a first-order pruningmodel PAL-BERT based on the ALBERT model according to the characteristics of question-answering(QA)system and language model.Firstly,a first-order network pruning method based on the ALBERT model is designed,and the PAL-BERT model is formed.Then,the parameter optimization strategy of the PAL-BERT model is formulated,and the Mish function was used as an activation function instead of ReLU to improve the performance.Finally,after comparison experiments with traditional deep learning models TextCNN and BiLSTM,it is confirmed that PALBERT is a pruning model compression method that can significantly reduce training time and optimize training efficiency.Compared with traditional models,PAL-BERT significantly improves the NLP task’s performance. 展开更多
关键词 PAL-BERT question answering model pretraining language models albert pruning model network pruning TextCNN BiLSTM
下载PDF
Research status and application of artificial intelligence large models in the oil and gas industry
6
作者 LIU He REN Yili +6 位作者 LI Xin DENG Yue WANG Yongtao CAO Qianwen DU Jinyang LIN Zhiwei WANG Wenjie 《Petroleum Exploration and Development》 SCIE 2024年第4期1049-1065,共17页
This article elucidates the concept of large model technology,summarizes the research status of large model technology both domestically and internationally,provides an overview of the application status of large mode... This article elucidates the concept of large model technology,summarizes the research status of large model technology both domestically and internationally,provides an overview of the application status of large models in vertical industries,outlines the challenges and issues confronted in applying large models in the oil and gas sector,and offers prospects for the application of large models in the oil and gas industry.The existing large models can be briefly divided into three categories:large language models,visual large models,and multimodal large models.The application of large models in the oil and gas industry is still in its infancy.Based on open-source large language models,some oil and gas enterprises have released large language model products using methods like fine-tuning and retrieval augmented generation.Scholars have attempted to develop scenario-specific models for oil and gas operations by using visual/multimodal foundation models.A few researchers have constructed pre-trained foundation models for seismic data processing and interpretation,as well as core analysis.The application of large models in the oil and gas industry faces challenges such as current data quantity and quality being difficult to support the training of large models,high research and development costs,and poor algorithm autonomy and control.The application of large models should be guided by the needs of oil and gas business,taking the application of large models as an opportunity to improve data lifecycle management,enhance data governance capabilities,promote the construction of computing power,strengthen the construction of“artificial intelligence+energy”composite teams,and boost the autonomy and control of large model technology. 展开更多
关键词 foundation model large language mode visual large model multimodal large model large model of oil and gas industry pre-training fine-tuning
下载PDF
Unsupervised statistical text simplification using pre-trained language modeling for initialization 被引量:1
7
作者 Jipeng QIANG Feng ZHANG +3 位作者 Yun LI Yunhao YUAN Yi ZHU Xindong WU 《Frontiers of Computer Science》 SCIE EI CSCD 2023年第1期81-90,共10页
Unsupervised text simplification has attracted much attention due to the scarcity of high-quality parallel text simplification corpora. Recent an unsupervised statistical text simplification based on phrase-based mach... Unsupervised text simplification has attracted much attention due to the scarcity of high-quality parallel text simplification corpora. Recent an unsupervised statistical text simplification based on phrase-based machine translation system (UnsupPBMT) achieved good performance, which initializes the phrase tables using the similar words obtained by word embedding modeling. Since word embedding modeling only considers the relevance between words, the phrase table in UnsupPBMT contains a lot of dissimilar words. In this paper, we propose an unsupervised statistical text simplification using pre-trained language modeling BERT for initialization. Specifically, we use BERT as a general linguistic knowledge base for predicting similar words. Experimental results show that our method outperforms the state-of-the-art unsupervised text simplification methods on three benchmarks, even outperforms some supervised baselines. 展开更多
关键词 text simplification pre-trained language modeling BERT word embeddings
原文传递
Vision Enhanced Generative Pre-trained Language Model for Multimodal Sentence Summarization
8
作者 Liqiang Jing Yiren Li +3 位作者 Junhao Xu Yongcan Yu Pei Shen Xuemeng Song 《Machine Intelligence Research》 EI CSCD 2023年第2期289-298,共10页
Multimodal sentence summarization(MMSS)is a new yet challenging task that aims to generate a concise summary of a long sentence and its corresponding image.Although existing methods have gained promising success in MM... Multimodal sentence summarization(MMSS)is a new yet challenging task that aims to generate a concise summary of a long sentence and its corresponding image.Although existing methods have gained promising success in MMSS,they overlook the powerful generation ability of generative pre-trained language models(GPLMs),which have shown to be effective in many text generation tasks.To fill this research gap,we propose to using GPLMs to promote the performance of MMSS.Notably,adopting GPLMs to solve MMSS inevitably faces two challenges:1)What fusion strategy should we use to inject visual information into GPLMs properly?2)How to keep the GPLM′s generation ability intact to the utmost extent when the visual feature is injected into the GPLM.To address these two challenges,we propose a vision enhanced generative pre-trained language model for MMSS,dubbed as Vision-GPLM.In Vision-GPLM,we obtain features of visual and textual modalities with two separate encoders and utilize a text decoder to produce a summary.In particular,we utilize multi-head attention to fuse the features extracted from visual and textual modalities to inject the visual feature into the GPLM.Meanwhile,we train Vision-GPLM in two stages:the vision-oriented pre-training stage and fine-tuning stage.In the vision-oriented pre-training stage,we particularly train the visual encoder by the masked language model task while the other components are frozen,aiming to obtain homogeneous representations of text and image.In the fine-tuning stage,we train all the components of Vision-GPLM by the MMSS task.Extensive experiments on a public MMSS dataset verify the superiority of our model over existing baselines. 展开更多
关键词 Multimodal sentence summarization(MMSS) generative pre-trained language model(GPLM) natural language generation deep learning artificial intelligence
原文传递
Improving Extraction of Chinese Open Relations Using Pre-trained Language Model and Knowledge Enhancement
9
作者 Chaojie Wen Xudong Jia Tao Chen 《Data Intelligence》 EI 2023年第4期962-989,共28页
Open Relation Extraction(ORE)is a task of extracting semantic relations from a text document.Current ORE systems have significantly improved their efficiency in obtaining Chinese relations,when compared with conventio... Open Relation Extraction(ORE)is a task of extracting semantic relations from a text document.Current ORE systems have significantly improved their efficiency in obtaining Chinese relations,when compared with conventional systems which heavily depend on feature engineering or syntactic parsing.However,the ORE systems do not use robust neural networks such as pre-trained language models to take advantage of large-scale unstructured data effectively.In respons to this issue,a new system entitled Chinese Open Relation Extraction with Knowledge Enhancement(CORE-KE)is presented in this paper.The CORE-KE system employs a pre-trained language model(with the support of a Bidirectional Long Short-Term Memory(BiLSTM)layer and a Masked Conditional Random Field(Masked CRF)layer)on unstructured data in order to improve Chinese open relation extraction.Entity descriptions in Wikidata and additional knowledge(in terms of triple facts)extracted from Chinese ORE datasets are used to fine-tune the pre-trained language model.In addition,syntactic features are further adopted in the training stage of the CORE-KE system for knowledge enhancement.Experimental results of the CORE-KE system on two large-scale datasets of open Chinese entities and relations demonstrate that the CORE-KE system is superior to other ORE systems.The F1-scores of the CORE-KE system on the two datasets have given a relative improvement of 20.1%and 1.3%,when compared with benchmark ORE systems,respectively.The source code is available at https:/github.COm/cjwen15/CORE-KE. 展开更多
关键词 Chinese open relation extraction pre-trained language model Knowledge enhancement
原文传递
MOSS:An Open Conversational Large Language Model 被引量:1
10
作者 Tianxiang Sun Xiaotian Zhang +21 位作者 Zhengfu He Peng Li Qinyuan Cheng Xiangyang Liu Hang Yan Yunfan Shao Qiong Tang Shiduo Zhang Xingjian Zhao Ke Chen Yining Zheng Zhejian Zhou Ruixiao Li Jun Zhan Yunhua Zhou Linyang Li Xiaogui Yang Lingling Wu Zhangyue Yin Xuanjing Huang Yu-Gang Jiang Xipeng Qiu 《Machine Intelligence Research》 EI CSCD 2024年第5期888-905,共18页
Conversational large language models(LLMs)such as ChatGPT and GPT-4 have recently exhibited remarkable capabilities across various domains,capturing widespread attention from the public.To facilitate this line of rese... Conversational large language models(LLMs)such as ChatGPT and GPT-4 have recently exhibited remarkable capabilities across various domains,capturing widespread attention from the public.To facilitate this line of research,in this paper,we report the development of MOSS,an open-sourced conversational LLM that contains 16 B parameters and can perform a variety of instructions in multi-turn interactions with humans.The base model of MOSS is pre-trained on large-scale unlabeled English,Chinese,and code data.To optimize the model for dialogue,we generate 1.1 M synthetic conversations based on user prompts collected through our earlier versions of the model API.We then perform preference-aware training on preference data annotated from AI feedback.Evaluation results on real-world use cases and academic benchmarks demonstrate the effectiveness of the proposed approaches.In addition,we present an effective practice to augment MOSS with several external tools.Through the development of MOSS,we have established a complete technical roadmap for large language models from pre-training,supervised fine-tuning to alignment,verifying the feasibility of chatGPT under resource-limited conditions and providing a reference for both the academic and industrial communities.Model weights and code are publicly available at https://github.com/OpenMOSS/MOSS. 展开更多
关键词 Large language models natural language processing pre-training ALIGNMENT chatGPT MOSS
原文传递
The Life Cycle of Knowledge in Big Language Models:A Survey 被引量:1
11
作者 Boxi Cao Hongyu Lin +1 位作者 Xianpei Han Le Sun 《Machine Intelligence Research》 EI CSCD 2024年第2期217-238,共22页
Knowledge plays a critical role in artificial intelligence.Recently,the extensive success of pre-trained language models(PLMs)has raised significant attention about how knowledge can be acquired,maintained,updated and... Knowledge plays a critical role in artificial intelligence.Recently,the extensive success of pre-trained language models(PLMs)has raised significant attention about how knowledge can be acquired,maintained,updated and used by language models.Despite the enormous amount of related studies,there is still a lack of a unified view of how knowledge circulates within language models throughout the learning,tuning,and application processes,which may prevent us from further understanding the connections between current progress or realizing existing limitations.In this survey,we revisit PLMs as knowledge-based systems by dividing the life circle of knowledge in PLMs into five critical periods,and investigating how knowledge circulates when it is built,maintained and used.To this end,we systematically review existing studies of each period of the knowledge life cycle,summarize the main challenges and current limitations,and discuss future directions. 展开更多
关键词 pre-trained language model knowledge acquisition knowledge representation knowledge probing knowledge editing knowledge application
原文传递
Vulnerability Detection of Ethereum Smart Contract Based on SolBERT-BiGRU-Attention Hybrid Neural Model
12
作者 Guangxia Xu Lei Liu Jingnan Dong 《Computer Modeling in Engineering & Sciences》 SCIE EI 2023年第10期903-922,共20页
In recent years,with the great success of pre-trained language models,the pre-trained BERT model has been gradually applied to the field of source code understanding.However,the time cost of training a language model ... In recent years,with the great success of pre-trained language models,the pre-trained BERT model has been gradually applied to the field of source code understanding.However,the time cost of training a language model from zero is very high,and how to transfer the pre-trained language model to the field of smart contract vulnerability detection is a hot research direction at present.In this paper,we propose a hybrid model to detect common vulnerabilities in smart contracts based on a lightweight pre-trained languagemodel BERT and connected to a bidirectional gate recurrent unitmodel.The downstream neural network adopts the bidirectional gate recurrent unit neural network model with a hierarchical attention mechanism to mine more semantic features contained in the source code of smart contracts by using their characteristics.Our experiments show that our proposed hybrid neural network model SolBERT-BiGRU-Attention is fitted by a large number of data samples with smart contract vulnerabilities,and it is found that compared with the existing methods,the accuracy of our model can reach 93.85%,and the Micro-F1 Score is 94.02%. 展开更多
关键词 Smart contract pre-trained language model deep learning recurrent neural network blockchain security
下载PDF
A PERT-BiLSTM-Att Model for Online Public Opinion Text Sentiment Analysis
13
作者 Mingyong Li Zheng Jiang +1 位作者 Zongwei Zhao Longfei Ma 《Intelligent Automation & Soft Computing》 SCIE 2023年第8期2387-2406,共20页
As an essential category of public event management and control,sentiment analysis of online public opinion text plays a vital role in public opinion early warning,network rumor management,and netizens’person-ality p... As an essential category of public event management and control,sentiment analysis of online public opinion text plays a vital role in public opinion early warning,network rumor management,and netizens’person-ality portraits under massive public opinion data.The traditional sentiment analysis model is not sensitive to the location information of words,it is difficult to solve the problem of polysemy,and the learning representation ability of long and short sentences is very different,which leads to the low accuracy of sentiment classification.This paper proposes a sentiment analysis model PERT-BiLSTM-Att for public opinion text based on the pre-training model of the disordered language model,bidirectional long-term and short-term memory network and attention mechanism.The model first uses the PERT model pre-trained from the lexical location information of a large amount of corpus to process the text data and obtain the dynamic feature representation of the text.Then the semantic features are input into BiLSTM to learn context sequence information and enhance the model’s ability to represent long sequences.Finally,the attention mechanism is used to focus on the words that contribute more to the overall emotional tendency to make up for the lack of short text representation ability of the traditional model,and then the classification results are output through the fully connected network.The experimental results show that the classification accuracy of the model on NLPCC14 and weibo_senti_100k public data sets reach 88.56%and 97.05%,respectively,and the accuracy reaches 95.95%on the data set MDC22 composed of Meituan,Dianping and Ctrip comment.It proves that the model has a good effect on sentiment analysis of online public opinion texts on different platforms.The experimental results on different datasets verify the model’s effectiveness in applying sentiment analysis of texts.At the same time,the model has a strong generalization ability and can achieve good results for sentiment analysis of datasets in different fields. 展开更多
关键词 Natural language processing PERT pre-training model emotional analysis BiLSTM
下载PDF
Pre-trained models for natural language processing: A survey 被引量:155
14
作者 QIU XiPeng SUN TianXiang +3 位作者 XU YiGe SHAO YunFan DAI Ning HUANG XuanJing 《Science China(Technological Sciences)》 SCIE EI CAS CSCD 2020年第10期1872-1897,共26页
Recently, the emergence of pre-trained models(PTMs) has brought natural language processing(NLP) to a new era. In this survey, we provide a comprehensive review of PTMs for NLP. We first briefly introduce language rep... Recently, the emergence of pre-trained models(PTMs) has brought natural language processing(NLP) to a new era. In this survey, we provide a comprehensive review of PTMs for NLP. We first briefly introduce language representation learning and its research progress. Then we systematically categorize existing PTMs based on a taxonomy from four different perspectives. Next,we describe how to adapt the knowledge of PTMs to downstream tasks. Finally, we outline some potential directions of PTMs for future research. This survey is purposed to be a hands-on guide for understanding, using, and developing PTMs for various NLP tasks. 展开更多
关键词 deep learning neural network natural language processing pre-trained model distributed representation word embedding self-supervised learning language modelling
原文传递
Satellite and instrument entity recognition using a pre-trained language model with distant supervision
15
作者 Ming Lin Meng Jin +1 位作者 Yufu Liu Yuqi Bai 《International Journal of Digital Earth》 SCIE EI 2022年第1期1290-1304,共15页
Earth observations,especially satellite data,have produced a wealth of methods and results in meeting global challenges,often presented in unstructured texts such as papers or reports.Accurate extraction of satellite ... Earth observations,especially satellite data,have produced a wealth of methods and results in meeting global challenges,often presented in unstructured texts such as papers or reports.Accurate extraction of satellite and instrument entities from these unstructured texts can help to link and reuse Earth observation resources.The direct use of an existing dictionary to extract satellite and instrument entities suffers from the problem of poor matching,which leads to low recall.In this study,we present a named entity recognition model to automatically extract satellite and instrument entities from unstructured texts.Due to the lack of manually labeled data,we apply distant supervision to automatically generate labeled training data.Accordingly,we fine-tune the pre-trained language model with early stopping and a weighted cross-entropy loss function.We propose the dictionary-based self-training method to correct the incomplete annotations caused by the distant supervision method.Experiments demonstrate that our method achieves significant improvements in both precision and recall compared to dictionary matching or standard adaptation of pre-trained language models. 展开更多
关键词 Earth observation named entity recognition pre-trained language model distant supervision dictionary-based self-training
原文传递
May ChatGPT be a tool producing medical information for common inflammatory bowel disease patients’questions?An evidencecontrolled analysis 被引量:2
16
作者 Antonietta Gerarda Gravina Raffaele Pellegrino +6 位作者 Marina Cipullo Giovanna Palladino Giuseppe Imperio Andrea Ventura Salvatore Auletta Paola Ciamarra Alessandro Federico 《World Journal of Gastroenterology》 SCIE CAS 2024年第1期17-33,共17页
Artificial intelligence is increasingly entering everyday healthcare.Large language model(LLM)systems such as Chat Generative Pre-trained Transformer(ChatGPT)have become potentially accessible to everyone,including pa... Artificial intelligence is increasingly entering everyday healthcare.Large language model(LLM)systems such as Chat Generative Pre-trained Transformer(ChatGPT)have become potentially accessible to everyone,including patients with inflammatory bowel diseases(IBD).However,significant ethical issues and pitfalls exist in innovative LLM tools.The hype generated by such systems may lead to unweighted patient trust in these systems.Therefore,it is necessary to understand whether LLMs(trendy ones,such as ChatGPT)can produce plausible medical information(MI)for patients.This review examined ChatGPT’s potential to provide MI regarding questions commonly addressed by patients with IBD to their gastroenterologists.From the review of the outputs provided by ChatGPT,this tool showed some attractive potential while having significant limitations in updating and detailing information and providing inaccurate information in some cases.Further studies and refinement of the ChatGPT,possibly aligning the outputs with the leading medical evidence provided by reliable databases,are needed. 展开更多
关键词 Crohn’s disease Ulcerative colitis Inflammatory bowel disease Chat Generative pre-trained Transformer Large language model Artificial intelligence
下载PDF
Medical Named Entity Recognition from Un-labelled Medical Records based on Pre-trained Language Models and Domain Dictionary
17
作者 Chaojie Wen Tao Chen +1 位作者 Xudong Jia Jiang Zhu 《Data Intelligence》 2021年第3期402-417,共16页
Medical named entity recognition(NER)is an area in which medical named entities are recognized from medical texts,such as diseases,drugs,surgery reports,anatomical parts,and examination documents.Conventional medical ... Medical named entity recognition(NER)is an area in which medical named entities are recognized from medical texts,such as diseases,drugs,surgery reports,anatomical parts,and examination documents.Conventional medical NER methods do not make full use of un-labelled medical texts embedded in medical documents.To address this issue,we proposed a medical NER approach based on pre-trained language models and a domain dictionary.First,we constructed a medical entity dictionary by extracting medical entities from labelled medical texts and collecting medical entities from other resources,such as the YiduN4 K data set.Second,we employed this dictionary to train domain-specific pre-trained language models using un-labelled medical texts.Third,we employed a pseudo labelling mechanism in un-labelled medical texts to automatically annotate texts and create pseudo labels.Fourth,the BiLSTM-CRF sequence tagging model was used to fine-tune the pre-trained language models.Our experiments on the un-labelled medical texts,which were extracted from Chinese electronic medical records,show that the proposed NER approach enables the strict and relaxed F1 scores to be 88.7%and 95.3%,respectively. 展开更多
关键词 Medical named entity recognition pre-trained language model Domain dictionary Pseudo labelling Un-labelled medical data
原文传递
基于ALBERT-UniLM模型的文本自动摘要技术研究 被引量:5
18
作者 孙宝山 谭浩 《计算机工程与应用》 CSCD 北大核心 2022年第15期184-190,共7页
任务中的生成式摘要模型对原文理解不充分且容易生成重复文本等问题,提出将词向量模型ALBERT与统一预训练模型UniLM相结合的算法,构造出一种ALBERT-UniLM摘要生成模型。该模型采用预训练动态词向量ALBERT替代传统的BERT基准模型进行特... 任务中的生成式摘要模型对原文理解不充分且容易生成重复文本等问题,提出将词向量模型ALBERT与统一预训练模型UniLM相结合的算法,构造出一种ALBERT-UniLM摘要生成模型。该模型采用预训练动态词向量ALBERT替代传统的BERT基准模型进行特征提取获得词向量。利用融合指针网络的UniLM语言模型对下游生成任务微调,结合覆盖机制来降低重复词的生成并获取摘要文本。实验以ROUGE评测值作为评价指标,在2018年CCF国际自然语言处理与中文计算会议(NLPC-C2018)单文档中文新闻摘要评价数据集上进行验证。与BERT基准模型相比,ALBERT-UniLM模型的Rouge-1、Rouge-2和Rouge-L指标分别提升了1.57%、1.37%和1.60%。实验结果表明,提出的ALBERT-UniLM模型在文本摘要任务上效果明显优于其他基准模型,能够有效提高文本摘要的生成质量。 展开更多
关键词 自然语言处理 预训练语言模型 albert模型 UniLM模型 生成式摘要
下载PDF
Personality Trait Detection via Transfer Learning
19
作者 Bashar Alshouha Jesus Serrano-Guerrero +2 位作者 Francisco Chiclana Francisco P.Romero Jose A.Olivas 《Computers, Materials & Continua》 SCIE EI 2024年第2期1933-1956,共24页
Personality recognition plays a pivotal role when developing user-centric solutions such as recommender systems or decision support systems across various domains,including education,e-commerce,or human resources.Tra-... Personality recognition plays a pivotal role when developing user-centric solutions such as recommender systems or decision support systems across various domains,including education,e-commerce,or human resources.Tra-ditional machine learning techniques have been broadly employed for personality trait identification;nevertheless,the development of new technologies based on deep learning has led to new opportunities to improve their performance.This study focuses on the capabilities of pre-trained language models such as BERT,RoBERTa,ALBERT,ELECTRA,ERNIE,or XLNet,to deal with the task of personality recognition.These models are able to capture structural features from textual content and comprehend a multitude of language facets and complex features such as hierarchical relationships or long-term dependencies.This makes them suitable to classify multi-label personality traits from reviews while mitigating computational costs.The focus of this approach centers on developing an architecture based on different layers able to capture the semantic context and structural features from texts.Moreover,it is able to fine-tune the previous models using the MyPersonality dataset,which comprises 9,917 status updates contributed by 250 Facebook users.These status updates are categorized according to the well-known Big Five personality model,setting the stage for a comprehensive exploration of personality traits.To test the proposal,a set of experiments have been performed using different metrics such as the exact match ratio,hamming loss,zero-one-loss,precision,recall,F1-score,and weighted averages.The results reveal ERNIE is the top-performing model,achieving an exact match ratio of 72.32%,an accuracy rate of 87.17%,and 84.41%of F1-score.The findings demonstrate that the tested models substantially outperform other state-of-the-art studies,enhancing the accuracy by at least 3%and confirming them as powerful tools for personality recognition.These findings represent substantial advancements in personality recognition,making them appropriate for the development of user-centric applications. 展开更多
关键词 Personality trait detection pre-trained language model big five model transfer learning
下载PDF
Red Alarm for Pre-trained Models:Universal Vulnerability to Neuron-level Backdoor Attacks
20
作者 Zhengyan Zhang Guangxuan Xiao +6 位作者 Yongwei Li Tian Lv Fanchao Qi Zhiyuan Liu Yasheng Wang Xin Jiang Maosong Sun 《Machine Intelligence Research》 EI CSCD 2023年第2期180-193,共14页
The pre-training-then-fine-tuning paradigm has been widely used in deep learning.Due to the huge computation cost for pre-training,practitioners usually download pre-trained models from the Internet and fine-tune them... The pre-training-then-fine-tuning paradigm has been widely used in deep learning.Due to the huge computation cost for pre-training,practitioners usually download pre-trained models from the Internet and fine-tune them on downstream datasets,while the downloaded models may suffer backdoor attacks.Different from previous attacks aiming at a target task,we show that a backdoored pre-trained model can behave maliciously in various downstream tasks without foreknowing task information.Attackers can restrict the output representations(the values of output neurons)of trigger-embedded samples to arbitrary predefined values through additional training,namely neuron-level backdoor attack(NeuBA).Since fine-tuning has little effect on model parameters,the fine-tuned model will retain the backdoor functionality and predict a specific label for the samples embedded with the same trigger.To provoke multiple labels in a specific task,attackers can introduce several triggers with predefined contrastive values.In the experiments of both natural language processing(NLP)and computer vision(CV),we show that NeuBA can well control the predictions for trigger-embedded instances with different trigger designs.Our findings sound a red alarm for the wide use of pre-trained models.Finally,we apply several defense methods to NeuBA and find that model pruning is a promising technique to resist NeuBA by omitting backdoored neurons. 展开更多
关键词 pre-trained language models backdoor attacks transformers natural language processing(NLP) computer vision(CV)
原文传递
上一页 1 2 下一页 到第
使用帮助 返回顶部