期刊文献+
共找到139篇文章
< 1 2 7 >
每页显示 20 50 100
Adapter Based on Pre-Trained Language Models for Classification of Medical Text
1
作者 Quan Li 《Journal of Electronic Research and Application》 2024年第3期129-134,共6页
We present an approach to classify medical text at a sentence level automatically.Given the inherent complexity of medical text classification,we employ adapters based on pre-trained language models to extract informa... We present an approach to classify medical text at a sentence level automatically.Given the inherent complexity of medical text classification,we employ adapters based on pre-trained language models to extract information from medical text,facilitating more accurate classification while minimizing the number of trainable parameters.Extensive experiments conducted on various datasets demonstrate the effectiveness of our approach. 展开更多
关键词 Classification of medical text ADAPTER pre-trained language model
下载PDF
Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
2
作者 R.Sujatha K.Nimala 《Computers, Materials & Continua》 SCIE EI 2024年第2期1669-1686,共18页
Sentence classification is the process of categorizing a sentence based on the context of the sentence.Sentence categorization requires more semantic highlights than other tasks,such as dependence parsing,which requir... Sentence classification is the process of categorizing a sentence based on the context of the sentence.Sentence categorization requires more semantic highlights than other tasks,such as dependence parsing,which requires more syntactic elements.Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence,recognizing the progress and comparing impacts.An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus.The conversational sentences are classified into four categories:information,question,directive,and commission.These classification label sequences are for analyzing the conversation progress and predicting the pecking order of the conversation.Ensemble of Bidirectional Encoder for Representation of Transformer(BERT),Robustly Optimized BERT pretraining Approach(RoBERTa),Generative Pre-Trained Transformer(GPT),DistilBERT and Generalized Autoregressive Pretraining for Language Understanding(XLNet)models are trained on conversation corpus with hyperparameters.Hyperparameter tuning approach is carried out for better performance on sentence classification.This Ensemble of Pre-trained Language Models with a Hyperparameter Tuning(EPLM-HT)system is trained on an annotated conversation dataset.The proposed approach outperformed compared to the base BERT,GPT,DistilBERT and XLNet transformer models.The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88. 展开更多
关键词 Bidirectional encoder for representation of transformer conversation ensemble model fine-tuning generalized autoregressive pretraining for language understanding generative pre-trained transformer hyperparameter tuning natural language processing robustly optimized BERT pretraining approach sentence classification transformer models
下载PDF
Research status and application of artificial intelligence large models in the oil and gas industry
3
作者 LIU He REN Yili +6 位作者 LI Xin DENG Yue WANG Yongtao CAO Qianwen DU Jinyang LIN Zhiwei WANG Wenjie 《Petroleum Exploration and Development》 SCIE 2024年第4期1049-1065,共17页
This article elucidates the concept of large model technology,summarizes the research status of large model technology both domestically and internationally,provides an overview of the application status of large mode... This article elucidates the concept of large model technology,summarizes the research status of large model technology both domestically and internationally,provides an overview of the application status of large models in vertical industries,outlines the challenges and issues confronted in applying large models in the oil and gas sector,and offers prospects for the application of large models in the oil and gas industry.The existing large models can be briefly divided into three categories:large language models,visual large models,and multimodal large models.The application of large models in the oil and gas industry is still in its infancy.Based on open-source large language models,some oil and gas enterprises have released large language model products using methods like fine-tuning and retrieval augmented generation.Scholars have attempted to develop scenario-specific models for oil and gas operations by using visual/multimodal foundation models.A few researchers have constructed pre-trained foundation models for seismic data processing and interpretation,as well as core analysis.The application of large models in the oil and gas industry faces challenges such as current data quantity and quality being difficult to support the training of large models,high research and development costs,and poor algorithm autonomy and control.The application of large models should be guided by the needs of oil and gas business,taking the application of large models as an opportunity to improve data lifecycle management,enhance data governance capabilities,promote the construction of computing power,strengthen the construction of“artificial intelligence+energy”composite teams,and boost the autonomy and control of large model technology. 展开更多
关键词 foundation model large language mode visual large model multimodal large model large model of oil and gas industry pre-training fine-tuning
下载PDF
GeoNER:Geological Named Entity Recognition with Enriched Domain Pre-Training Model and Adversarial Training
4
作者 MA Kai HU Xinxin +4 位作者 TIAN Miao TAN Yongjian ZHENG Shuai TAO Liufeng QIU Qinjun 《Acta Geologica Sinica(English Edition)》 SCIE CAS CSCD 2024年第5期1404-1417,共14页
As important geological data,a geological report contains rich expert and geological knowledge,but the challenge facing current research into geological knowledge extraction and mining is how to render accurate unders... As important geological data,a geological report contains rich expert and geological knowledge,but the challenge facing current research into geological knowledge extraction and mining is how to render accurate understanding of geological reports guided by domain knowledge.While generic named entity recognition models/tools can be utilized for the processing of geoscience reports/documents,their effectiveness is hampered by a dearth of domain-specific knowledge,which in turn leads to a pronounced decline in recognition accuracy.This study summarizes six types of typical geological entities,with reference to the ontological system of geological domains and builds a high quality corpus for the task of geological named entity recognition(GNER).In addition,Geo Wo BERT-adv BGP(Geological Word-base BERTadversarial training Bi-directional Long Short-Term Memory Global Pointer)is proposed to address the issues of ambiguity,diversity and nested entities for the geological entities.The model first uses the fine-tuned word granularitybased pre-training model Geo Wo BERT(Geological Word-base BERT)and combines the text features that are extracted using the Bi LSTM(Bi-directional Long Short-Term Memory),followed by an adversarial training algorithm to improve the robustness of the model and enhance its resistance to interference,the decoding finally being performed using a global association pointer algorithm.The experimental results show that the proposed model for the constructed dataset achieves high performance and is capable of mining the rich geological information. 展开更多
关键词 geological named entity recognition geological report adversarial training confrontation training global pointer pre-training model
下载PDF
Evaluating the role of large language models in inflammatory bowel disease patient information
5
作者 Eun Jeong Gong Chang Seok Bang 《World Journal of Gastroenterology》 SCIE CAS 2024年第29期3538-3540,共3页
This letter evaluates the article by Gravina et al on ChatGPT’s potential in providing medical information for inflammatory bowel disease patients.While promising,it highlights the need for advanced techniques like r... This letter evaluates the article by Gravina et al on ChatGPT’s potential in providing medical information for inflammatory bowel disease patients.While promising,it highlights the need for advanced techniques like reasoning+action and retrieval-augmented generation to improve accuracy and reliability.Emphasizing that simple question and answer testing is insufficient,it calls for more nuanced evaluation methods to truly gauge large language models’capabilities in clinical applications. 展开更多
关键词 Crohn’s disease Ulcerative colitis Inflammatory bowel disease Chat generative pre-trained transformer Large language model Artificial intelligence
下载PDF
Construction and application of knowledge graph for grid dispatch fault handling based on pre-trained model
6
作者 Zhixiang Ji Xiaohui Wang +1 位作者 Jie Zhang Di Wu 《Global Energy Interconnection》 EI CSCD 2023年第4期493-504,共12页
With the construction of new power systems,the power grid has become extremely large,with an increasing proportion of new energy and AC/DC hybrid connections.The dynamic characteristics and fault patterns of the power... With the construction of new power systems,the power grid has become extremely large,with an increasing proportion of new energy and AC/DC hybrid connections.The dynamic characteristics and fault patterns of the power grid are complex;additionally,power grid control is difficult,operation risks are high,and the task of fault handling is arduous.Traditional power-grid fault handling relies primarily on human experience.The difference in and lack of knowledge reserve of control personnel restrict the accuracy and timeliness of fault handling.Therefore,this mode of operation is no longer suitable for the requirements of new systems.Based on the multi-source heterogeneous data of power grid dispatch,this paper proposes a joint entity–relationship extraction method for power-grid dispatch fault processing based on a pre-trained model,constructs a knowledge graph of power-grid dispatch fault processing and designs,and develops a fault-processing auxiliary decision-making system based on the knowledge graph.It was applied to study a provincial dispatch control center,and it effectively improved the accident processing ability and intelligent level of accident management and control of the power grid. 展开更多
关键词 Power-grid dispatch fault handling Knowledge graph pre-trained model Auxiliary decision-making
下载PDF
Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis
7
作者 Jieyu An Wan Mohd Nazmee Wan Zainon Binfen Ding 《Intelligent Automation & Soft Computing》 SCIE 2023年第8期1673-1689,共17页
Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modes,such as text and image,to accurately assess sentiment.However,conventional approaches that rely on... Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modes,such as text and image,to accurately assess sentiment.However,conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities.This limitation is attributed to their training on unimodal data,and necessitates the use of complex fusion mechanisms for sentiment analysis.In this study,we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method.Our approach harnesses the power of transfer learning by utilizing a vision-language pre-trained model to extract both visual and textual representations in a unified framework.We employ a Transformer architecture to integrate these representations,thereby enabling the capture of rich semantic infor-mation in image-text pairs.To further enhance the representation learning of these pairs,we introduce our proposed multimodal contrastive learning method,which leads to improved performance in sentiment analysis tasks.Our approach is evaluated through extensive experiments on two publicly accessible datasets,where we demonstrate its effectiveness.We achieve a significant improvement in sentiment analysis accuracy,indicating the supe-riority of our approach over existing techniques.These results highlight the potential of multimodal sentiment analysis and underscore the importance of considering the intrinsic semantic connections between modalities for accurate sentiment assessment. 展开更多
关键词 Multimodal sentiment analysis vision–language pre-trained model contrastive learning sentiment classification
下载PDF
A Classification–Detection Approach of COVID-19 Based on Chest X-ray and CT by Using Keras Pre-Trained Deep Learning Models 被引量:10
8
作者 Xing Deng Haijian Shao +2 位作者 Liang Shi Xia Wang Tongling Xie 《Computer Modeling in Engineering & Sciences》 SCIE EI 2020年第11期579-596,共18页
The Coronavirus Disease 2019(COVID-19)is wreaking havoc around the world,bring out that the enormous pressure on national health and medical staff systems.One of the most effective and critical steps in the fight agai... The Coronavirus Disease 2019(COVID-19)is wreaking havoc around the world,bring out that the enormous pressure on national health and medical staff systems.One of the most effective and critical steps in the fight against COVID-19,is to examine the patient’s lungs based on the Chest X-ray and CT generated by radiation imaging.In this paper,five keras-related deep learning models:ResNet50,InceptionResNetV2,Xception,transfer learning and pre-trained VGGNet16 is applied to formulate an classification-detection approaches of COVID-19.Two benchmark methods SVM(Support Vector Machine),CNN(Conventional Neural Networks)are provided to compare with the classification-detection approaches based on the performance indicators,i.e.,precision,recall,F1 scores,confusion matrix,classification accuracy and three types of AUC(Area Under Curve).The highest classification accuracy derived by classification-detection based on 5857 Chest X-rays and 767 Chest CTs are respectively 84%and 75%,which shows that the keras-related deep learning approaches facilitate accurate and effective COVID-19-assisted detection. 展开更多
关键词 COVID-19 detection deep learning transfer learning pre-trained models
下载PDF
y-Tuning: an efficient tuning paradigm for large-scale pre-trained models via label representation learning
9
作者 Yitao LIU Chenxin AN Xipeng QIU 《Frontiers of Computer Science》 SCIE EI CSCD 2024年第4期107-116,共10页
With current success of large-scale pre-trained models(PTMs),how efficiently adapting PTMs to downstream tasks has attracted tremendous attention,especially for PTMs with billions of parameters.Previous work focuses o... With current success of large-scale pre-trained models(PTMs),how efficiently adapting PTMs to downstream tasks has attracted tremendous attention,especially for PTMs with billions of parameters.Previous work focuses on designing parameter-efficient tuning paradigms but needs to save and compute the gradient of the whole computational graph.In this paper,we propose y-Tuning,an efficient yet effective paradigm to adapt frozen large-scale PTMs to specific downstream tasks.y-Tuning learns dense representations for labels y defined in a given task and aligns them to fixed feature representation.Without computing the gradients of text encoder at training phrase,y-Tuning is not only parameterefficient but also training-efficient.Experimental results show that for DeBERTaxxL with 1.6 billion parameters,y-Tuning achieves performance more than 96%of full fine-tuning on GLUE Benchmark with only 2%tunable parameters and much fewer training costs. 展开更多
关键词 pre-trained model lightweight fine-tuning paradigms label representation
原文传递
基于NLP的煤矿事故原因分类研究 被引量:5
10
作者 张江石 李泳暾 +3 位作者 冒香凝 胡馨月 潘雨 王梓伊 《中国安全科学学报》 CAS CSCD 北大核心 2023年第6期20-26,共7页
为有效提升分析和处理煤矿事故文本的效率,融合自然语言处理(NLP)技术与事故致因模型,构建一个自动化的事故原因分类框架。首先以事故致因“2-4”模型(24Model)为事故分类依据,分析87份煤矿事故调查报告,得到煤矿事故原因分类框架,构建... 为有效提升分析和处理煤矿事故文本的效率,融合自然语言处理(NLP)技术与事故致因模型,构建一个自动化的事故原因分类框架。首先以事故致因“2-4”模型(24Model)为事故分类依据,分析87份煤矿事故调查报告,得到煤矿事故原因分类框架,构建每类事故原因的语料库;然后利用NLP技术分别处理语料库中各类原因文本,将其用于训练fastText模型,自动识别事故原因文本并分类;最后对比分析fastText模型与TextCNN等其他3种经典模型的分类效果。结果表明:共得到21类事故原因和6684条训练语料,训练后的fastText模型对煤矿事故原因分类的识别正确率能够达到98.92%,综合性能优于其他3种分类模型。基于24Model和NLP技术开发的事故文本挖掘系统,能够快速分析处理事故文本信息,进一步细化事故调查报告中的原因,便于进行事故案例学习和统计分析。 展开更多
关键词 自然语言处理(nlp) 事故原因分类 “2-4”模型(24model) fastText 文本挖掘
下载PDF
Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey 被引量:9
11
作者 Xiao Wang Guangyao Chen +5 位作者 Guangwu Qian Pengcheng Gao Xiao-Yong Wei Yaowei Wang Yonghong Tian Wen Gao 《Machine Intelligence Research》 EI CSCD 2023年第4期447-482,共36页
With the urgent demand for generalized deep models,many pre-trained big models are proposed,such as bidirectional encoder representations(BERT),vision transformer(ViT),generative pre-trained transformers(GPT),etc.Insp... With the urgent demand for generalized deep models,many pre-trained big models are proposed,such as bidirectional encoder representations(BERT),vision transformer(ViT),generative pre-trained transformers(GPT),etc.Inspired by the success of these models in single domains(like computer vision and natural language processing),the multi-modal pre-trained big models have also drawn more and more attention in recent years.In this work,we give a comprehensive survey of these models and hope this paper could provide new insights and helps fresh researchers to track the most cutting-edge works.Specifically,we firstly introduce the background of multi-modal pre-training by reviewing the conventional deep learning,pre-training works in natural language process,computer vision,and speech.Then,we introduce the task definition,key challenges,and advantages of multi-modal pre-training models(MM-PTMs),and discuss the MM-PTMs with a focus on data,objectives,network architectures,and knowledge enhanced pre-training.After that,we introduce the downstream tasks used for the validation of large-scale MM-PTMs,including generative,classification,and regression tasks.We also give visualization and analysis of the model parameters and results on representative downstream tasks.Finally,we point out possible research directions for this topic that may benefit future works.In addition,we maintain a continuously updated paper list for large-scale pre-trained multi-modal big models:https://github.com/wangxiao5791509/MultiModal_BigModels_Survey. 展开更多
关键词 Multi-modal(MM) pre-trained model(PTM) information fusion representation learning deep learning
原文传递
AI背景下NLP模型在高校网络育人研究中的应用 被引量:2
12
作者 詹议 《科技创业月刊》 2023年第S01期138-141,共4页
随着人工智能技术快速发展,自然语言处理(NLP)模型在高校网络育人研究中发挥着越来越重要的作用。探讨AI背景下NLP模型在构建高校网络育人研究中的应用路径,通过介绍NLP模型和网络育人的概念及发展现状,分析NLP模型在高校网络育人中的... 随着人工智能技术快速发展,自然语言处理(NLP)模型在高校网络育人研究中发挥着越来越重要的作用。探讨AI背景下NLP模型在构建高校网络育人研究中的应用路径,通过介绍NLP模型和网络育人的概念及发展现状,分析NLP模型在高校网络育人中的应用场景与优势。探讨NLP模型在高校网络育人中的路径以及数据采集、模型训练、结果反馈等环节。并通过案例分析,验证了NLP模型在高校网络育人中的有效性和可行性。 展开更多
关键词 人工智能 nlp模型 网络育人
下载PDF
基于NLP模型的智能IVR语音呼叫系统设计 被引量:1
13
作者 韦国惠 王利超 +2 位作者 钟世文 黄绪荣 李姗珊 《单片机与嵌入式系统应用》 2023年第10期69-73,共5页
设计了一种基于NLP模型的智能IVR语音呼叫系统。在硬件方面,完成了无线呼叫器和显示终端设计。在软件方面,首先建立了多信道协议通信模式,运用优先级理念实现呼入语音接纳控制,然后搭建了IVR交互序列,将通信语音转入序列中,通过语音平... 设计了一种基于NLP模型的智能IVR语音呼叫系统。在硬件方面,完成了无线呼叫器和显示终端设计。在软件方面,首先建立了多信道协议通信模式,运用优先级理念实现呼入语音接纳控制,然后搭建了IVR交互序列,将通信语音转入序列中,通过语音平台、解析器、网页服务器进行交互处理。最后以引入对抗训练算法的NLP模型为核心设计了一种可用于智能呼叫问答的预训练模型,在交互过程中不断学习,搜索出与用户呼叫语音相对应的处理方案,并通过语音播报、提示灯闪烁等方式提醒工作人员处理语音呼叫问题。系统测试结果表明,在不同的呼叫距离下,系统语音呼叫响应延时始终低于1 ms,满足呼叫响应的实时要求。 展开更多
关键词 nlp模型 交互式语音响应 语音呼叫
下载PDF
Red Alarm for Pre-trained Models:Universal Vulnerability to Neuron-level Backdoor Attacks
14
作者 Zhengyan Zhang Guangxuan Xiao +6 位作者 Yongwei Li Tian Lv Fanchao Qi Zhiyuan Liu Yasheng Wang Xin Jiang Maosong Sun 《Machine Intelligence Research》 EI CSCD 2023年第2期180-193,共14页
The pre-training-then-fine-tuning paradigm has been widely used in deep learning.Due to the huge computation cost for pre-training,practitioners usually download pre-trained models from the Internet and fine-tune them... The pre-training-then-fine-tuning paradigm has been widely used in deep learning.Due to the huge computation cost for pre-training,practitioners usually download pre-trained models from the Internet and fine-tune them on downstream datasets,while the downloaded models may suffer backdoor attacks.Different from previous attacks aiming at a target task,we show that a backdoored pre-trained model can behave maliciously in various downstream tasks without foreknowing task information.Attackers can restrict the output representations(the values of output neurons)of trigger-embedded samples to arbitrary predefined values through additional training,namely neuron-level backdoor attack(NeuBA).Since fine-tuning has little effect on model parameters,the fine-tuned model will retain the backdoor functionality and predict a specific label for the samples embedded with the same trigger.To provoke multiple labels in a specific task,attackers can introduce several triggers with predefined contrastive values.In the experiments of both natural language processing(NLP)and computer vision(CV),we show that NeuBA can well control the predictions for trigger-embedded instances with different trigger designs.Our findings sound a red alarm for the wide use of pre-trained models.Finally,we apply several defense methods to NeuBA and find that model pruning is a promising technique to resist NeuBA by omitting backdoored neurons. 展开更多
关键词 pre-trained language models backdoor attacks transformers natural language processing(nlp) computer vision(CV)
原文传递
Vulnerability Detection of Ethereum Smart Contract Based on SolBERT-BiGRU-Attention Hybrid Neural Model
15
作者 Guangxia Xu Lei Liu Jingnan Dong 《Computer Modeling in Engineering & Sciences》 SCIE EI 2023年第10期903-922,共20页
In recent years,with the great success of pre-trained language models,the pre-trained BERT model has been gradually applied to the field of source code understanding.However,the time cost of training a language model ... In recent years,with the great success of pre-trained language models,the pre-trained BERT model has been gradually applied to the field of source code understanding.However,the time cost of training a language model from zero is very high,and how to transfer the pre-trained language model to the field of smart contract vulnerability detection is a hot research direction at present.In this paper,we propose a hybrid model to detect common vulnerabilities in smart contracts based on a lightweight pre-trained languagemodel BERT and connected to a bidirectional gate recurrent unitmodel.The downstream neural network adopts the bidirectional gate recurrent unit neural network model with a hierarchical attention mechanism to mine more semantic features contained in the source code of smart contracts by using their characteristics.Our experiments show that our proposed hybrid neural network model SolBERT-BiGRU-Attention is fitted by a large number of data samples with smart contract vulnerabilities,and it is found that compared with the existing methods,the accuracy of our model can reach 93.85%,and the Micro-F1 Score is 94.02%. 展开更多
关键词 Smart contract pre-trained language model deep learning recurrent neural network blockchain security
下载PDF
Robust Deep Learning Model for Black Fungus Detection Based on Gabor Filter and Transfer Learning
16
作者 Esraa Hassan Fatma M.Talaat +4 位作者 Samah Adel Samir Abdelrazek Ahsan Aziz Yunyoung Nam Nora El-Rashidy 《Computer Systems Science & Engineering》 SCIE EI 2023年第11期1507-1525,共19页
Black fungus is a rare and dangerous mycology that usually affects the brain and lungs and could be life-threatening in diabetic cases.Recently,some COVID-19 survivors,especially those with co-morbid diseases,have bee... Black fungus is a rare and dangerous mycology that usually affects the brain and lungs and could be life-threatening in diabetic cases.Recently,some COVID-19 survivors,especially those with co-morbid diseases,have been susceptible to black fungus.Therefore,recovered COVID-19 patients should seek medical support when they notice mucormycosis symptoms.This paper proposes a novel ensemble deep-learning model that includes three pre-trained models:reset(50),VGG(19),and Inception.Our approach is medically intuitive and efficient compared to the traditional deep learning models.An image dataset was aggregated from various resources and divided into two classes:a black fungus class and a skin infection class.To the best of our knowledge,our study is the first that is concerned with building black fungus detection models based on deep learning algorithms.The proposed approach can significantly improve the performance of the classification task and increase the generalization ability of such a binary classification task.According to the reported results,it has empirically achieved a sensitivity value of 0.9907,a specificity value of 0.9938,a precision value of 0.9938,and a negative predictive value of 0.9907. 展开更多
关键词 Black fungus COVID-19 Transfer learning pre-trained models medical image
下载PDF
A PERT-BiLSTM-Att Model for Online Public Opinion Text Sentiment Analysis
17
作者 Mingyong Li Zheng Jiang +1 位作者 Zongwei Zhao Longfei Ma 《Intelligent Automation & Soft Computing》 SCIE 2023年第8期2387-2406,共20页
As an essential category of public event management and control,sentiment analysis of online public opinion text plays a vital role in public opinion early warning,network rumor management,and netizens’person-ality p... As an essential category of public event management and control,sentiment analysis of online public opinion text plays a vital role in public opinion early warning,network rumor management,and netizens’person-ality portraits under massive public opinion data.The traditional sentiment analysis model is not sensitive to the location information of words,it is difficult to solve the problem of polysemy,and the learning representation ability of long and short sentences is very different,which leads to the low accuracy of sentiment classification.This paper proposes a sentiment analysis model PERT-BiLSTM-Att for public opinion text based on the pre-training model of the disordered language model,bidirectional long-term and short-term memory network and attention mechanism.The model first uses the PERT model pre-trained from the lexical location information of a large amount of corpus to process the text data and obtain the dynamic feature representation of the text.Then the semantic features are input into BiLSTM to learn context sequence information and enhance the model’s ability to represent long sequences.Finally,the attention mechanism is used to focus on the words that contribute more to the overall emotional tendency to make up for the lack of short text representation ability of the traditional model,and then the classification results are output through the fully connected network.The experimental results show that the classification accuracy of the model on NLPCC14 and weibo_senti_100k public data sets reach 88.56%and 97.05%,respectively,and the accuracy reaches 95.95%on the data set MDC22 composed of Meituan,Dianping and Ctrip comment.It proves that the model has a good effect on sentiment analysis of online public opinion texts on different platforms.The experimental results on different datasets verify the model’s effectiveness in applying sentiment analysis of texts.At the same time,the model has a strong generalization ability and can achieve good results for sentiment analysis of datasets in different fields. 展开更多
关键词 Natural language processing PERT pre-training model emotional analysis BiLSTM
下载PDF
Intelligent Deep Convolutional Neural Network Based Object DetectionModel for Visually Challenged People
18
作者 S.Kiruthika Devi Amani Abdulrahman Albraikan +3 位作者 Fahd N.Al-Wesabi Mohamed K.Nour Ahmed Ashour Anwer Mustafa Hilal 《Computer Systems Science & Engineering》 SCIE EI 2023年第9期3191-3207,共17页
Artificial Intelligence(AI)and Computer Vision(CV)advancements have led to many useful methodologies in recent years,particularly to help visually-challenged people.Object detection includes a variety of challenges,fo... Artificial Intelligence(AI)and Computer Vision(CV)advancements have led to many useful methodologies in recent years,particularly to help visually-challenged people.Object detection includes a variety of challenges,for example,handlingmultiple class images,images that get augmented when captured by a camera and so on.The test images include all these variants as well.These detection models alert them about their surroundings when they want to walk independently.This study compares four CNN-based pre-trainedmodels:ResidualNetwork(ResNet-50),Inception v3,DenseConvolutional Network(DenseNet-121),and SqueezeNet,predominantly used in image recognition applications.Based on the analysis performed on these test images,the study infers that Inception V3 outperformed other pre-trained models in terms of accuracy and speed.To further improve the performance of the Inception v3 model,the thermal exchange optimization(TEO)algorithm is applied to tune the hyperparameters(number of epochs,batch size,and learning rate)showing the novelty of the work.Better accuracy was achieved owing to the inclusion of an auxiliary classifier as a regularizer,hyperparameter optimizer,and factorization approach.Additionally,Inception V3 can handle images of different sizes.This makes Inception V3 the optimum model for assisting visually challenged people in real-world communication when integrated with Internet of Things(IoT)-based devices. 展开更多
关键词 pre-trained models object detection visually challenged people deep learning Inception V3 DenseNet-121
下载PDF
Efficient Grad-Cam-Based Model for COVID-19 Classification and Detection
19
作者 Saleh Albahli Ghulam Nabi Ahmad Hassan Yar 《Computer Systems Science & Engineering》 SCIE EI 2023年第3期2743-2757,共15页
Corona Virus(COVID-19)is a novel virus that crossed an animal-human barrier and emerged in Wuhan,China.Until now it has affected more than 119 million people.Detection of COVID-19 is a critical task and due to a large... Corona Virus(COVID-19)is a novel virus that crossed an animal-human barrier and emerged in Wuhan,China.Until now it has affected more than 119 million people.Detection of COVID-19 is a critical task and due to a large number of patients,a shortage of doctors has occurred for its detection.In this paper,a model has been suggested that not only detects the COVID-19 using X-ray and CT-Scan images but also shows the affected areas.Three classes have been defined;COVID-19,normal,and Pneumonia for X-ray images.For CT-Scan images,2 classes have been defined COVID-19 and non-COVID-19.For classi-fication purposes,pretrained models like ResNet50,VGG-16,and VGG19 have been used with some tuning.For detecting the affected areas Gradient-weighted Class Activation Mapping(GradCam)has been used.As the X-rays and ct images are taken at different intensities,so the contrast limited adaptive histogram equalization(CLAHE)has been applied to see the effect on the training of the models.As a result of these experiments,we achieved a maximum validation accuracy of 88.10%with a training accuracy of 88.48%for CT-Scan images using the ResNet50 model.While for X-ray images we achieved a maximum validation accuracy of 97.31%with a training accuracy of 95.64%using the VGG16 model. 展开更多
关键词 Convolutional neural networks(CNN) COVID-19 pre-trained models CLAHE Grad-Cam X-RAY data augmentation
下载PDF
基于图神经网络的法律文本共指消解模型
20
作者 刘冬 张晓 《无线电通信技术》 北大核心 2024年第3期587-596,共10页
共指消解是确定上下文中的代词或名词短语所指的具体对象或实体,是自然语言处理(Natural Language Processing,NLP)的基本任务之一,对理解文本语义具有重要意义。现有的方法主要集中在一般领域的代词、所有格和名词短语的解析上,针对法... 共指消解是确定上下文中的代词或名词短语所指的具体对象或实体,是自然语言处理(Natural Language Processing,NLP)的基本任务之一,对理解文本语义具有重要意义。现有的方法主要集中在一般领域的代词、所有格和名词短语的解析上,针对法律领域的研究较少。为了更好地学习法律文本中的知识,并消除共同指代现象,提出一种基于图神经网络的法律文本共指消解模型(Graph Neural Network for Coreference Resolution,CR-GNN)。所提CR-GNN可以促进法律文本挖掘中的一系列后续任务。利用预训练语言模型和双向门控循环单元(Bidirectional Gate Recurrent Unit,BiGRU)对法律文本进行编码;使用基于元任务的动态图卷积网络(Meta Dynamic Graph Convolutional Network,MDGCN)整合实体之间的引用关系;使用前馈神经网络(Feed-Forward Neural Network,FFNN)和Biaffine模型为候选对进行加权评估。CR-GNN可以有效识别实体之间的引用关系,并对实体依赖关系进行建模。在法庭记录文件数据集上进行大量实验,结果表明所提CR-GNN模型达到89.76%的F1分数,均高于现有基准模型。 展开更多
关键词 自然语言处理 共指消解 法律文本 预训练语言模型 图神经网络
下载PDF
上一页 1 2 7 下一页 到第
使用帮助 返回顶部