Journal Articles (393 articles found)
1. Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
Authors: R. Sujatha, K. Nimala. Computers, Materials & Continua (SCIE, EI), 2024(2): 1669-1686 (18 pages)
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than tasks such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model is taken up here to classify sentences from a conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, and a hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models, and the ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
Keywords: Bidirectional Encoder Representations from Transformers (BERT); conversation; ensemble model; fine-tuning; Generalized Autoregressive Pretraining for Language Understanding (XLNet); Generative Pre-Trained Transformer (GPT); hyperparameter tuning; natural language processing; Robustly Optimized BERT Pretraining Approach (RoBERTa); sentence classification; transformer models
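As a rough illustration of the ensemble idea in this abstract, the sketch below averages class probabilities from several fine-tuned transformers using the Hugging Face transformers API. The checkpoint paths and the simple averaging rule are assumptions for demonstration, not the paper's exact EPLM-HT configuration.

```python
# Minimal sketch: average class probabilities from several fine-tuned classifiers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["information", "question", "directive", "commission"]
CHECKPOINTS = ["./bert-conv", "./roberta-conv", "./distilbert-conv"]  # hypothetical fine-tuned checkpoints

def ensemble_predict(sentence: str) -> str:
    probs = []
    for ckpt in CHECKPOINTS:
        tok = AutoTokenizer.from_pretrained(ckpt)
        model = AutoModelForSequenceClassification.from_pretrained(ckpt)
        model.eval()
        inputs = tok(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        probs.append(torch.softmax(logits, dim=-1))
    avg = torch.stack(probs).mean(dim=0)        # average class probabilities across models
    return LABELS[int(avg.argmax(dim=-1))]

print(ensemble_predict("Could you send me the report?"))
```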
2. A Survey on Chinese Sign Language Recognition: From Traditional Methods to Artificial Intelligence
Authors: Xianwei Jiang, Yanqiong Zhang, Juan Lei, Yudong Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024(7): 1-40 (40 pages)
Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) in the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet), and various deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have also been integrated and applied in specific hybrid models and customized identification methods. Sign language data collection includes acquiring data from data gloves, data sensors (such as Kinect, Leap Motion, etc.), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest among scholars. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Additionally, suitable datasets and evaluation criteria are also worth pursuing.
Keywords: Chinese Sign Language Recognition; deep neural networks; artificial intelligence; transfer learning; hybrid network models
3. Adapter Based on Pre-Trained Language Models for Classification of Medical Text
Author: Quan Li. Journal of Electronic Research and Application, 2024(3): 129-134 (6 pages)
We present an approach to classify medical text at the sentence level automatically. Given the inherent complexity of medical text classification, we employ adapters based on pre-trained language models to extract information from medical text, facilitating more accurate classification while minimizing the number of trainable parameters. Extensive experiments conducted on various datasets demonstrate the effectiveness of our approach.
Keywords: classification of medical text; adapter; pre-trained language model
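A minimal sketch of the adapter idea this abstract relies on: a small bottleneck module with a residual connection that can be inserted into a frozen pre-trained encoder so that only the adapter's parameters are trained. The dimensions below are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # project down
        self.up = nn.Linear(bottleneck, hidden_size)    # project back up
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))      # residual connection

x = torch.randn(2, 16, 768)       # (batch, seq_len, hidden) from a frozen encoder
print(Adapter()(x).shape)         # torch.Size([2, 16, 768])
```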
4. Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis
Authors: Jieyu An, Wan Mohd Nazmee Wan Zainon, Binfen Ding. Intelligent Automation & Soft Computing (SCIE), 2023(8): 1673-1689 (17 pages)
Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modes, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation is attributed to their training on unimodal data, and it necessitates the use of complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning by utilizing a vision-language pre-trained model to extract both visual and textual representations in a unified framework. We employ a Transformer architecture to integrate these representations, thereby enabling the capture of rich semantic information in image-text pairs. To further enhance the representation learning of these pairs, we introduce our proposed multimodal contrastive learning method, which leads to improved performance in sentiment analysis tasks. Our approach is evaluated through extensive experiments on two publicly accessible datasets, where we demonstrate its effectiveness. We achieve a significant improvement in sentiment analysis accuracy, indicating the superiority of our approach over existing techniques. These results highlight the potential of multimodal sentiment analysis and underscore the importance of considering the intrinsic semantic connections between modalities for accurate sentiment assessment.
Keywords: multimodal sentiment analysis; vision-language pre-trained model; contrastive learning; sentiment classification
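One common way to realize the multimodal contrastive learning this abstract proposes is a symmetric InfoNCE loss over matched image-text pairs, sketched below; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature      # pairwise similarities
    targets = torch.arange(img.size(0))       # matched pairs lie on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```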
5. Incorporating Linguistic Rules in Statistical Chinese Language Model for Pinyin-to-character Conversion (Cited by 2)
Authors: 刘秉权 (Liu Bingquan), Wang Xiaolong, Wang Yuying. High Technology Letters (EI, CAS), 2001(2): 8-13 (6 pages)
An N-gram Chinese language model incorporating linguistic rules is presented. By constructing an element lattice, rule information is incorporated into the statistical frame. To facilitate the hybrid modeling, novel methods such as MI-based rule evaluation, weighted rule quantification, and element-based n-gram probability approximation are presented. A dynamic Viterbi algorithm is adopted to search for the best path in the lattice. To strengthen the model, transformation-based error-driven rule learning is adopted. Applying the proposed model to Chinese Pinyin-to-character conversion, high performance has been achieved in accuracy, flexibility, and robustness simultaneously. Tests show the correct rate reaches 94.81%, compared with 90.53% using the bi-gram Markov model alone. Many long-distance dependencies and recursions in language can be processed effectively.
Keywords: Chinese Pinyin-to-character conversion; rule-based language model; N-gram language model; hybrid language model; element lattice; transformation-based error-driven learning
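The lattice search this abstract describes can be illustrated with a toy bigram Viterbi decoder over pinyin candidates; the candidate lists and probabilities below are invented for demonstration and stand in for the paper's rule-enhanced scores.

```python
import math

# pinyin syllable -> candidate characters (toy lattice)
candidates = {"zhong": ["中", "种"], "guo": ["国", "过"]}
# toy bigram probabilities, including sentence start "<s>"
bigram = {("<s>", "中"): 0.6, ("<s>", "种"): 0.4,
          ("中", "国"): 0.9, ("中", "过"): 0.1,
          ("种", "国"): 0.2, ("种", "过"): 0.8}

def viterbi(pinyins):
    # beams: last character -> (log-probability, decoded string)
    beams = {"<s>": (0.0, "")}
    for syl in pinyins:
        new_beams = {}
        for char in candidates[syl]:
            prev, (lp, text) = max(
                beams.items(),
                key=lambda kv: kv[1][0] + math.log(bigram.get((kv[0], char), 1e-6)))
            new_beams[char] = (lp + math.log(bigram.get((prev, char), 1e-6)),
                               text + char)
        beams = new_beams
    return max(beams.values(), key=lambda v: v[0])[1]

print(viterbi(["zhong", "guo"]))  # -> 中国
```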
6. Language Interaction and the Influence of the Chinese Language
Author: He Xiaoyong. 《学术界》 (Academics) (CSSCI, PKU Core), 2015(9): 303-307 (5 pages)
It is necessary that each era has its own value orientation of language civilization, which forms the motivation, model, and pragmatic hypothesis. For the Chinese language, this is Chinese philosophical semantics. Language and civilization integrate into each other, so when analyzing linguistics, we shall separate the lexeme on the basis of context. Pragmatics establishes the context for the Chinese language, the basis of which is boosting the education of the Chinese language. As a result, pragmatics is not only a branch of linguistics but also the construction of popular linguistics. The Chinese language course is only the concrete implementation of pragmatics and the basic project for the language context in a popular and globalized style.
Keywords: linguistics; interaction; Chinese language; Chinese philosophy; language education; pragmatics; semantics; context
7. Six-Writings multimodal processing with pictophonetic coding to enhance Chinese language models
Authors: Li Weigang, Mayara Chew Marinho, Denise Leyi Li, Vitor Vasconcelos de Oliveira. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024(1): 84-105 (22 pages)
While large language models (LLMs) have made significant strides in natural language processing (NLP), they continue to face challenges in adequately addressing the intricacies of the Chinese language in certain scenarios. We propose a framework called Six-Writings multimodal processing (SWMP) to enable direct integration of Chinese NLP (CNLP) with morphological and semantic elements. The first part of SWMP, known as Six-Writings pictophonetic coding (SWPC), is introduced with a suitable level of granularity for radicals and components, enabling effective representation of Chinese characters and words. We conduct several experimental scenarios, including the following: (1) We establish an experimental database consisting of images and SWPC for Chinese characters, enabling dual-mode processing and matrix generation for CNLP. (2) We characterize various generative modes of Chinese words, such as thousands of Chinese idioms, used as question-and-answer (Q&A) prompt functions, facilitating analogies by SWPC. The experiments achieve 100% accuracy in answering all questions in the Chinese morphological dataset (CA8-Mor-10177). (3) A fine-tuning mechanism is proposed to refine word embedding results using SWPC, resulting in an average relative error of ≤25% for 39.37% of the questions in the Chinese wOrd Similarity dataset (COS960). The results demonstrate that SWMP/SWPC methods effectively capture the distinctive features of Chinese and offer a promising mechanism to enhance CNLP with better efficiency.
Keywords: Chinese language model; Chinese natural language processing (CNLP); generative language model; multimodal processing; Six-Writings
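A toy illustration of the pictophonetic-coding idea behind SWPC: mapping each character to a (semantic radical, phonetic component) pair via a hand-made table. The real SWPC scheme is far richer; the table below is purely an assumption for demonstration.

```python
# Hypothetical component table: character -> (semantic radical, phonetic component)
COMPONENTS = {
    "妈": ("女", "马"),   # "mother": woman radical + phonetic "ma"
    "吗": ("口", "马"),   # question particle: mouth radical + phonetic "ma"
    "河": ("氵", "可"),   # "river": water radical + phonetic "he/ke"
}

def swpc_code(word: str):
    # fall back to the character itself when no decomposition is known
    return [COMPONENTS.get(ch, (ch, ch)) for ch in word]

print(swpc_code("妈吗"))  # [('女', '马'), ('口', '马')] -> shared phonetic component
```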
"Chinese Character- Chinese Language" the Regional Linguistic System. Developing Process and Current Situation
8
Author: Zhang Lu. 《学术界》 (Academics) (CSSCI, PKU Core), 2015(2): 303-307 (5 pages)
Linguistic civilization orientation is a necessity for all eras, with which the motivation, viable model, and pragmatic hypothesis of a language can be formed. "Chinese character - Chinese language" takes on different cultures in different regions, embodying the integration of language and civilization. As a result, in order to analyze linguistics, it is necessary for us to compare the Chinese language at home and abroad, and it is the linguistic construction based on the public. The modern Chinese language inherits the cultural connotation of Chinese linguistics, answers the regional hypothesis, and establishes a new linguistic system for the Chinese language in different regions.
Keywords: language system; China; Chinese language; cultural connotation; linguistics; domestic and overseas; characters
9. Chinese Language Teacher Professional Growth: A Case Study
Authors: Yufei Guo, Wenjing Wang, Huiwen Li. 《汉语教学方法与技术》 (Chinese Language Teaching Methodology and Technology), 2019(2): 65-76 (12 pages)
Chinese language teachers grow with certain characteristics in their professional development. Knowing these characteristics can reveal a teacher's developmental needs, which can inform both teachers and teacher development facilitators. This case study examines the professional development of one Chinese language teacher who works in a high school in the United States. The Five-Stage Theory is employed to direct the examination of the teacher's growing path. Findings cover the challenges, efforts to cope with the changes, successes, and failures.
Keywords: Five-Stage model; professional development; Chinese language teacher; development characteristics
10. A Case Study on the Pattern of Classroom Discourse in Teaching Chinese as a Foreign Language
Author: Liqing Li. Journal of Contemporary Educational Research, 2021(10): 172-177 (6 pages)
Based on the IRF (initiation, response, and feedback) classroom discourse structure model proposed by Sinclair and Coulthard, this research analyzes the actual corpus of Chinese classroom teaching in Thailand, focusing on the structural model of teacher-student communication discourse, mainly from two aspects of teachers' feedback. On the one hand, it investigates whether IRF is fully applicable to Chinese classroom teaching and whether there are special situations. On the other hand, it attempts to summarize the discourse structure model of Chinese classroom teaching and explores the application of the research results in helping Chinese teachers improve their teaching quality, in the hope that constructive suggestions can be proposed for teaching Chinese as a foreign language.
Keywords: IRF model; classroom discourse structure; teachers' feedback; Chinese as a foreign language classroom
11. Improving Extraction of Chinese Open Relations Using Pre-trained Language Model and Knowledge Enhancement
Authors: Chaojie Wen, Xudong Jia, Tao Chen. Data Intelligence (EI), 2023(4): 962-989 (28 pages)
Open Relation Extraction (ORE) is the task of extracting semantic relations from a text document. Current ORE systems have significantly improved their efficiency in obtaining Chinese relations compared with conventional systems, which heavily depend on feature engineering or syntactic parsing. However, these ORE systems do not use robust neural networks such as pre-trained language models to take advantage of large-scale unstructured data effectively. In response to this issue, a new system entitled Chinese Open Relation Extraction with Knowledge Enhancement (CORE-KE) is presented in this paper. The CORE-KE system employs a pre-trained language model (with the support of a Bidirectional Long Short-Term Memory (BiLSTM) layer and a Masked Conditional Random Field (Masked CRF) layer) on unstructured data in order to improve Chinese open relation extraction. Entity descriptions in Wikidata and additional knowledge (in terms of triple facts) extracted from Chinese ORE datasets are used to fine-tune the pre-trained language model. In addition, syntactic features are further adopted in the training stage of the CORE-KE system for knowledge enhancement. Experimental results of the CORE-KE system on two large-scale datasets of open Chinese entities and relations demonstrate that the CORE-KE system is superior to other ORE systems. The F1-scores of the CORE-KE system on the two datasets show relative improvements of 20.1% and 1.3%, respectively, when compared with benchmark ORE systems. The source code is available at https://github.com/cjwen15/CORE-KE.
Keywords: Chinese open relation extraction; pre-trained language model; knowledge enhancement
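A skeleton of the encoder stack this abstract outlines (pre-trained encoder plus a BiLSTM tagging head) might look as follows; the masked CRF layer is omitted for brevity, and the layer sizes and tag count are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class RelationTagger(nn.Module):
    def __init__(self, plm="bert-base-chinese", num_tags=5, lstm_dim=256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(plm)
        self.bilstm = nn.LSTM(self.encoder.config.hidden_size, lstm_dim,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_dim, num_tags)   # per-token tag scores

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        h, _ = self.bilstm(h)
        return self.classifier(h)  # (batch, seq_len, num_tags); a CRF would decode these
```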
12. A Bit Progress on Word-Based Language Model
Authors: 陈勇 (Chen Yong), 陈国评 (Chen Guoping). Journal of Shanghai University (English Edition) (CAS), 2003(2): 148-155 (8 pages)
A good language model is essential to a postprocessing algorithm for recognition systems. In the past, researchers have presented various language models, such as character-based language models, word-based language models, syntactical-rule language models, and hybrid models. The word N-gram model is by far an effective and efficient model, but one has to address the problem of data sparseness in establishing the model. Katz and Kneser et al. respectively presented effective remedies to solve this challenging problem. In this study, we propose an improvement to their methods by incorporating Chinese-language-specific information, namely Chinese word class information, into the system.
Keywords: language model; pattern recognition; Chinese character recognition
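The data-sparseness remedies the abstract cites (Katz, Kneser-Ney) share a backoff-with-discounting structure, simplified in the toy sketch below; this is not the exact Katz formulation, and the corpus counts are invented.

```python
from collections import Counter

tokens = "我 爱 北京 我 爱 你".split()       # toy corpus, already segmented
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
D = 0.75                                     # absolute discount

def p_bigram(prev, word):
    if bigrams[(prev, word)] > 0:
        return (bigrams[(prev, word)] - D) / unigrams[prev]  # discounted ML estimate
    # back off to unigrams, scaled by the probability mass set aside by discounting
    reserved = D * sum(1 for (a, _) in bigrams if a == prev) / unigrams[prev]
    return reserved * unigrams[word] / len(tokens)

print(p_bigram("我", "爱"))    # seen bigram: discounted estimate
print(p_bigram("北京", "你"))  # unseen bigram: backoff to unigram distribution
```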
13. Topic Model Based Text Similarity Measure for Chinese Judgment Document
Authors: Yue Wang, Jidong Ge, Yemao Zhou, Yi Feng, Chuanyi Li, Zhongjin Li, Xiaoyu Zhou, Bin Luo. 《国际计算机前沿大会会议论文集》 (conference proceedings), 2017(2): 9-11 (3 pages)
In the recent informatization of Chinese courts, the huge number of law cases and judgment documents, which are digitally stored, has provided a good foundation for research on judicial big data and machine learning. In this situation, some tasks in Chinese courts can be automated or improved through machine learning, such as similar-document recommendation, workload evaluation based on the similarity of judgment documents, and prediction of possibly relevant statutes. In trying to achieve all of the above, and in view of the characteristics of Chinese judgment documents, we propose a topic-model-based approach to measure the text similarity of Chinese judgment documents, built on TF-IDF, Latent Dirichlet Allocation (LDA), Labeled Latent Dirichlet Allocation (LLDA), and other treatments. Combining the characteristics of Chinese judgment documents, we focus on the specific steps of the approach: the preprocessing of the corpus, the parameter choices for training, and the evaluation of the similarity measure results. Besides, implementing the approach for prediction of possible statutes and regarding the prediction accuracy as the evaluation metric, we designed experiments to demonstrate the reasonability of the decisions made in the design process and the high performance of our approach on text similarity measurement. The experiments also show the restrictions of our approach, which need to be addressed in future work.
Keywords: Chinese judgment documents; data science; machine learning; natural language processing; text similarity; TF-IDF; topic model; Latent Dirichlet Allocation; Labeled Latent Dirichlet Allocation
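The TF-IDF leg of such a similarity pipeline can be sketched as follows, with invented pre-segmented sentences standing in for judgment documents; LDA/LLDA topic features would be compared analogously.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# toy, whitespace-segmented stand-ins for judgment document texts
docs = ["被告人 盗窃 财物 判处 有期徒刑",
        "被告人 抢劫 财物 判处 有期徒刑",
        "原告 请求 解除 劳动 合同"]
tfidf = TfidfVectorizer().fit_transform(docs)
print(cosine_similarity(tfidf[0], tfidf[1:]))  # doc 0 is closer to doc 1 than to doc 2
```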
14. The Life Cycle of Knowledge in Big Language Models: A Survey
Authors: Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun. Machine Intelligence Research (EI, CSCD), 2024(2): 217-238 (22 pages)
Knowledge plays a critical role in artificial intelligence. Recently, the extensive success of pre-trained language models (PLMs) has raised significant attention about how knowledge can be acquired, maintained, updated, and used by language models. Despite the enormous amount of related studies, there is still a lack of a unified view of how knowledge circulates within language models throughout the learning, tuning, and application processes, which may prevent us from further understanding the connections between current progress or realizing existing limitations. In this survey, we revisit PLMs as knowledge-based systems by dividing the life cycle of knowledge in PLMs into five critical periods, and investigating how knowledge circulates when it is built, maintained, and used. To this end, we systematically review existing studies of each period of the knowledge life cycle, summarize the main challenges and current limitations, and discuss future directions.
Keywords: pre-trained language model; knowledge acquisition; knowledge representation; knowledge probing; knowledge editing; knowledge application
15. Automatic Keyphrase Extraction from Scientific Chinese Medical Abstracts Based on Character-Level Sequence Labeling (Cited by 3)
Authors: Liangping Ding, Zhixiong Zhang, Huan Liu, Jie Li, Gaihong Yu. Journal of Data and Information Science (CSCD), 2021(3): 35-57 (23 pages)
Purpose: Automatic keyphrase extraction (AKE) is an important task for grasping the main points of a text. In this paper, we aim to combine the benefits of the sequence labeling formulation and pre-trained language models to propose an automatic keyphrase extraction model for Chinese scientific research.
Design/methodology/approach: We regard AKE from Chinese text as a character-level sequence labeling task to avoid segmentation errors of Chinese tokenizers, and initialize our model with the pre-trained language model BERT, released by Google in 2018. We collect data from the Chinese Science Citation Database and construct a large-scale dataset from the medical domain, which contains 100,000 abstracts as the training set, 6,000 abstracts as the development set, and 3,094 abstracts as the test set. We use unsupervised keyphrase extraction methods including term frequency (TF), TF-IDF, and TextRank, and supervised machine learning methods including Conditional Random Fields (CRF), Bidirectional Long Short-Term Memory networks (BiLSTM), and BiLSTM-CRF as baselines. Experiments are designed to compare word-level and character-level sequence labeling approaches on supervised machine learning models and BERT-based models.
Findings: Compared with character-level BiLSTM-CRF, the best baseline model with an F1 score of 50.16%, our character-level sequence labeling model based on BERT obtains an F1 score of 59.80%, a 9.64% absolute improvement.
Research limitations: We only consider the automatic keyphrase extraction task rather than keyphrase generation, so only keyphrases that occur in the given text can be extracted. In addition, our proposed dataset is not suitable for dealing with nested keyphrases.
Practical implications: We make our character-level IOB-format dataset of Chinese Automatic Keyphrase Extraction from scientific Chinese medical abstracts (CAKE) publicly available for the benefit of the research community at https://github.com/possible1402/Dataset-For-Chinese-Medical-Keyphrase-Extraction.
Originality/value: By designing comparative experiments, our study demonstrates that the character-level formulation is more suitable for Chinese automatic keyphrase extraction under the general trend of pre-trained language models, and our proposed dataset provides a unified method for model evaluation that can promote the development of Chinese automatic keyphrase extraction.
Keywords: automatic keyphrase extraction; character-level sequence labeling; pre-trained language model; scientific Chinese medical abstracts
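The character-level IOB formulation this abstract argues for can be illustrated by tagging each character of an (invented) abstract with B/I/O labels marking keyphrase spans, sidestepping word segmentation entirely:

```python
def to_char_iob(text: str, keyphrases: list[str]):
    tags = ["O"] * len(text)
    for kp in keyphrases:
        start = text.find(kp)
        while start != -1:                      # tag every occurrence of the keyphrase
            tags[start] = "B"
            for i in range(start + 1, start + len(kp)):
                tags[i] = "I"
            start = text.find(kp, start + len(kp))
    return list(zip(text, tags))

# invented example sentence and keyphrase
print(to_char_iob("本文研究冠心病的诊断方法", ["冠心病"]))
# [('本','O'), ('文','O'), ('研','O'), ('究','O'), ('冠','B'), ('心','I'), ('病','I'), ...]
```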
16. EVA2.0: Investigating Open-domain Chinese Dialogue Systems with Large-scale Pre-training
Authors: Yuxian Gu, Jiaxin Wen, Hao Sun, Yi Song, Pei Ke, Chujie Zheng, Zheng Zhang, Jianzhu Yao, Lei Liu, Xiaoyan Zhu, Minlie Huang. Machine Intelligence Research (EI, CSCD), 2023(2): 207-219 (13 pages)
Large-scale pre-training has shown remarkable performance in building open-domain dialogue systems. However, previous works mainly focus on showing and evaluating the conversational performance of the released dialogue model, ignoring the discussion of some key factors towards a powerful human-like chatbot, especially in Chinese scenarios. In this paper, we conduct extensive experiments to investigate these under-explored factors, including data quality control, model architecture designs, training approaches, and decoding strategies. We propose EVA2.0, a large-scale pre-trained open-domain Chinese dialogue model with 2.8 billion parameters, and make our models and code publicly available. Automatic and human evaluations show that EVA2.0 significantly outperforms other open-source counterparts. We also discuss the limitations of this work by presenting some failure cases and pose some future research directions on large-scale Chinese open-domain dialogue systems.
Keywords: natural language processing; deep learning (DL); large-scale pre-training; dialogue systems; Chinese open-domain conversational model
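Among the decoding strategies such systems investigate, nucleus (top-p) sampling is a standard one; the generic sketch below is an illustration of the technique, not EVA2.0's actual implementation.

```python
import torch

def top_p_sample(logits: torch.Tensor, p: float = 0.9) -> int:
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, idx = probs.sort(descending=True)
    keep = sorted_probs.cumsum(0) - sorted_probs < p  # smallest prefix with mass >= p
    sorted_probs[~keep] = 0.0                          # drop the long tail
    sorted_probs /= sorted_probs.sum()                 # renormalize kept mass
    return idx[torch.multinomial(sorted_probs, 1)].item()

print(top_p_sample(torch.randn(50)))  # sampled token id from a toy vocabulary
```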
17. Unsupervised statistical text simplification using pre-trained language modeling for initialization
Authors: Jipeng Qiang, Feng Zhang, Yun Li, Yunhao Yuan, Yi Zhu, Xindong Wu. Frontiers of Computer Science (SCIE, EI, CSCD), 2023(1): 81-90 (10 pages)
Unsupervised text simplification has attracted much attention due to the scarcity of high-quality parallel text simplification corpora. Recently, an unsupervised statistical text simplification system based on phrase-based machine translation (UnsupPBMT) achieved good performance; it initializes the phrase tables using similar words obtained by word embedding modeling. Since word embedding modeling only considers the relevance between words, the phrase table in UnsupPBMT contains many dissimilar words. In this paper, we propose an unsupervised statistical text simplification that uses the pre-trained language model BERT for initialization. Specifically, we use BERT as a general linguistic knowledge base for predicting similar words. Experimental results show that our method outperforms the state-of-the-art unsupervised text simplification methods on three benchmarks, and even outperforms some supervised baselines.
Keywords: text simplification; pre-trained language modeling; BERT; word embeddings
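Using BERT as a masked LM to propose similar words, as this abstract describes for phrase-table initialization, can be approximated with the standard transformers fill-mask pipeline; the example sentence and model choice are assumptions, not the paper's setup.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
# ask BERT for substitute candidates at the masked position
for cand in fill("The medication may [MASK] drowsiness.", top_k=3):
    print(cand["token_str"], round(cand["score"], 3))  # e.g., cause, induce, ...
```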
18. Evaluation on ChatGPT for Chinese Language Understanding
Authors: Linhan Li, Huaping Zhang, Chunjin Li, Haowen You, Wenyao Cui. Data Intelligence (EI), 2023(4): 885-903 (19 pages)
ChatGPT has attracted extensive attention from academia and industry. This paper aims to evaluate ChatGPT's Chinese language understanding capability on 6 tasks using 11 datasets. Experiments indicate that ChatGPT achieved competitive results in sentiment analysis, summarization, and reading comprehension in Chinese, while it is prone to factual errors in closed-book QA. Further, on two more difficult Chinese understanding tasks, namely idiom fill-in-the-blank and cant understanding, we found that a simple chain-of-thought prompt can improve the accuracy of ChatGPT in complex reasoning. This paper further analyses the possible risks of using ChatGPT based on the results. Finally, we briefly describe the research and development progress of our ChatBIT.
Keywords: language model; ChatGPT; ChatBIT; Chinese language understanding; artificial intelligence
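The chain-of-thought prompting this abstract reports helping on idiom fill-in-the-blank might look like the sketch below; the wording and the example question are assumptions, not the paper's actual prompts.

```python
# Hypothetical idiom fill-in-the-blank question
idiom_question = "成语填空：画蛇添___"

# A simple chain-of-thought wrapper: ask the model to reason step by step
# (recall the idiom's origin and meaning, then fill the blank) before answering.
cot_prompt = (
    f"{idiom_question}\n"
    "请一步一步思考：先回忆这个成语的出处和含义，"
    "再确定缺失的字，最后给出完整成语。"
)
print(cot_prompt)  # sent to the model in place of the bare question
```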
19. Vulnerability Detection of Ethereum Smart Contract Based on SolBERT-BiGRU-Attention Hybrid Neural Model
Authors: Guangxia Xu, Lei Liu, Jingnan Dong. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023(10): 903-922 (20 pages)
In recent years, with the great success of pre-trained language models, the pre-trained BERT model has been gradually applied to the field of source code understanding. However, the time cost of training a language model from scratch is very high, and how to transfer the pre-trained language model to the field of smart contract vulnerability detection is a hot research direction at present. In this paper, we propose a hybrid model to detect common vulnerabilities in smart contracts, based on a lightweight pre-trained language model BERT connected to a bidirectional gated recurrent unit model. The downstream neural network adopts a bidirectional gated recurrent unit with a hierarchical attention mechanism to mine the semantic features contained in the source code of smart contracts. Our experiments show that our proposed hybrid neural network model, SolBERT-BiGRU-Attention, fitted on a large number of data samples with smart contract vulnerabilities, reaches an accuracy of 93.85% and a Micro-F1 score of 94.02%, compared with existing methods.
Keywords: smart contract; pre-trained language model; deep learning; recurrent neural network; blockchain security
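A skeleton of the downstream classifier this abstract outlines (a BiGRU with attention pooling over encoder features, then a linear head) might look as follows; the sizes and number of vulnerability classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiGRUAttention(nn.Module):
    def __init__(self, hidden=768, gru_dim=256, num_classes=6):
        super().__init__()
        self.bigru = nn.GRU(hidden, gru_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * gru_dim, 1)      # scores one weight per position
        self.head = nn.Linear(2 * gru_dim, num_classes)

    def forward(self, x):                           # x: (batch, seq, hidden) from the encoder
        h, _ = self.bigru(x)
        w = torch.softmax(self.attn(h), dim=1)      # attention weights over the sequence
        pooled = (w * h).sum(dim=1)                 # attention-weighted pooling
        return self.head(pooled)

print(BiGRUAttention()(torch.randn(2, 128, 768)).shape)  # torch.Size([2, 6])
```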
20. Vision Enhanced Generative Pre-trained Language Model for Multimodal Sentence Summarization
Authors: Liqiang Jing, Yiren Li, Junhao Xu, Yongcan Yu, Pei Shen, Xuemeng Song. Machine Intelligence Research (EI, CSCD), 2023(2): 289-298 (10 pages)
Multimodal sentence summarization (MMSS) is a new yet challenging task that aims to generate a concise summary of a long sentence and its corresponding image. Although existing methods have achieved promising success in MMSS, they overlook the powerful generation ability of generative pre-trained language models (GPLMs), which have been shown to be effective in many text generation tasks. To fill this research gap, we propose to use GPLMs to promote the performance of MMSS. Notably, adopting GPLMs to solve MMSS inevitably faces two challenges: 1) What fusion strategy should we use to inject visual information into GPLMs properly? 2) How do we keep the GPLM's generation ability intact to the utmost extent when the visual feature is injected into the GPLM? To address these two challenges, we propose a vision-enhanced generative pre-trained language model for MMSS, dubbed Vision-GPLM. In Vision-GPLM, we obtain features of the visual and textual modalities with two separate encoders and utilize a text decoder to produce a summary. In particular, we utilize multi-head attention to fuse the features extracted from the visual and textual modalities, injecting the visual feature into the GPLM. Meanwhile, we train Vision-GPLM in two stages: a vision-oriented pre-training stage and a fine-tuning stage. In the vision-oriented pre-training stage, we train only the visual encoder, using the masked language model task while the other components are frozen, aiming to obtain homogeneous representations of text and image. In the fine-tuning stage, we train all the components of Vision-GPLM on the MMSS task. Extensive experiments on a public MMSS dataset verify the superiority of our model over existing baselines.
Keywords: multimodal sentence summarization (MMSS); generative pre-trained language model (GPLM); natural language generation; deep learning; artificial intelligence
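The multi-head-attention fusion this abstract describes can be sketched as text features cross-attending to image patch features; the dimensions and single-layer setup below are simplifications, not Vision-GPLM's actual architecture.

```python
import torch
import torch.nn as nn

text = torch.randn(2, 20, 512)    # token features from the text encoder
image = torch.randn(2, 49, 512)   # patch features from the visual encoder
fusion = nn.MultiheadAttention(embed_dim=512, num_heads=8, batch_first=True)
fused, _ = fusion(query=text, key=image, value=image)  # text attends to image
print(fused.shape)                # torch.Size([2, 20, 512]) -> fed to the decoder
```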