Journal Articles
433 articles found
Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
1
Authors: R. Sujatha, K. Nimala. Computers, Materials & Continua, SCIE EI, 2024, Issue 2, pp. 1669-1686 (18 pages)
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder for Representation of Transformer (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus. A hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT and XLNet transformer models. The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
Keywords: Bidirectional Encoder for Representation of Transformer; conversation; ensemble model; fine-tuning; generalized autoregressive pretraining for language understanding; generative pre-trained transformer; hyperparameter tuning; natural language processing; robustly optimized BERT pretraining approach; sentence classification; transformer models
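The ensemble idea above (combining the predictions of several fine-tuned transformers) can be sketched in miniature. This is a hypothetical simplification, not the EPLM-HT code: the real inputs would be per-class probabilities from the fine-tuned BERT, RoBERTa, GPT, DistilBERT and XLNet models, whereas here they are dummy numbers.

```python
# Soft-voting ensemble sketch: each model emits per-class probabilities for a
# sentence; the ensemble averages them and picks the argmax label.
LABELS = ["information", "question", "directive", "commission"]

def ensemble_predict(per_model_probs):
    """Average class probabilities across models and return the top label."""
    n_models = len(per_model_probs)
    n_classes = len(per_model_probs[0])
    avg = [sum(p[c] for p in per_model_probs) / n_models for c in range(n_classes)]
    return LABELS[max(range(n_classes), key=avg.__getitem__)]

# Dummy outputs standing in for BERT, RoBERTa, GPT, DistilBERT and XLNet:
probs = [
    [0.10, 0.70, 0.10, 0.10],
    [0.20, 0.55, 0.15, 0.10],
    [0.25, 0.40, 0.20, 0.15],
    [0.05, 0.80, 0.10, 0.05],
    [0.15, 0.60, 0.15, 0.10],
]
print(ensemble_predict(probs))  # question
```

Soft voting is one common choice; majority voting over each model's argmax label is an equally simple alternative.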
A Survey on Chinese Sign Language Recognition:From Traditional Methods to Artificial Intelligence
2
Authors: Xianwei Jiang, Yanqiong Zhang, Juan Lei, Yudong Zhang. Computer Modeling in Engineering & Sciences, SCIE EI, 2024, Issue 7, pp. 1-40 (40 pages)
Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) in the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet) and various deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have also been integrated and applied in specific hybrid models and customized identification methods. Sign language data collection includes acquiring data from data gloves, data sensors (such as Kinect, Leap Motion, etc.), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest among scholars. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Additionally, suitable datasets and evaluation criteria are also worth pursuing.
Keywords: Chinese Sign Language Recognition; deep neural networks; artificial intelligence; transfer learning; hybrid network models
Adapter Based on Pre-Trained Language Models for Classification of Medical Text
3
Author: Quan Li. Journal of Electronic Research and Application, 2024, Issue 3, pp. 129-134 (6 pages)
We present an approach to classify medical text at the sentence level automatically. Given the inherent complexity of medical text classification, we employ adapters based on pre-trained language models to extract information from medical text, facilitating more accurate classification while minimizing the number of trainable parameters. Extensive experiments conducted on various datasets demonstrate the effectiveness of our approach.
Keywords: classification of medical text; adapter; pre-trained language model
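The adapter technique this abstract relies on can be illustrated with a toy bottleneck module. This is a hedged sketch of the general adapter pattern (down-project, nonlinearity, up-project, residual connection), not the author's implementation; all shapes and numbers are invented.

```python
# Adapter sketch: a small bottleneck network with a residual connection,
# inserted into a frozen pre-trained model so that only the adapter's few
# parameters (2 * d * r instead of the full model) are trained.
def adapter(x, w_down, w_up):
    """x: hidden vector (len d); w_down: d x r matrix; w_up: r x d matrix."""
    # down-project then ReLU
    h = [max(0.0, sum(x[i] * w_down[i][j] for i in range(len(x))))
         for j in range(len(w_down[0]))]
    # up-project back to the hidden size
    up = [sum(h[j] * w_up[j][i] for j in range(len(h))) for i in range(len(x))]
    # residual connection keeps the frozen model's representation intact
    return [xi + ui for xi, ui in zip(x, up)]

x = [1.0, -2.0, 0.5, 3.0]           # hidden state from the frozen encoder
d, r = len(x), 2                    # bottleneck r << d limits trainable params
zero_down = [[0.0] * r for _ in range(d)]
zero_up = [[0.0] * d for _ in range(r)]
print(adapter(x, zero_down, zero_up))  # zero-init adapter acts as identity
```

Zero-initializing the up-projection is a common trick so that training starts from the unmodified pre-trained representation.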
Evaluating the role of large language models in inflammatory bowel disease patient information
4
Authors: Eun Jeong Gong, Chang Seok Bang. World Journal of Gastroenterology, SCIE CAS, 2024, Issue 29, pp. 3538-3540 (3 pages)
This letter evaluates the article by Gravina et al. on ChatGPT's potential in providing medical information for inflammatory bowel disease patients. While promising, it highlights the need for advanced techniques like reasoning+action and retrieval-augmented generation to improve accuracy and reliability. Emphasizing that simple question-and-answer testing is insufficient, it calls for more nuanced evaluation methods to truly gauge large language models' capabilities in clinical applications.
Keywords: Crohn's disease; ulcerative colitis; inflammatory bowel disease; chat generative pre-trained transformer; large language model; artificial intelligence
Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis
5
Authors: Jieyu An, Wan Mohd Nazmee Wan Zainon, Binfen Ding. Intelligent Automation & Soft Computing, SCIE, 2023, Issue 8, pp. 1673-1689 (17 pages)
Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modes, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation is attributed to their training on unimodal data, and it necessitates the use of complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning by utilizing a vision-language pre-trained model to extract both visual and textual representations in a unified framework. We employ a Transformer architecture to integrate these representations, thereby enabling the capture of rich semantic information in image-text pairs. To further enhance the representation learning of these pairs, we introduce our proposed multimodal contrastive learning method, which leads to improved performance in sentiment analysis tasks. Our approach is evaluated through extensive experiments on two publicly accessible datasets, where we demonstrate its effectiveness. We achieve a significant improvement in sentiment analysis accuracy, indicating the superiority of our approach over existing techniques. These results highlight the potential of multimodal sentiment analysis and underscore the importance of considering the intrinsic semantic connections between modalities for accurate sentiment assessment.
Keywords: multimodal sentiment analysis; vision-language pre-trained model; contrastive learning; sentiment classification
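A common way to realize the contrastive objective described above is an InfoNCE-style loss over a batch of image-text pairs; the paper's exact loss may differ. A stdlib-only sketch with invented similarity scores:

```python
# InfoNCE-style contrastive loss sketch: matched image-text pairs (the
# diagonal of the similarity matrix) are pulled together, mismatched pairs
# in the batch pushed apart.
import math

def info_nce(sim_matrix, temperature=0.1):
    """sim_matrix[i][j]: similarity of image i with text j; diagonal = matches."""
    n, loss = len(sim_matrix), 0.0
    for i in range(n):
        logits = [s / temperature for s in sim_matrix[i]]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_denom)   # cross-entropy toward the match
    return loss / n

aligned = [[1.0, 0.1], [0.1, 1.0]]    # diagonal dominates: good alignment
shuffled = [[0.1, 1.0], [1.0, 0.1]]   # matches scored low: poor alignment
print(info_nce(aligned) < info_nce(shuffled))  # True
```

In practice the similarities would be cosine similarities between the encoder's image and text embeddings, and the loss is usually symmetrized over both directions (image-to-text and text-to-image).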
Incorporating Linguistic Rules in Statistical Chinese Language Model for Pinyin-to-character Conversion (Cited by 2)
6
Authors: 刘秉权 (Liu Bingquan), Wang Xiaolong, Wang Yuying. High Technology Letters, EI CAS, 2001, Issue 2, pp. 8-13 (6 pages)
An N-gram Chinese language model incorporating linguistic rules is presented. By constructing an element lattice, rule information is incorporated into the statistical frame. To facilitate the hybrid modeling, novel methods such as MI-based rule evaluation, weighted rule quantification and element-based n-gram probability approximation are presented. A dynamic Viterbi algorithm is adopted to search for the best path in the lattice. To strengthen the model, transformation-based error-driven rule learning is adopted. Applying the proposed model to Chinese Pinyin-to-character conversion, high performance has been achieved in accuracy, flexibility and robustness simultaneously. Tests show the correct rate reaches 94.81%, compared with 90.53% using the bi-gram Markov model alone. Many long-distance dependencies and recursions in language can be processed effectively.
Keywords: Chinese Pinyin-to-character conversion; rule-based language model; N-gram language model; hybrid language model; element lattice; transformation-based error-driven learning
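The lattice search this paper strengthens with rule information is classically done with Viterbi decoding over an n-gram model: each pinyin syllable offers several candidate characters, and the decoder keeps the best-scoring path into each candidate. Below is a toy sketch with an invented two-syllable lattice and made-up bigram scores, not the paper's model.

```python
# Bigram-Viterbi decoding over a tiny pinyin lattice (all data hypothetical).
import math

candidates = {"shi": ["是", "事"], "jian": ["间", "件"]}
bigram = {("<s>", "是"): 0.6, ("<s>", "事"): 0.4,
          ("是", "间"): 0.2, ("是", "件"): 0.1,
          ("事", "间"): 0.05, ("事", "件"): 0.7}

def viterbi(syllables):
    # paths maps a character to the best (log-prob, path) ending in it
    paths = {"<s>": (0.0, [])}
    for syl in syllables:
        new = {}
        for cand in candidates[syl]:
            # extend the best predecessor path with this candidate
            score, path = max(
                ((lp + math.log(bigram.get((ch, cand), 1e-6)), p + [cand])
                 for ch, (lp, p) in paths.items()),
                key=lambda t: t[0])
            new[cand] = (score, path)
        paths = new
    return max(paths.values(), key=lambda t: t[0])[1]

print("".join(viterbi(["shi", "jian"])))  # 事件 -- the highest-scoring path
```

Note the decoder picks 事件 (0.4 x 0.7 = 0.28) over 是间 (0.6 x 0.2 = 0.12) even though 是 is the better first character in isolation; this is exactly what greedy per-syllable selection would get wrong.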
Language Interaction and the Influence of the Chinese Language
7
Author: He Xiaoyong. 学术界 (Academics), CSSCI, PKU Core, 2015, Issue 9, pp. 303-307 (5 pages)
It is necessary that each era has its own value orientation of the language civilization, which forms the motivation, model and pragmatic hypothesis. As for the Chinese language, it is the Chinese philosophical semantics. Language and civilization integrate into each other, so when analyzing linguistics, we shall separate the lexeme on the basis of context. Pragmatics establishes the context for the Chinese language, the basis of which is boosting the education of the Chinese language. As a result, pragmatics is not only a branch of linguistics but also the construction of popular linguistics. The Chinese language course is only the concrete implementation of pragmatics and the basic project for the language context in a popular and globalized style.
Keywords: linguistics; interaction; Chinese language; Chinese philosophy; language education; pragmatics; semantics; context
Six-Writings multimodal processing with pictophonetic coding to enhance Chinese language models
8
Authors: Li WEIGANG, Mayara Chew MARINHO, Denise Leyi LI, Vitor Vasconcelos DE OLIVEIRA. Frontiers of Information Technology & Electronic Engineering, SCIE EI CSCD, 2024, Issue 1, pp. 84-105 (22 pages)
While large language models (LLMs) have made significant strides in natural language processing (NLP), they continue to face challenges in adequately addressing the intricacies of the Chinese language in certain scenarios. We propose a framework called Six-Writings multimodal processing (SWMP) to enable direct integration of Chinese NLP (CNLP) with morphological and semantic elements. The first part of SWMP, known as Six-Writings pictophonetic coding (SWPC), is introduced with a suitable level of granularity for radicals and components, enabling effective representation of Chinese characters and words. We conduct several experimental scenarios, including the following: (1) We establish an experimental database consisting of images and SWPC for Chinese characters, enabling dual-mode processing and matrix generation for CNLP. (2) We characterize various generative modes of Chinese words, such as thousands of Chinese idioms, used as question-and-answer (Q&A) prompt functions, facilitating analogies by SWPC. The experiments achieve 100% accuracy in answering all questions in the Chinese morphological dataset (CA8-Mor-10177). (3) A fine-tuning mechanism is proposed to refine word embedding results using SWPC, resulting in an average relative error of ≤25% for 39.37% of the questions in the Chinese wOrd Similarity dataset (COS960). The results demonstrate that SWMP/SWPC methods effectively capture the distinctive features of Chinese and offer a promising mechanism to enhance CNLP with better efficiency.
Keywords: Chinese language model; Chinese natural language processing (CNLP); generative language model; multimodal processing; Six-Writings
"Chinese Character-Chinese Language" the Regional Linguistic System: Developing Process and Current Situation
9
Author: Zhang Lu. 学术界 (Academics), CSSCI, PKU Core, 2015, Issue 2, pp. 303-307 (5 pages)
Linguistic civilization orientation is a necessity for all eras, with which the motivation, viable model and pragmatic hypothesis of the language can be formed. "Chinese character-Chinese language" takes on different cultures in different regions, embodying the integration of language and civilization. As a result, in order to analyze linguistics, it is necessary for us to compare the Chinese language at home and abroad, and it is the linguistic construction based on the public. The modern Chinese language inherits the cultural connotation of Chinese linguistics, answers the regional hypothesis and establishes the new linguistic system for the Chinese language in different regions.
Keywords: language system; China; Chinese language; cultural connotation; linguistics; domestic and overseas; characters
Chinese Language Teacher Professional Growth: A Case Study
10
Authors: Yufei Guo, Wenjing Wang, Huiwen Li. 汉语教学方法与技术 (Chinese Language Teaching Methodology and Technology), 2019, Issue 2, pp. 65-76 (12 pages)
Chinese language teachers grow with certain characteristics in their professional development. Knowing these characteristics can reveal a teacher's developmental needs, which can inform teachers and teacher-development facilitators. This case study examines the professional development of one Chinese language teacher who works in a high school in the United States. The Five-Stage Theory is employed to direct the examination of the teacher's growth path. Findings cover the challenges, efforts to cope with the changes, successes, and failures.
Keywords: Five-Stage model; professional development; Chinese language teacher; development characteristics
A Case Study on the Pattern of Classroom Discourse in Teaching Chinese as a Foreign Language
11
Author: Liqing Li. Journal of Contemporary Educational Research, 2021, Issue 10, pp. 172-177 (6 pages)
Based on the IRF (initiation, response, and feedback) classroom discourse structure model proposed by Sinclair and Coulthard, this research analyzes the actual corpus of Chinese classroom teaching in Thailand, focusing on the structural model of teacher-student communication discourse, mainly from two aspects of teachers' feedback. On the one hand, it investigates whether IRF is fully applicable to Chinese classroom teaching and whether there are special situations. On the other hand, it attempts to summarize the discourse structure model of Chinese classroom teaching and explores the application of the research results in helping Chinese teachers improve their teaching quality, in the hope that constructive suggestions can be proposed for teaching Chinese as a foreign language.
Keywords: IRF model; classroom discourse structure; teachers' feedback; Chinese as a foreign language classroom
Improving Extraction of Chinese Open Relations Using Pre-trained Language Model and Knowledge Enhancement
12
Authors: Chaojie Wen, Xudong Jia, Tao Chen. Data Intelligence, EI, 2023, Issue 4, pp. 962-989 (28 pages)
Open Relation Extraction (ORE) is the task of extracting semantic relations from a text document. Current ORE systems have significantly improved their efficiency in obtaining Chinese relations, when compared with conventional systems which heavily depend on feature engineering or syntactic parsing. However, these ORE systems do not use robust neural networks such as pre-trained language models to take advantage of large-scale unstructured data effectively. In response to this issue, a new system entitled Chinese Open Relation Extraction with Knowledge Enhancement (CORE-KE) is presented in this paper. The CORE-KE system employs a pre-trained language model (with the support of a Bidirectional Long Short-Term Memory (BiLSTM) layer and a Masked Conditional Random Field (Masked CRF) layer) on unstructured data in order to improve Chinese open relation extraction. Entity descriptions in Wikidata and additional knowledge (in terms of triple facts) extracted from Chinese ORE datasets are used to fine-tune the pre-trained language model. In addition, syntactic features are further adopted in the training stage of the CORE-KE system for knowledge enhancement. Experimental results of the CORE-KE system on two large-scale datasets of open Chinese entities and relations demonstrate that the CORE-KE system is superior to other ORE systems. The F1-scores of the CORE-KE system on the two datasets show a relative improvement of 20.1% and 1.3%, when compared with benchmark ORE systems, respectively. The source code is available at https://github.com/cjwen15/CORE-KE.
Keywords: Chinese open relation extraction; pre-trained language model; knowledge enhancement
Research status and application of artificial intelligence large models in the oil and gas industry
13
Authors: LIU He, REN Yili, LI Xin, DENG Yue, WANG Yongtao, CAO Qianwen, DU Jinyang, LIN Zhiwei, WANG Wenjie. Petroleum Exploration and Development, SCIE, 2024, Issue 4, pp. 1049-1065 (17 pages)
This article elucidates the concept of large model technology, summarizes the research status of large model technology both domestically and internationally, provides an overview of the application status of large models in vertical industries, outlines the challenges and issues confronted in applying large models in the oil and gas sector, and offers prospects for the application of large models in the oil and gas industry. Existing large models can be briefly divided into three categories: large language models, visual large models, and multimodal large models. The application of large models in the oil and gas industry is still in its infancy. Based on open-source large language models, some oil and gas enterprises have released large language model products using methods like fine-tuning and retrieval-augmented generation. Scholars have attempted to develop scenario-specific models for oil and gas operations by using visual/multimodal foundation models. A few researchers have constructed pre-trained foundation models for seismic data processing and interpretation, as well as core analysis. The application of large models in the oil and gas industry faces challenges such as current data quantity and quality being insufficient to support the training of large models, high research and development costs, and poor algorithm autonomy and control. The application of large models should be guided by the needs of the oil and gas business, taking the application of large models as an opportunity to improve data lifecycle management, enhance data governance capabilities, promote the construction of computing power, strengthen the construction of "artificial intelligence + energy" composite teams, and boost the autonomy and control of large model technology.
Keywords: foundation model; large language model; visual large model; multimodal large model; large model of oil and gas industry; pre-training; fine-tuning
A Bit Progress on Word-Based Language Model
14
Authors: 陈勇 (Chen Yong), 陈国评 (Chen Guoping). Journal of Shanghai University (English Edition), CAS, 2003, Issue 2, pp. 148-155 (8 pages)
A good language model is essential to a postprocessing algorithm for recognition systems. In the past, researchers have presented various language models, such as character-based language models, word-based language models, syntactical-rule language models, hybrid models, etc. The word N-gram model is by far an effective and efficient model, but one has to address the problem of data sparseness in establishing the model. Katz and Kneser et al. respectively presented effective remedies to solve this challenging problem. In this study, we propose an improvement to their methods by incorporating Chinese-language-specific information or Chinese word-class information into the system.
Keywords: language model; pattern recognition; Chinese character recognition
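The data-sparseness remedies mentioned (Katz; Kneser et al.) are, at their simplest, back-off schemes: use the bigram estimate when the word pair was seen, otherwise fall back to a discounted unigram estimate. A toy sketch with an invented character corpus; the constant back-off weight is a simplification of Katz's count-dependent discounting:

```python
# Simplified back-off bigram model (stand-in for Katz back-off; toy corpus).
from collections import Counter

corpus = "中 文 识 别 需 要 语 言 模 型 语 言 模 型".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
total = sum(unigrams.values())

def backoff_prob(prev, word, alpha=0.4):
    """P(word | prev): bigram relative frequency, else discounted unigram."""
    if (prev, word) in bigrams:
        return bigrams[(prev, word)] / unigrams[prev]
    return alpha * unigrams.get(word, 0) / total   # back off to unigram

print(backoff_prob("语", "言"))   # seen bigram: relative frequency
print(backoff_prob("语", "别"))   # unseen pair: discounted unigram estimate
```

Katz's actual scheme redistributes probability mass via Good-Turing discounting so the distribution stays normalized; the fixed `alpha` here only conveys the back-off idea.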
Topic Model Based Text Similarity Measure for Chinese Judgment Document
15
Authors: Yue Wang, Jidong Ge, Yemao Zhou, Yi Feng, Chuanyi Li, Zhongjin Li, Xiaoyu Zhou, Bin Luo. 国际计算机前沿大会会议论文集, 2017, Issue 2, pp. 9-11 (3 pages)
In the recent informatization of Chinese courts, the huge number of law cases and judgment documents, which are stored digitally, has provided a good foundation for research on judicial big data and machine learning. In this situation, some tasks in Chinese courts can be automated or improved through machine learning, such as similar-document recommendation, workload evaluation based on the similarity of judgment documents, and prediction of possibly relevant statutes. To achieve all of the above, and in view of the characteristics of Chinese judgment documents, we propose a topic-model-based approach to measure the text similarity of Chinese judgment documents, based on TF-IDF, Latent Dirichlet Allocation (LDA), Labeled Latent Dirichlet Allocation (LLDA) and other treatments. Combining the characteristics of Chinese judgment documents, we focus on the specific steps of the approach, the preprocessing of the corpus, the parameter choices for training and the evaluation of the similarity measure results. Besides, implementing the approach for prediction of possible statutes and regarding the prediction accuracy as the evaluation metric, we designed experiments to demonstrate the reasonability of decisions made in the design process and the high performance of our approach on text similarity measurement. The experiments also show the limitations of our approach, which need to be addressed in future work.
Keywords: Chinese judgment documents; data science; machine learning; natural language processing; text similarity; TF-IDF; topic model; Latent Dirichlet Allocation; Labeled Latent Dirichlet Allocation
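The TF-IDF stage of such a similarity pipeline can be sketched with toy documents; the paper's full approach additionally uses LDA and Labeled-LDA topic distributions, which are omitted here.

```python
# TF-IDF vectors plus cosine similarity over tiny, invented "judgment
# document" token lists (real input would be segmented full documents).
import math
from collections import Counter

docs = [["合同", "纠纷", "赔偿"], ["合同", "违约", "赔偿"], ["盗窃", "量刑"]]

def tfidf_vec(doc, docs):
    n = len(docs)
    tf = Counter(doc)
    # weight = term frequency * log inverse document frequency
    return {w: (tf[w] / len(doc)) * math.log(n / sum(w in d for d in docs))
            for w in tf}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = [tfidf_vec(d, docs) for d in docs]
# Documents 0 and 1 share contract-related terms, so they score higher:
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # True
```

A topic-model layer helps precisely where this sketch fails: two documents about the same matter that share topics but few exact terms.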
MOSS: An Open Conversational Large Language Model (Cited by 1)
16
Authors: Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Xiangyang Liu, Hang Yan, Yunfan Shao, Qiong Tang, Shiduo Zhang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yunhua Zhou, Linyang Li, Xiaogui Yang, Lingling Wu, Zhangyue Yin, Xuanjing Huang, Yu-Gang Jiang, Xipeng Qiu. Machine Intelligence Research, EI CSCD, 2024, Issue 5, pp. 888-905 (18 pages)
Conversational large language models (LLMs) such as ChatGPT and GPT-4 have recently exhibited remarkable capabilities across various domains, capturing widespread attention from the public. To facilitate this line of research, in this paper, we report the development of MOSS, an open-sourced conversational LLM that contains 16B parameters and can perform a variety of instructions in multi-turn interactions with humans. The base model of MOSS is pre-trained on large-scale unlabeled English, Chinese, and code data. To optimize the model for dialogue, we generate 1.1M synthetic conversations based on user prompts collected through our earlier versions of the model API. We then perform preference-aware training on preference data annotated from AI feedback. Evaluation results on real-world use cases and academic benchmarks demonstrate the effectiveness of the proposed approaches. In addition, we present an effective practice to augment MOSS with several external tools. Through the development of MOSS, we have established a complete technical roadmap for large language models from pre-training and supervised fine-tuning to alignment, verifying the feasibility of ChatGPT-style systems under resource-limited conditions and providing a reference for both the academic and industrial communities. Model weights and code are publicly available at https://github.com/OpenMOSS/MOSS.
Keywords: large language models; natural language processing; pre-training; alignment; ChatGPT; MOSS
The Life Cycle of Knowledge in Big Language Models: A Survey (Cited by 1)
17
Authors: Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun. Machine Intelligence Research, EI CSCD, 2024, Issue 2, pp. 217-238 (22 pages)
Knowledge plays a critical role in artificial intelligence. Recently, the extensive success of pre-trained language models (PLMs) has raised significant attention about how knowledge can be acquired, maintained, updated and used by language models. Despite the enormous amount of related studies, there is still a lack of a unified view of how knowledge circulates within language models throughout the learning, tuning, and application processes, which may prevent us from further understanding the connections between current progress or realizing existing limitations. In this survey, we revisit PLMs as knowledge-based systems by dividing the life cycle of knowledge in PLMs into five critical periods, and investigate how knowledge circulates when it is built, maintained and used. To this end, we systematically review existing studies of each period of the knowledge life cycle, summarize the main challenges and current limitations, and discuss future directions.
Keywords: pre-trained language model; knowledge acquisition; knowledge representation; knowledge probing; knowledge editing; knowledge application
Automatic Keyphrase Extraction from Scientific Chinese Medical Abstracts Based on Character-Level Sequence Labeling (Cited by 4)
18
Authors: Liangping Ding, Zhixiong Zhang, Huan Liu, Jie Li, Gaihong Yu. Journal of Data and Information Science, CSCD, 2021, Issue 3, pp. 35-57 (23 pages)
Purpose: Automatic keyphrase extraction (AKE) is an important task for grasping the main points of a text. In this paper, we aim to combine the benefits of the sequence labeling formulation and pre-trained language models to propose an automatic keyphrase extraction model for Chinese scientific research. Design/methodology/approach: We regard AKE from Chinese text as a character-level sequence labeling task to avoid segmentation errors of Chinese tokenizers, and we initialize our model with the pre-trained language model BERT, released by Google in 2018. We collect data from the Chinese Science Citation Database and construct a large-scale dataset from the medical domain, which contains 100,000 abstracts as the training set, 6,000 abstracts as the development set and 3,094 abstracts as the test set. We use unsupervised keyphrase extraction methods including term frequency (TF), TF-IDF and TextRank, and supervised machine learning methods including Conditional Random Fields (CRF), Bidirectional Long Short-Term Memory networks (BiLSTM), and BiLSTM-CRF as baselines. Experiments are designed to compare word-level and character-level sequence labeling approaches on supervised machine learning models and BERT-based models. Findings: Compared with character-level BiLSTM-CRF, the best baseline model with an F1 score of 50.16%, our character-level sequence labeling model based on BERT obtains an F1 score of 59.80%, a 9.64% absolute improvement. Research limitations: We only consider the automatic keyphrase extraction task rather than keyphrase generation, so only keyphrases that occur in the given text can be extracted. In addition, our proposed dataset is not suitable for dealing with nested keyphrases. Practical implications: We make our character-level IOB-format dataset of Chinese Automatic Keyphrase Extraction from scientific Chinese medical abstracts (CAKE) publicly available for the benefit of the research community at: https://github.com/possible1402/Dataset-For-Chinese-Medical-Keyphrase-Extraction. Originality/value: By designing comparative experiments, our study demonstrates that the character-level formulation is more suitable for the Chinese automatic keyphrase extraction task under the general trend of pre-trained language models. And our proposed dataset provides a unified method for model evaluation and can promote the development of Chinese automatic keyphrase extraction to some extent.
Keywords: automatic keyphrase extraction; character-level sequence labeling; pre-trained language model; scientific Chinese medical abstracts
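The character-level IOB formulation can be made concrete with a small decoding sketch: given per-character B/I/O tags (invented here, standing in for the BERT tagger's output), recover the keyphrase spans.

```python
# Decode character-level IOB tags into keyphrases.
def decode_iob(chars, tags):
    phrases, current = [], ""
    for ch, tag in zip(chars, tags):
        if tag == "B":               # begin a new keyphrase
            if current:
                phrases.append(current)
            current = ch
        elif tag == "I" and current:  # continue the current keyphrase
            current += ch
        else:                         # "O" (or a stray "I") ends the span
            if current:
                phrases.append(current)
            current = ""
    if current:
        phrases.append(current)
    return phrases

chars = list("高血压的药物治疗")
tags  = ["B", "I", "I", "O", "B", "I", "I", "I"]
print(decode_iob(chars, tags))  # ['高血压', '药物治疗']
```

Tagging characters rather than words means no segmenter runs before the model, which is exactly how the paper avoids propagating segmentation errors into extraction.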
Evaluation on ChatGPT for Chinese Language Understanding (Cited by 3)
19
Authors: Linhan Li, Huaping Zhang, Chunjin Li, Haowen You, Wenyao Cui. Data Intelligence, EI, 2023, Issue 4, pp. 885-903 (19 pages)
ChatGPT has attracted extensive attention from academia and industry. This paper aims to evaluate ChatGPT's Chinese language understanding capability on 6 tasks using 11 datasets. Experiments indicate that ChatGPT achieved competitive results in sentiment analysis, summarization, and reading comprehension in Chinese, while it is prone to factual errors in closed-book QA. Further, on two more difficult Chinese understanding tasks, namely idiom fill-in-the-blank and cant understanding, we found that a simple chain-of-thought prompt can improve the accuracy of ChatGPT in complex reasoning. This paper further analyses the possible risks of using ChatGPT based on the results. Finally, we briefly describe the research and development progress of our ChatBIT.
Keywords: language model; ChatGPT; ChatBIT; Chinese language understanding; artificial intelligence
EVA2.0: Investigating Open-domain Chinese Dialogue Systems with Large-scale Pre-training (Cited by 2)
20
Authors: Yuxian Gu, Jiaxin Wen, Hao Sun, Yi Song, Pei Ke, Chujie Zheng, Zheng Zhang, Jianzhu Yao, Lei Liu, Xiaoyan Zhu, Minlie Huang. Machine Intelligence Research, EI CSCD, 2023, Issue 2, pp. 207-219 (13 pages)
Large-scale pre-training has shown remarkable performance in building open-domain dialogue systems. However, previous works mainly focus on showing and evaluating the conversational performance of the released dialogue model, ignoring the discussion of some key factors towards a powerful human-like chatbot, especially in Chinese scenarios. In this paper, we conduct extensive experiments to investigate these under-explored factors, including data quality control, model architecture designs, training approaches, and decoding strategies. We propose EVA2.0, a large-scale pre-trained open-domain Chinese dialogue model with 2.8 billion parameters, and make our models and code publicly available. Automatic and human evaluations show that EVA2.0 significantly outperforms other open-source counterparts. We also discuss the limitations of this work by presenting some failure cases and pose some future research directions on large-scale Chinese open-domain dialogue systems.
Keywords: natural language processing; deep learning (DL); large-scale pre-training; dialogue systems; Chinese open-domain conversational model