Journal Articles
2,608 articles found
1. GeoNER: Geological Named Entity Recognition with Enriched Domain Pre-Training Model and Adversarial Training
Authors: MA Kai, HU Xinxin, TIAN Miao, TAN Yongjian, ZHENG Shuai, TAO Liufeng, QIU Qinjun. Acta Geologica Sinica (English Edition), SCIE CAS CSCD, 2024, No. 5, pp. 1404-1417 (14 pages)
As important geological data, a geological report contains rich expert and geological knowledge, but the challenge facing current research into geological knowledge extraction and mining is how to render accurate understanding of geological reports guided by domain knowledge. While generic named entity recognition models/tools can be utilized for the processing of geoscience reports/documents, their effectiveness is hampered by a dearth of domain-specific knowledge, which in turn leads to a pronounced decline in recognition accuracy. This study summarizes six types of typical geological entities, with reference to the ontological system of geological domains, and builds a high-quality corpus for the task of geological named entity recognition (GNER). In addition, GeoWoBERT-advBGP (Geological Word-base BERT, adversarial training, Bi-directional Long Short-Term Memory, Global Pointer) is proposed to address the issues of ambiguity, diversity and nested entities for the geological entities. The model first uses the fine-tuned word-granularity-based pre-training model GeoWoBERT (Geological Word-base BERT) and combines the text features extracted using the BiLSTM (Bi-directional Long Short-Term Memory); an adversarial training algorithm is then applied to improve the robustness of the model and enhance its resistance to interference, and the decoding is finally performed using a global association pointer algorithm. The experimental results show that the proposed model achieves high performance on the constructed dataset and is capable of mining the rich geological information.
Keywords: geological named entity recognition; geological report; adversarial training; confrontation training; global pointer; pre-training model
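The adversarial training step described in this abstract is often implemented as a small perturbation of the embedding weights. Below is a minimal PyTorch sketch of one common variant (FGM); the paper does not publish its exact algorithm, so the class and parameter names here are illustrative only.

```python
# Minimal FGM-style embedding-level adversarial training sketch (illustrative, not the paper's code).
import torch

class FGM:
    """Perturb embedding weights along the normalized gradient, then restore them."""
    def __init__(self, model, emb_name="embedding", epsilon=1.0):
        self.model, self.emb_name, self.epsilon = model, emb_name, epsilon
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0 and not torch.isnan(norm):
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# Typical training step (loss_fn, model, batch, optimizer are placeholders):
#   loss = loss_fn(model(batch)); loss.backward()        # 1. normal backward pass
#   fgm.attack(); loss_adv = loss_fn(model(batch))       # 2. perturb embeddings, recompute loss
#   loss_adv.backward(); fgm.restore()                   # 3. accumulate adversarial grads, restore weights
#   optimizer.step(); optimizer.zero_grad()
```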
2. A Modified CycleGAN for Multi-Organ Ultrasound Image Enhancement via Unpaired Pre-Training
Authors: Haonan Han, Bingyu Yang, Weihang Zhang, Dongwei Li, Huiqi Li. Journal of Beijing Institute of Technology, EI CAS, 2024, No. 3, pp. 194-203 (10 pages)
Handheld ultrasound devices are known for their portability and affordability, making them widely utilized in underdeveloped areas and community healthcare for rapid diagnosis and early screening. However, the image quality of handheld ultrasound devices is not always satisfactory due to the limited equipment size, which hinders accurate diagnoses by doctors. At the same time, paired ultrasound images are difficult to obtain from the clinic because the imaging process is complicated. Therefore, we propose a modified cycle generative adversarial network (cycleGAN) for ultrasound image enhancement from multiple organs via unpaired pre-training. We introduce an ultrasound image pre-training method that does not require paired images, alleviating the requirement for large-scale paired datasets. We also propose an enhanced block with different structures in the pre-training and fine-tuning phases, which can help achieve the goals of the different training phases. To improve the robustness of the model, we add Gaussian noise to the training images as data augmentation. Our approach is effective in obtaining the best quantitative evaluation results with a small number of parameters and lower training costs to improve the quality of handheld ultrasound devices.
Keywords: ultrasound image enhancement; handheld devices; unpaired images; pre-train and fine-tune; cycleGAN
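The Gaussian-noise augmentation mentioned in this abstract can be sketched in a few lines; the noise level and tensor layout below are assumptions, not the authors' settings.

```python
# Minimal sketch of Gaussian-noise data augmentation for image batches (illustrative settings).
import torch

def add_gaussian_noise(images: torch.Tensor, sigma: float = 0.05) -> torch.Tensor:
    """Add zero-mean Gaussian noise to a batch of images assumed to lie in [0, 1]."""
    noise = torch.randn_like(images) * sigma
    return (images + noise).clamp(0.0, 1.0)

# Example: augment a hypothetical batch of single-channel ultrasound crops (B, C, H, W).
batch = torch.rand(8, 1, 256, 256)
noisy_batch = add_gaussian_noise(batch)
```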
3. Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
Authors: R. Sujatha, K. Nimala. Computers, Materials & Continua, SCIE EI, 2024, No. 2, pp. 1669-1686 (18 pages)
Sentence classification is the process of categorizing a sentence based on the context of the sentence. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model is adopted here to classify the conversation sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus with hyperparameters. A hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperforms the base BERT, GPT, DistilBERT and XLNet transformer models. The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
Keywords: Bidirectional Encoder Representations from Transformers; conversation; ensemble model; fine-tuning; Generalized Autoregressive Pretraining for Language Understanding; Generative Pre-Trained Transformer; hyperparameter tuning; natural language processing; Robustly Optimized BERT Pretraining Approach; sentence classification; transformer models
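A minimal sketch of soft-voting over several pre-trained classifiers, the general idea behind the EPLM-HT ensemble; the checkpoints, the 4-way label set, and the absence of task fine-tuning here are placeholders rather than the paper's configuration (in practice each checkpoint would already be fine-tuned on the conversation corpus).

```python
# Minimal sketch of ensembling sentence classifiers by averaging class probabilities (illustrative).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINTS = ["bert-base-uncased", "roberta-base", "distilbert-base-uncased"]  # assumed stand-ins
LABELS = ["information", "question", "directive", "commission"]

def ensemble_predict(sentence: str) -> str:
    probs = []
    for ckpt in CHECKPOINTS:
        tok = AutoTokenizer.from_pretrained(ckpt)
        model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=len(LABELS))
        inputs = tok(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        probs.append(torch.softmax(logits, dim=-1))
    avg = torch.stack(probs).mean(dim=0)          # soft voting over the ensemble members
    return LABELS[int(avg.argmax(dim=-1))]

print(ensemble_predict("Could you send me the meeting notes?"))
```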
4. Adapter Based on Pre-Trained Language Models for Classification of Medical Text
Author: Quan Li. Journal of Electronic Research and Application, 2024, No. 3, pp. 129-134 (6 pages)
We present an approach to classify medical text at a sentence level automatically. Given the inherent complexity of medical text classification, we employ adapters based on pre-trained language models to extract information from medical text, facilitating more accurate classification while minimizing the number of trainable parameters. Extensive experiments conducted on various datasets demonstrate the effectiveness of our approach.
Keywords: classification of medical text; adapter; pre-trained language model
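A minimal sketch of a bottleneck adapter of the kind this abstract refers to; the hidden and bottleneck sizes, and where the module would be inserted, are assumptions rather than the paper's configuration.

```python
# Minimal bottleneck-adapter sketch: down-project, non-linearity, up-project, residual add.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Only the adapters (and a small classifier head) are trained; the pre-trained language
# model itself stays frozen, which keeps the trainable parameter count small.
adapter = Adapter()
x = torch.randn(2, 16, 768)                # (batch, sequence, hidden) activations
print(adapter(x).shape)                    # torch.Size([2, 16, 768])
```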
5. A Classification–Detection Approach of COVID-19 Based on Chest X-ray and CT by Using Keras Pre-Trained Deep Learning Models [Cited by 10]
Authors: Xing Deng, Haijian Shao, Liang Shi, Xia Wang, Tongling Xie. Computer Modeling in Engineering & Sciences, SCIE EI, 2020, No. 11, pp. 579-596 (18 pages)
The Coronavirus Disease 2019 (COVID-19) is wreaking havoc around the world, putting enormous pressure on national health systems and medical staff. One of the most effective and critical steps in the fight against COVID-19 is to examine the patient's lungs based on the Chest X-ray and CT images generated by radiation imaging. In this paper, five Keras-related deep learning models, ResNet50, InceptionResNetV2, Xception, transfer learning and pre-trained VGGNet16, are applied to formulate classification-detection approaches for COVID-19. Two benchmark methods, SVM (Support Vector Machine) and CNN (Convolutional Neural Network), are provided for comparison with the classification-detection approaches based on the performance indicators, i.e., precision, recall, F1 scores, confusion matrix, classification accuracy and three types of AUC (Area Under Curve). The highest classification accuracies derived by the classification-detection approaches based on 5857 Chest X-rays and 767 Chest CTs are 84% and 75%, respectively, which shows that the Keras-related deep learning approaches facilitate accurate and effective COVID-19-assisted detection.
Keywords: COVID-19 detection; deep learning; transfer learning; pre-trained models
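A minimal sketch of Keras transfer learning with a pre-trained ResNet50 backbone, one of the models listed above; the input size, classification head, and training settings are illustrative assumptions, not the paper's setup.

```python
# Minimal Keras transfer-learning sketch: frozen ImageNet backbone plus a small binary head.
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False                               # freeze the pre-trained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. COVID-19 vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # image datasets not shown here
```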
6. Effective distributed convolutional neural network architecture for remote sensing images target classification with a pre-training approach [Cited by 3]
Authors: LI Binquan, HU Xiaohui. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2019, No. 2, pp. 238-244 (7 pages)
How to recognize targets with similar appearances from remote sensing images (RSIs) effectively and efficiently has become a big challenge. Recently, the convolutional neural network (CNN) has been preferred for target classification due to its powerful feature representation ability and better performance. However, the training and testing of CNNs mainly rely on a single machine, which has natural limitations and bottlenecks in processing RSIs due to limited hardware resources and huge time consumption. Besides, overfitting is a challenge for the CNN model due to the imbalance between the RSI data and the model structure. When a model is complex or the training data is relatively small, overfitting occurs and leads to poor predictive performance. To address these problems, a distributed CNN architecture for RSI target classification is proposed, which dramatically increases the training speed of the CNN and the system scalability. It improves the storage ability and processing efficiency of RSIs. Furthermore, a Bayesian regularization approach is utilized to initialize the weights of the CNN extractor, which increases the robustness and flexibility of the CNN model. It helps prevent overfitting and avoid the local optima caused by limited RSI training images or an inappropriate CNN structure. In addition, considering the efficiency of the Naïve Bayes classifier, a distributed Naïve Bayes classifier is designed to reduce the training cost. Compared with other algorithms, the proposed system and method perform the best and increase the recognition accuracy. The results show that the distributed system framework and the proposed algorithms are suitable for RSI target classification tasks.
Keywords: convolutional neural network (CNN); distributed architecture; remote sensing images (RSIs); target classification; pre-training
7. Pre-training Assessment Through the Web
Authors: Kenneth Wong, Reggie Kwan, Jimmy SF Chan. Journal of Xiamen University (Natural Science), CAS CSCD, Peking University Core, 2002, Supplement 1, p. 297 (1 page)
Web-based training is growing quickly in popularity for professionals in industrial organizations and large enterprises. The savings in cost and time are significant. Instructor-led trainings are bounded by time and place, not to mention the cost involved in traveling, accommodation and training venues. However, in most online training courses, all trainees are given the same training materials and teaching paradigms. The problem of differentiating the trainees' abilities is the main concern. We need a pre-training test to identify and classify the weaknesses and strengths of different trainees so as to devise appropriate training programs for them. Adapting a Web-based Computer Adaptive Test (CAT) for the pre-training test makes web-based training more efficient. The advantages of CAT are self-pacing, efficiency, time and cost savings, immediate scoring and feedback, accuracy and security, etc. (Rudner, 1998; UMN, 1999; Novell, 2000; Linacre, 2000; Windowsglore, 2000). Moreover, Web-based CAT also gives greater flexibility and convenience. This paper describes how this CAT tool is built, how it helps instructors identify the strengths and weaknesses of trainees, and how quality is assured on the CAT system.
Keywords: CAT; test; pre-training assessment through the Web
8. Construction and application of knowledge graph for grid dispatch fault handling based on pre-trained model
Authors: Zhixiang Ji, Xiaohui Wang, Jie Zhang, Di Wu. Global Energy Interconnection, EI CSCD, 2023, No. 4, pp. 493-504 (12 pages)
With the construction of new power systems, the power grid has become extremely large, with an increasing proportion of new energy and AC/DC hybrid connections. The dynamic characteristics and fault patterns of the power grid are complex; additionally, power grid control is difficult, operation risks are high, and the task of fault handling is arduous. Traditional power-grid fault handling relies primarily on human experience. Differences in, and the lack of, knowledge reserves among control personnel restrict the accuracy and timeliness of fault handling. Therefore, this mode of operation is no longer suitable for the requirements of new systems. Based on the multi-source heterogeneous data of power grid dispatch, this paper proposes a joint entity-relationship extraction method for power-grid dispatch fault processing based on a pre-trained model, constructs a knowledge graph of power-grid dispatch fault processing, and designs and develops a fault-processing auxiliary decision-making system based on the knowledge graph. It was applied in a provincial dispatch control center and effectively improved the accident-processing ability and the intelligence level of accident management and control of the power grid.
Keywords: power-grid dispatch fault handling; knowledge graph; pre-trained model; auxiliary decision-making
9. Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis
Authors: Jieyu An, Wan Mohd Nazmee Wan Zainon, Binfen Ding. Intelligent Automation & Soft Computing, SCIE, 2023, No. 8, pp. 1673-1689 (17 pages)
Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modes, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation is attributed to their training on unimodal data, and necessitates the use of complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning by utilizing a vision-language pre-trained model to extract both visual and textual representations in a unified framework. We employ a Transformer architecture to integrate these representations, thereby enabling the capture of rich semantic information in image-text pairs. To further enhance the representation learning of these pairs, we introduce our proposed multimodal contrastive learning method, which leads to improved performance in sentiment analysis tasks. Our approach is evaluated through extensive experiments on two publicly accessible datasets, where we demonstrate its effectiveness. We achieve a significant improvement in sentiment analysis accuracy, indicating the superiority of our approach over existing techniques. These results highlight the potential of multimodal sentiment analysis and underscore the importance of considering the intrinsic semantic connections between modalities for accurate sentiment assessment.
Keywords: multimodal sentiment analysis; vision-language pre-trained model; contrastive learning; sentiment classification
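A minimal sketch of a symmetric image-text contrastive (InfoNCE-style) loss, the general form of the multimodal contrastive learning described above; the projection dimension and temperature are illustrative assumptions, not the paper's settings.

```python
# Minimal symmetric image-text contrastive loss sketch: matched pairs sit on the diagonal.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor, temperature: float = 0.07):
    """img_emb, txt_emb: (batch, dim) embeddings of matched image-text pairs."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature       # pairwise cosine similarities
    targets = torch.arange(img_emb.size(0))            # the i-th image matches the i-th text
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```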
10. Pre-Training Physics-Informed Neural Network with Mixed Sampling and Its Application in High-Dimensional Systems [Cited by 1]
Authors: LIU Haiyi, ZHANG Yabin, WANG Lei. Journal of Systems Science & Complexity, SCIE EI CSCD, 2024, No. 2, pp. 494-510 (17 pages)
Recently, the physics-informed neural network has shown remarkable ability in the context of solving low-dimensional nonlinear partial differential equations. However, for some cases of high-dimensional systems, such a technique may be time-consuming and inaccurate. In this paper, the authors put forward a pre-training physics-informed neural network with mixed sampling (pPINN) to address these issues. Based only on the initial and boundary conditions, the authors design the pre-training stage to filter out the set of misfitting points, which is regarded as part of the training points in the next stage. The authors further take the parameters of the neural network in Stage 1 as the initialization in Stage 2. The advantage of the proposed approach is that it takes less time to transfer the valuable information from the first stage to the second one, improving the calculation accuracy, especially for high-dimensional systems. To verify the performance of the pPINN algorithm, the authors first focus on the growing-and-decaying mode of the line rogue wave in the Davey-Stewartson I equation. Another case is the accelerated motion of the lump in the inhomogeneous Kadomtsev-Petviashvili equation, which admits a more complex evolution than the uniform equation. The exact solution provides a perfect sample for data experiments, and can also be used as a reference frame to identify the performance of the algorithm. The experiments confirm that the pPINN algorithm can improve the prediction accuracy and training efficiency well, and reduce the training time to a large extent for simulating nonlinear waves of high-dimensional equations.
Keywords: high-dimensional systems; mixed sampling; nonlinear wave; pre-training physics-informed neural network
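A minimal sketch of the two-stage idea described above: after a short Stage-1 fit, collocation points with the largest PDE residual are kept as extra Stage-2 training points, and the Stage-1 weights initialise Stage 2. The 1-D heat equation and all hyperparameters below are illustrative; they are not the equations studied in the paper.

```python
# Minimal sketch: residual-based selection of "misfitting" collocation points between two PINN stages.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def pde_residual(xt: torch.Tensor) -> torch.Tensor:
    """Residual of u_t - u_xx = 0 at points xt = (x, t); an assumed toy PDE."""
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    return u_t - u_xx

# (Stage 1: train `net` on initial/boundary conditions only; omitted here.)
candidates = torch.rand(10_000, 2)                    # random (x, t) collocation candidates
res = pde_residual(candidates).abs().squeeze()
misfit_idx = res.topk(k=2_000).indices                # points with the largest residual ("misfitting")
stage2_points = candidates[misfit_idx]                # reused as extra training points in Stage 2
stage2_init = net.state_dict()                        # Stage-1 weights initialise the Stage-2 network
```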
11. y-Tuning: an efficient tuning paradigm for large-scale pre-trained models via label representation learning
Authors: Yitao LIU, Chenxin AN, Xipeng QIU. Frontiers of Computer Science, SCIE EI CSCD, 2024, No. 4, pp. 107-116 (10 pages)
With the current success of large-scale pre-trained models (PTMs), how to efficiently adapt PTMs to downstream tasks has attracted tremendous attention, especially for PTMs with billions of parameters. Previous work focuses on designing parameter-efficient tuning paradigms but still needs to save and compute the gradient of the whole computational graph. In this paper, we propose y-Tuning, an efficient yet effective paradigm to adapt frozen large-scale PTMs to specific downstream tasks. y-Tuning learns dense representations for the labels y defined in a given task and aligns them to fixed feature representations. Without computing the gradients of the text encoder at the training phase, y-Tuning is not only parameter-efficient but also training-efficient. Experimental results show that for DeBERTa-XXL with 1.6 billion parameters, y-Tuning achieves more than 96% of the performance of full fine-tuning on the GLUE Benchmark with only 2% tunable parameters and much lower training costs.
Keywords: pre-trained model; lightweight fine-tuning paradigms; label representation
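A minimal sketch of the label-representation idea: the text encoder stays frozen and only dense label vectors aligned to pooled text features are learned. The model name, pooling choice, and dimensions are assumptions, not the paper's configuration.

```python
# Minimal sketch: frozen encoder features scored against learnable label embeddings.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

encoder_name = "bert-base-uncased"                    # assumed stand-in for the frozen PTM
tokenizer = AutoTokenizer.from_pretrained(encoder_name)
encoder = AutoModel.from_pretrained(encoder_name)
for p in encoder.parameters():
    p.requires_grad = False                           # no gradients through the text encoder

num_labels, hidden = 2, encoder.config.hidden_size
label_emb = nn.Parameter(torch.randn(num_labels, hidden) * 0.02)   # only these vectors are trained

def logits_for(texts):
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():                             # encoder features stay fixed
        feats = encoder(**batch).last_hidden_state[:, 0]   # [CLS] pooling (an assumption)
    return feats @ label_emb.t()                      # align features with the label vectors

print(logits_for(["the movie was great", "terrible service"]).shape)   # (2, num_labels)
```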
12. Pre-trained models for natural language processing: A survey [Cited by 158]
Authors: QIU XiPeng, SUN TianXiang, XU YiGe, SHAO YunFan, DAI Ning, HUANG XuanJing. Science China (Technological Sciences), SCIE EI CAS CSCD, 2020, No. 10, pp. 1872-1897 (26 pages)
Recently, the emergence of pre-trained models (PTMs) has brought natural language processing (NLP) to a new era. In this survey, we provide a comprehensive review of PTMs for NLP. We first briefly introduce language representation learning and its research progress. Then we systematically categorize existing PTMs based on a taxonomy from four different perspectives. Next, we describe how to adapt the knowledge of PTMs to downstream tasks. Finally, we outline some potential directions of PTMs for future research. This survey is intended to be a hands-on guide for understanding, using, and developing PTMs for various NLP tasks.
Keywords: deep learning; neural network; natural language processing; pre-trained model; distributed representation; word embedding; self-supervised learning; language modelling
13. Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey [Cited by 10]
Authors: Xiao Wang, Guangyao Chen, Guangwu Qian, Pengcheng Gao, Xiao-Yong Wei, Yaowei Wang, Yonghong Tian, Wen Gao. Machine Intelligence Research, EI CSCD, 2023, No. 4, pp. 447-482 (36 pages)
With the urgent demand for generalized deep models, many pre-trained big models have been proposed, such as bidirectional encoder representations (BERT), vision transformer (ViT), generative pre-trained transformers (GPT), etc. Inspired by the success of these models in single domains (like computer vision and natural language processing), multi-modal pre-trained big models have also drawn more and more attention in recent years. In this work, we give a comprehensive survey of these models and hope this paper can provide new insights and help fresh researchers track the most cutting-edge works. Specifically, we first introduce the background of multi-modal pre-training by reviewing conventional deep learning and pre-training works in natural language processing, computer vision, and speech. Then, we introduce the task definition, key challenges, and advantages of multi-modal pre-training models (MM-PTMs), and discuss MM-PTMs with a focus on data, objectives, network architectures, and knowledge-enhanced pre-training. After that, we introduce the downstream tasks used for the validation of large-scale MM-PTMs, including generative, classification, and regression tasks. We also give visualizations and analysis of the model parameters and results on representative downstream tasks. Finally, we point out possible research directions for this topic that may benefit future works. In addition, we maintain a continuously updated paper list for large-scale pre-trained multi-modal big models: https://github.com/wangxiao5791509/MultiModal_BigModels_Survey.
Keywords: multi-modal (MM); pre-trained model (PTM); information fusion; representation learning; deep learning
14. VLP: A Survey on Vision-language Pre-training [Cited by 5]
Authors: Fei-Long Chen, Du-Zhen Zhang, Ming-Lun Han, Xiu-Yi Chen, Jing Shi, Shuang Xu, Bo Xu. Machine Intelligence Research, EI CSCD, 2023, No. 1, pp. 38-56 (19 pages)
In the past few years, the emergence of pre-training models has brought uni-modal fields such as computer vision (CV) and natural language processing (NLP) to a new era. Substantial works have shown that they are beneficial for downstream uni-modal tasks and avoid training a new model from scratch. So can such pre-trained models be applied to multi-modal tasks? Researchers have explored this problem and made significant progress. This paper surveys recent advances and new frontiers in vision-language pre-training (VLP), including image-text and video-text pre-training. To give readers a better overall grasp of VLP, we first review its recent advances in five aspects: feature extraction, model architecture, pre-training objectives, pre-training datasets, and downstream tasks. Then, we summarize the specific VLP models in detail. Finally, we discuss the new frontiers in VLP. To the best of our knowledge, this is the first survey focused on VLP. We hope that this survey can shed light on future research in the VLP field.
Keywords: vision and language pre-training; Transformers; multimodal learning; representation learning
15. Offline Pre-trained Multi-agent Decision Transformer [Cited by 2]
Authors: Linghui Meng, Muning Wen, Chenyang Le, Xiyun Li, Dengpeng Xing, Weinan Zhang, Ying Wen, Haifeng Zhang, Jun Wang, Yaodong Yang, Bo Xu. Machine Intelligence Research, EI CSCD, 2023, No. 2, pp. 233-248 (16 pages)
Offline reinforcement learning leverages previously collected offline datasets to learn optimal policies with no necessity to access the real environment. Such a paradigm is also desirable for multi-agent reinforcement learning (MARL) tasks, given the combinatorially increased interactions among agents and with the environment. However, in MARL, the paradigm of offline pre-training with online fine-tuning has not been studied, nor are datasets or benchmarks for offline MARL research available. In this paper, we facilitate the research by providing large-scale datasets and using them to examine the usage of the decision transformer in the context of MARL. We investigate the generalization of MARL offline pre-training in the following three aspects: 1) between single agents and multiple agents, 2) from offline pre-training to online fine-tuning, and 3) to that of multiple downstream tasks with few-shot and zero-shot capabilities. We start by introducing the first offline MARL dataset with diverse quality levels based on the StarCraft II environment, and then propose the novel architecture of multi-agent decision transformer (MADT) for effective offline learning. MADT leverages the transformer's modelling ability for sequence modelling and integrates it seamlessly with both offline and online MARL tasks. A significant benefit of MADT is that it learns generalizable policies that can transfer between different types of agents under different task scenarios. On the StarCraft II offline dataset, MADT outperforms the state-of-the-art offline reinforcement learning (RL) baselines, including BCQ and CQL. When applied to online tasks, the pre-trained MADT significantly improves sample efficiency and enjoys strong performance in both few-shot and zero-shot cases. To the best of our knowledge, this is the first work that studies and demonstrates the effectiveness of offline pre-trained models in terms of sample efficiency and generalizability enhancements for MARL.
Keywords: pre-training model; multi-agent reinforcement learning (MARL); decision making; Transformer; offline reinforcement learning
16. EVA2.0: Investigating Open-domain Chinese Dialogue Systems with Large-scale Pre-training [Cited by 2]
Authors: Yuxian Gu, Jiaxin Wen, Hao Sun, Yi Song, Pei Ke, Chujie Zheng, Zheng Zhang, Jianzhu Yao, Lei Liu, Xiaoyan Zhu, Minlie Huang. Machine Intelligence Research, EI CSCD, 2023, No. 2, pp. 207-219 (13 pages)
Large-scale pre-training has shown remarkable performance in building open-domain dialogue systems. However, previous works mainly focus on showing and evaluating the conversational performance of the released dialogue model, ignoring the discussion of some key factors towards a powerful human-like chatbot, especially in Chinese scenarios. In this paper, we conduct extensive experiments to investigate these under-explored factors, including data quality control, model architecture designs, training approaches, and decoding strategies. We propose EVA2.0, a large-scale pre-trained open-domain Chinese dialogue model with 2.8 billion parameters, and will make our models and code publicly available. Automatic and human evaluations show that EVA2.0 significantly outperforms other open-source counterparts. We also discuss the limitations of this work by presenting some failure cases and pose some future research directions on large-scale Chinese open-domain dialogue systems.
Keywords: natural language processing; deep learning (DL); large-scale pre-training; dialogue systems; Chinese open-domain conversational model
17. Knowledge Enhanced Pre-Training Model for Vision-Language-Navigation Task [Cited by 1]
Authors: HUANG Jitao, ZENG Guohui, HUANG Bo, GAO Yongbin, LIU Jin, SHI Zhicai. Wuhan University Journal of Natural Sciences, CAS CSCD, 2021, No. 2, pp. 147-155 (9 pages)
The Vision-Language-Navigation (VLN) task is a cross-modality task that combines natural language processing and computer vision. This task requires the agent to automatically move to the destination according to the natural language instruction and the observed surrounding visual information. To make the best decision, at every step during navigation, the agent should pay more attention to understanding the objects, the object attributes, and the object relationships. But most current methods process all received textual and visual information equally. Therefore, this paper integrates more detailed semantic connections between visual and textual information through three pre-training tasks (object prediction, object attribute prediction, and object relationship prediction). The model will learn a better fusion representation and alignment between these two types of information to improve the success rate (SR) and generalization. The experiments show that, compared with the former baseline models, the SR on the unseen validation set (Val Unseen) increased by 7% and the SR weighted by path length (SPL) increased by 7%; the SR on the test set (Test) increased by 4% and the SPL increased by 3%.
Keywords: pre-training; cross-modality; deep learning; scene graph
18. Unsupervised statistical text simplification using pre-trained language modeling for initialization [Cited by 1]
Authors: Jipeng QIANG, Feng ZHANG, Yun LI, Yunhao YUAN, Yi ZHU, Xindong WU. Frontiers of Computer Science, SCIE EI CSCD, 2023, No. 1, pp. 81-90 (10 pages)
Unsupervised text simplification has attracted much attention due to the scarcity of high-quality parallel text simplification corpora. Recently, an unsupervised statistical text simplification system based on phrase-based machine translation (UnsupPBMT) achieved good performance; it initializes the phrase tables using similar words obtained by word embedding modeling. Since word embedding modeling only considers the relevance between words, the phrase table in UnsupPBMT contains a lot of dissimilar words. In this paper, we propose an unsupervised statistical text simplification method that uses the pre-trained language model BERT for initialization. Specifically, we use BERT as a general linguistic knowledge base for predicting similar words. Experimental results show that our method outperforms the state-of-the-art unsupervised text simplification methods on three benchmarks, and even outperforms some supervised baselines.
Keywords: text simplification; pre-trained language modeling; BERT; word embeddings
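A minimal sketch of using a BERT masked-language-model head to propose in-context substitutes for a word, the kind of similar-word prediction this abstract describes; the Hugging Face pipeline, model name, and example sentence are illustrative choices, not the paper's implementation.

```python
# Minimal sketch: mask a complex word and let BERT's masked-LM head rank candidate substitutes.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
sentence = "The committee will [MASK] the proposal next week."   # masked position replaces a complex word
for cand in fill_mask(sentence, top_k=5):
    print(cand["token_str"], round(cand["score"], 3))            # candidate word and its probability
```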
19. A comparative analysis of paddy crop biotic stress classification using pre-trained deep neural networks [Cited by 1]
Authors: Naveen N. Malvade, Rajesh Yakkundimath, Girish Saunshi, Mahantesh C. Elemmi, Parashuram Baraki. Artificial Intelligence in Agriculture, 2022, No. 1, pp. 167-175 (9 pages)
The agriculture sector is no exception to the widespread usage of deep learning tools and techniques. In this paper, an automated detection method on the basis of pre-trained Convolutional Neural Network (CNN) models is proposed to identify and classify paddy crop biotic stresses from field images. The proposed work also provides an empirical comparison among the leading CNN models with transfer learning from the ImageNet weights, namely Inception-V3, VGG-16, ResNet-50, DenseNet-121 and MobileNet-28. Brown spot, hispa, and leaf blast, three of the most common and destructive paddy crop biotic stresses that occur during the flowering and ripening growth stages, are considered for the experimentation. The experimental results reveal that the ResNet-50 model achieves the highest average paddy crop stress classification accuracy of 92.61%, outperforming the other considered CNN models. The study explores the feasibility of CNN models for paddy crop stress identification as well as the applicability of automated methods to non-experts.
Keywords: paddy crop; stress classification; biotic stress; PlantVillage; ImageNet; pre-trained CNN models
20. DenseCL: A simple framework for self-supervised dense visual pre-training [Cited by 1]
Authors: Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong. Visual Informatics, EI, 2023, No. 1, pp. 30-40 (11 pages)
Self-supervised learning aims to learn a universal feature representation without labels. To date, most existing self-supervised learning methods are designed and optimized for image classification. These pre-trained models can be sub-optimal for dense prediction tasks due to the discrepancy between image-level prediction and pixel-level prediction. To fill this gap, we aim to design an effective, dense self-supervised learning framework that directly works at the level of pixels (or local features) by taking into account the correspondence between local features. Specifically, we present dense contrastive learning (DenseCL), which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images. Compared to the supervised ImageNet pre-training and other self-supervised learning methods, our self-supervised DenseCL pre-training demonstrates consistently superior performance when transferring to downstream dense prediction tasks including object detection, semantic segmentation and instance segmentation. Specifically, our approach significantly outperforms the strong MoCo-v2 by 2.0% AP on PASCAL VOC object detection, 1.1% AP on COCO object detection, 0.9% AP on COCO instance segmentation, 3.0% mIoU on PASCAL VOC semantic segmentation and 1.8% mIoU on Cityscapes semantic segmentation. The improvements are up to 3.5% AP and 8.8% mIoU over MoCo-v2, and 6.1% AP and 6.1% mIoU over the supervised counterpart with the frozen-backbone evaluation protocol.
Keywords: self-supervised learning; visual pre-training; dense prediction tasks
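A minimal sketch of a dense, per-location contrastive loss in the spirit of DenseCL; the positive-matching rule below is a simplification (DenseCL derives the match from backbone feature similarity rather than the projected features themselves), and all sizes are illustrative.

```python
# Minimal sketch of a pixel-level contrastive loss over two augmented views of the same images.
import torch
import torch.nn.functional as F

def dense_contrastive_loss(f1: torch.Tensor, f2: torch.Tensor, tau: float = 0.2):
    """f1, f2: (B, C, H, W) dense projections of two views of the same image batch."""
    B, C, H, W = f1.shape
    f1 = F.normalize(f1.flatten(2), dim=1)            # (B, C, HW), unit-norm per location
    f2 = F.normalize(f2.flatten(2), dim=1)
    sim = torch.einsum("bci,bcj->bij", f1, f2)        # (B, HW, HW) grid of local similarities
    # Simplified positive assignment: each location in view 1 takes its most similar
    # location in view 2 as the positive; all other locations act as negatives.
    pos_idx = sim.argmax(dim=2)
    return F.cross_entropy((sim / tau).reshape(B * H * W, H * W), pos_idx.reshape(-1))

loss = dense_contrastive_loss(torch.randn(2, 128, 7, 7), torch.randn(2, 128, 7, 7))
print(loss.item())
```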