Journal Articles
89 articles found
1. A Modified CycleGAN for Multi-Organ Ultrasound Image Enhancement via Unpaired Pre-Training
Authors: Haonan Han, Bingyu Yang, Weihang Zhang, Dongwei Li, Huiqi Li. Journal of Beijing Institute of Technology (EI, CAS), 2024, No. 3, pp. 194-203.
Handheld ultrasound devices are known for their portability and affordability, making them widely utilized in underdeveloped areas and community healthcare for rapid diagnosis and early screening. However, the image quality of handheld ultrasound devices is not always satisfactory due to the limited equipment size, which hinders accurate diagnoses by doctors. At the same time, paired ultrasound images are difficult to obtain in the clinic because the imaging process is complicated. Therefore, we propose a modified cycle generative adversarial network (cycleGAN) for ultrasound image enhancement from multiple organs via unpaired pre-training. We introduce an ultrasound image pre-training method that does not require paired images, alleviating the requirement for large-scale paired datasets. We also propose an enhanced block with different structures in the pre-training and fine-tuning phases, which can help achieve the goals of the different training phases. To improve the robustness of the model, we add Gaussian noise to the training images as data augmentation. Our approach is effective in obtaining the best quantitative evaluation results with a small number of parameters and lower training costs, improving the image quality of handheld ultrasound devices.
Keywords: ultrasound image enhancement; handheld devices; unpaired images; pre-train and fine-tune; cycleGAN
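As a concrete starting point, the following is a minimal PyTorch sketch of two ideas named in this abstract: Gaussian-noise data augmentation and an unpaired cycle-consistency objective. The placeholder generators, noise level, and loss weight are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def add_gaussian_noise(images: torch.Tensor, sigma: float = 0.05) -> torch.Tensor:
    """Data augmentation: perturb training ultrasound images with Gaussian noise."""
    return (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)

def cycle_consistency_loss(g_ab: nn.Module, g_ba: nn.Module,
                           real_a: torch.Tensor, real_b: torch.Tensor,
                           lam: float = 10.0) -> torch.Tensor:
    """Unpaired training signal: reconstruct A -> B -> A and B -> A -> B."""
    l1 = nn.L1Loss()
    rec_a = g_ba(g_ab(real_a))
    rec_b = g_ab(g_ba(real_b))
    return lam * (l1(rec_a, real_a) + l1(rec_b, real_b))

# Toy usage with stand-in generators (single convolutions) and random "images".
g_ab = nn.Conv2d(1, 1, 3, padding=1)   # low-quality -> enhanced (placeholder generator)
g_ba = nn.Conv2d(1, 1, 3, padding=1)   # enhanced -> low-quality (placeholder generator)
low_q = add_gaussian_noise(torch.rand(4, 1, 64, 64))
high_q = torch.rand(4, 1, 64, 64)      # unpaired reference images
loss = cycle_consistency_loss(g_ab, g_ba, low_q, high_q)
loss.backward()
print(float(loss))
```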
2. Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
Authors: R. Sujatha, K. Nimala. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 1669-1686.
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model was adopted here to classify the conversational sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus. A hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models. The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
Keywords: Bidirectional Encoder Representations from Transformers; conversation; ensemble model; fine-tuning; generalized autoregressive pretraining for language understanding; generative pre-trained transformer; hyperparameter tuning; natural language processing; robustly optimized BERT pretraining approach; sentence classification; transformer models
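A brief sketch of the soft-voting idea behind such an ensemble, using Hugging Face Transformers. The two checkpoints shown here are stand-ins for the paper's five models, and their classification heads are untrained until fine-tuned on the annotated conversation corpus, so the output is only illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["information", "question", "directive", "commission"]
CHECKPOINTS = ["bert-base-uncased", "distilbert-base-uncased"]  # stand-ins for the full ensemble

def ensemble_predict(sentence: str) -> str:
    """Average class probabilities from several pre-trained classifiers (soft voting)."""
    probs = []
    for name in CHECKPOINTS:
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=len(LABELS))
        inputs = tok(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            probs.append(model(**inputs).logits.softmax(dim=-1))
    return LABELS[int(torch.stack(probs).mean(dim=0).argmax())]

print(ensemble_predict("Could you send me the meeting agenda?"))
```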
3. Adapter Based on Pre-Trained Language Models for Classification of Medical Text
Author: Quan Li. Journal of Electronic Research and Application, 2024, No. 3, pp. 129-134.
We present an approach to classify medical text at a sentence level automatically. Given the inherent complexity of medical text classification, we employ adapters based on pre-trained language models to extract information from medical text, facilitating more accurate classification while minimizing the number of trainable parameters. Extensive experiments conducted on various datasets demonstrate the effectiveness of our approach.
Keywords: classification of medical text; adapter; pre-trained language model
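A small sketch of the adapter idea: a bottleneck module with a residual connection added on top of a frozen pre-trained encoder, so only a few parameters are trained. The hidden size, bottleneck width, and number of medical categories are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter trained on top of a frozen pre-trained language model."""
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))   # residual connection keeps PLM features

# Only the adapter and classifier are trainable; the encoder outputs are treated as fixed.
hidden_states = torch.randn(8, 128, 768)        # [batch, tokens, hidden] from a frozen encoder
classifier = nn.Linear(768, 5)                  # e.g. 5 medical sentence categories (assumed)
logits = classifier(Adapter()(hidden_states)[:, 0])   # classify from the [CLS] position
print(logits.shape)
```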
4. Domain-specific modeling and verification for C4ISR capability requirements (Cited by 4)
Authors: 董庆超, 王智学, 陈国友, 蒋鑫, 张婷婷. Journal of Central South University (SCIE, EI, CAS), 2012, No. 5, pp. 1334-1340.
An approach was proposed to specify C4ISR capability requirements with a domain-specific modeling language. To confine the domain modeling within a standard architecture framework, a C4ISR capability meta-ontology was formally defined according to the meta-model of the DoD Architecture Framework. The meta-ontology is used for extending the UML Profile so that domain experts can model the C4ISR domains using the C4ISR capability meta-concepts to define a domain-specific modeling language. The domain models can then be checked to guarantee consistency and completeness by converting the UML models into a Description Logic ontology and making use of the inference engine Pellet to verify the ontology.
Keywords: C4ISR; capability meta-ontology; domain-specific modeling; description logic
5. Effective distributed convolutional neural network architecture for remote sensing images target classification with a pre-training approach (Cited by 3)
Authors: LI Binquan, HU Xiaohui. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2019, No. 2, pp. 238-244.
How to recognize targets with similar appearances from remote sensing images (RSIs) effectively and efficiently has become a big challenge. Recently, the convolutional neural network (CNN) has been preferred for target classification due to its powerful feature representation ability and better performance. However, the training and testing of CNNs mainly rely on a single machine. A single machine has natural limitations and bottlenecks in processing RSIs due to limited hardware resources and huge time consumption. Besides, overfitting is a challenge for the CNN model due to the imbalance between RSI data and the model structure. When a model is complex or the training data is relatively small, overfitting occurs and leads to poor predictive performance. To address these problems, a distributed CNN architecture for RSI target classification is proposed, which dramatically increases the training speed of the CNN and the system scalability. It improves the storage ability and processing efficiency of RSIs. Furthermore, a Bayesian regularization approach is utilized to initialize the weights of the CNN extractor, which increases the robustness and flexibility of the CNN model. It helps prevent overfitting and avoid the local optima caused by limited RSI training images or an inappropriate CNN structure. In addition, considering the efficiency of the Naïve Bayes classifier, a distributed Naïve Bayes classifier is designed to reduce the training cost. Compared with other algorithms, the proposed system and method perform the best and increase the recognition accuracy. The results show that the distributed system framework and the proposed algorithms are suitable for RSI target classification tasks.
Keywords: convolutional neural network (CNN); distributed architecture; remote sensing images (RSIs); target classification; pre-training
6. Rac GTPases: domain-specific functions in neuronal development (Cited by 1)
Authors: Steffen Norgaard, Roger Pocock. Neural Regeneration Research (SCIE, CAS, CSCD), 2019, No. 8, pp. 1367-1368.
Understanding fundamental mechanisms governing axon outgrowth and guidance can inform the development of therapeutic strategies to restore neuronal function damaged through injury or disease. Axons navigate the extracellular environment by responding to guidance cues that bind to cell surface receptors to relay information intracellularly via Rho GTPase family members, including the Rac GTPases.
Keywords: Rac GTPases; domain-specific functions; neuronal development
7. NNL: a domain-specific language for neural networks (Cited by 1)
Authors: Wang Bingrui, Chen Yunji. High Technology Letters (EI, CAS), 2020, No. 2, pp. 160-167.
In recent years, neural networks (NNs) have received increasing attention from both academia and industry. The significant diversity among existing NNs as well as their hardware platforms makes NN programming a daunting task. In this paper, a domain-specific language (DSL) for NNs, the neural network language (NNL), is proposed to deliver productivity of NN programming and portable performance of NN execution on different hardware platforms. The productivity and flexibility of NN programming are enabled by abstracting NNs as a directed graph of blocks. The language describes four representative and widely used NNs and runs them on three different hardware platforms (CPU, GPU and an NN accelerator). Experimental results show that NNs written in the proposed language are, on average, 14.5% better than the baseline implementations across these three platforms. Moreover, compared with the Caffe framework that specifically targets the GPU platform, the code can achieve similar performance.
Keywords: artificial neural network (NN); domain-specific language (DSL); neural network (NN) accelerator
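The following toy Python sketch illustrates the core abstraction the abstract describes, a network expressed as a directed graph of blocks that a backend can schedule. The block types, fields, and topology here are invented examples, not NNL syntax.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    op: str                                        # e.g. "conv", "relu", "fc"
    params: dict = field(default_factory=dict)
    inputs: list = field(default_factory=list)     # names of upstream blocks

@dataclass
class Network:
    blocks: dict = field(default_factory=dict)

    def add(self, block: Block) -> "Network":
        self.blocks[block.name] = block
        return self

    def topological_order(self):
        """Resolve execution order so each backend (CPU/GPU/accelerator) can schedule it."""
        order, seen = [], set()
        def visit(name):
            if name in seen:
                return
            seen.add(name)
            for dep in self.blocks[name].inputs:
                visit(dep)
            order.append(name)
        for name in self.blocks:
            visit(name)
        return order

net = (Network()
       .add(Block("input", "data", {"shape": (1, 28, 28)}))
       .add(Block("conv1", "conv", {"filters": 16, "kernel": 3}, inputs=["input"]))
       .add(Block("relu1", "relu", inputs=["conv1"]))
       .add(Block("fc1", "fc", {"units": 10}, inputs=["relu1"])))
print(net.topological_order())
```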
8. Pre-training Assessment Through the Web
Authors: Kenneth Wong, Reggie Kwan, Jimmy SF Chan. Journal of Xiamen University (Natural Science) (CAS, CSCD, Peking University Core), 2002, No. S1, p. 297.
Web-based training is growing quickly in popularity for professionals in industrial organizations and large enterprises. The savings in cost and time are significant. Instructor-led trainings are bounded by time and place, not to mention the cost involved in traveling, accommodation and the training venue. However, in most online training courses, all trainees are given the same training materials and teaching paradigms. The problem of differentiating the trainees' abilities is the main concern. We need a pre-training test to identify and classify the weaknesses and strengths of different trainees so as to devise appropriate training programs for them. Adaptation of a Web-based Computer Adaptive Test (CAT) for the pre-training test makes web-based training more efficient. The advantages of CAT are self-pacing, efficiency, time and cost saving, immediate scoring and feedback, accuracy and security, etc. (Rudner, 1998; UMN, 1999; Novell, 2000; Linacre, 2000; Windowsglore, 2000). Moreover, Web-based CAT also gives greater flexibility and convenience. This paper describes how this CAT tool is built, how it helps instructors identify the strengths and weaknesses of trainees, and how to assure quality of the CAT system.
Keywords: CAT; test; pre-training assessment through the Web
9. Building a Productive Domain-Specific Cloud for Big Data Processing and Analytics Service
Authors: Yuzhong Yan, Mahsa Hanifi, Liqi Yi, Lei Huang. Journal of Computer and Communications, 2015, No. 5, pp. 107-117.
Cloud Computing, as a disruptive technology, provides a dynamic, elastic and promising computing climate to tackle the challenges of big data processing and analytics. Hadoop and MapReduce are the widely used open source frameworks in Cloud Computing for storing and processing big data in a scalable fashion. Spark is the latest parallel computing engine working together with Hadoop that exceeds MapReduce performance via its in-memory computing and high-level programming features. In this paper, we present our design and implementation of a productive, domain-specific big data analytics cloud platform on top of Hadoop and Spark. To increase users' productivity, we created a variety of data processing templates to simplify the programming efforts. We have conducted experiments on its productivity and performance with a few basic but representative data processing algorithms in the petroleum industry. Geophysicists can use the platform to productively design and implement scalable seismic data processing algorithms without handling the details of data management and the complexity of parallelism. The Cloud platform generates a complete data processing application based on the user's kernel program and simple configurations, allocates resources and executes it in parallel on top of Spark and Hadoop.
Keywords: building a productive domain-specific cloud for big data processing and analytics service
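A minimal PySpark sketch of the template idea described above: the user supplies a per-record kernel and the platform distributes it. The trace-normalization kernel and toy data are invented stand-ins, not the platform's actual templates.

```python
from pyspark.sql import SparkSession

def user_kernel(trace):
    """User-defined processing for one seismic trace (here: simple peak normalization)."""
    peak = max(abs(x) for x in trace) or 1.0
    return [x / peak for x in trace]

if __name__ == "__main__":
    spark = SparkSession.builder.appName("domain-template-demo").getOrCreate()
    traces = [[1.0, -2.0, 4.0], [0.5, 0.25, -0.75]]   # toy input records
    processed = (spark.sparkContext
                 .parallelize(traces)      # distribute records across the cluster
                 .map(user_kernel)         # apply the user's kernel in parallel
                 .collect())
    print(processed)
    spark.stop()
```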
10. Influence of Software Modeling and Design on Domain-Specific Abstract Thinking: Student’s Perspective
Author: Zakarya A. Alzamil. Journal of Software Engineering and Applications, 2013, No. 10, pp. 543-553.
Software engineering has been taught at many institutions as an individual course for many years. Recently, many higher education institutions have offered a BSc degree in Software Engineering. Software engineers are required, especially at small enterprises, to play many roles, sometimes simultaneously. Besides technical and managerial skills, software engineers should have additional intellectual skills such as domain-specific abstract thinking. Therefore, the software engineering curriculum should help students build and improve their skills to meet labor market needs. This study aims to explore the perceptions of software engineering students on the influence of learning software modeling and design on their domain-specific abstract thinking. We also explore the role of the course project in improving their domain-specific abstract thinking. The study results show that most of the surveyed students believe that learning and practicing modeling and design concepts contribute to their ability to think abstractly about a specific domain. However, this finding is influenced by the students' lack of comprehension of some modeling and design aspects (e.g., generalization). We believe that such aspects should be introduced to students at early levels of the software engineering curriculum, which will certainly improve their ability to think abstractly about a specific domain.
Keywords: domain-specific abstract thinking; software modeling; software design; teaching modeling and design; software engineering education
11. Construction and application of knowledge graph for grid dispatch fault handling based on pre-trained model
Authors: Zhixiang Ji, Xiaohui Wang, Jie Zhang, Di Wu. Global Energy Interconnection (EI, CSCD), 2023, No. 4, pp. 493-504.
With the construction of new power systems, the power grid has become extremely large, with an increasing proportion of new energy and AC/DC hybrid connections. The dynamic characteristics and fault patterns of the power grid are complex; additionally, power grid control is difficult, operation risks are high, and the task of fault handling is arduous. Traditional power-grid fault handling relies primarily on human experience. Differences in, and the lack of, knowledge reserves among control personnel restrict the accuracy and timeliness of fault handling. Therefore, this mode of operation is no longer suitable for the requirements of new systems. Based on the multi-source heterogeneous data of power grid dispatch, this paper proposes a joint entity–relationship extraction method for power-grid dispatch fault processing based on a pre-trained model, constructs a knowledge graph of power-grid dispatch fault processing, and designs and develops a fault-processing auxiliary decision-making system based on the knowledge graph. The system was applied in a provincial dispatch control center, where it effectively improved the accident-handling ability and the intelligence level of accident management and control of the power grid.
Keywords: power-grid dispatch fault handling; knowledge graph; pre-trained model; auxiliary decision-making
12. Leveraging Vision-Language Pre-Trained Model and Contrastive Learning for Enhanced Multimodal Sentiment Analysis
Authors: Jieyu An, Wan Mohd Nazmee Wan Zainon, Binfen Ding. Intelligent Automation & Soft Computing (SCIE), 2023, No. 8, pp. 1673-1689.
Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modes, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation is attributed to their training on unimodal data, and necessitates the use of complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning by utilizing a vision-language pre-trained model to extract both visual and textual representations in a unified framework. We employ a Transformer architecture to integrate these representations, thereby enabling the capture of rich semantic information in image-text pairs. To further enhance the representation learning of these pairs, we introduce our proposed multimodal contrastive learning method, which leads to improved performance in sentiment analysis tasks. Our approach is evaluated through extensive experiments on two publicly accessible datasets, where we demonstrate its effectiveness. We achieve a significant improvement in sentiment analysis accuracy, indicating the superiority of our approach over existing techniques. These results highlight the potential of multimodal sentiment analysis and underscore the importance of considering the intrinsic semantic connections between modalities for accurate sentiment assessment.
Keywords: multimodal sentiment analysis; vision-language pre-trained model; contrastive learning; sentiment classification
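A compact sketch of an image-text contrastive objective (InfoNCE-style) of the kind referenced in this abstract. The encoder outputs are random placeholders and the temperature is an assumed value, not the paper's setting.

```python
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Pull matched image-text pairs together, push mismatched pairs apart."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature     # pairwise similarities
    targets = torch.arange(img_emb.size(0))          # i-th image matches i-th text
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

batch = 16
image_features = torch.randn(batch, 512)   # stand-in for vision-language encoder outputs
text_features = torch.randn(batch, 512)
print(float(multimodal_contrastive_loss(image_features, text_features)))
```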
13. Pre-Training Physics-Informed Neural Network with Mixed Sampling and Its Application in High-Dimensional Systems (Cited by 1)
Authors: LIU Haiyi, ZHANG Yabin, WANG Lei. Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2024, No. 2, pp. 494-510.
Recently, the physics-informed neural network has shown remarkable ability in the context of solving low-dimensional nonlinear partial differential equations. However, for some cases of high-dimensional systems, such a technique may be time-consuming and inaccurate. In this paper, the authors put forward a pre-training physics-informed neural network with mixed sampling (pPINN) to address these issues. Based only on the initial and boundary conditions, the authors design the pre-training stage to filter out the set of misfitting points, which is regarded as part of the training points in the next stage. The authors further take the parameters of the neural network in Stage 1 as the initialization in Stage 2. The advantage of the proposed approach is that it takes less time to transfer the valuable information from the first stage to the second one to improve the calculation accuracy, especially for high-dimensional systems. To verify the performance of the pPINN algorithm, the authors first focus on the growing-and-decaying mode of the line rogue wave in the Davey-Stewartson I equation. Another case is the accelerated motion of a lump in the inhomogeneous Kadomtsev-Petviashvili equation, which admits a more complex evolution than the uniform equation. The exact solution provides a perfect sample for data experiments, and can also be used as a reference frame to identify the performance of the algorithm. The experiments confirm that the pPINN algorithm can improve the prediction accuracy and training efficiency well, and reduce the training time to a large extent for simulating nonlinear waves of high-dimensional equations.
Keywords: high-dimensional systems; mixed sampling; nonlinear wave; pre-training physics-informed neural network
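A schematic two-stage sketch of the pre-train-then-fine-tune workflow the abstract outlines, shown on the toy advection equation u_t + u_x = 0 with u(x, 0) = sin(x). The network size, the residual-based rule used here to pick out misfitting points, and the training schedule are illustrative assumptions, not the pPINN configuration.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))

def pde_residual(xt: torch.Tensor) -> torch.Tensor:
    """Residual of u_t + u_x = 0 at points xt = [x, t], via automatic differentiation."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    return grads[:, 1:2] + grads[:, 0:1]          # u_t + u_x

x0 = torch.rand(200, 1) * 6.28
ic_xt = torch.cat([x0, torch.zeros_like(x0)], dim=1)           # initial-condition points (t = 0)
ic_u = torch.sin(x0)
collocation = torch.rand(2000, 2) * torch.tensor([6.28, 1.0])  # candidate interior points
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Stage 1: pre-train on initial/boundary data only, then keep the worst-fitting
# collocation points (largest PDE residual) as extra training points for Stage 2.
for _ in range(500):
    opt.zero_grad()
    ((net(ic_xt) - ic_u) ** 2).mean().backward()
    opt.step()
scores = pde_residual(collocation.clone()).abs().squeeze().detach()
hard_points = collocation[torch.topk(scores, k=500).indices]

# Stage 2: reuse the Stage-1 weights and add the PDE residual on the selected points.
for _ in range(500):
    opt.zero_grad()
    loss = ((net(ic_xt) - ic_u) ** 2).mean() + (pde_residual(hard_points) ** 2).mean()
    loss.backward()
    opt.step()
print("final Stage-2 loss:", float(loss))
```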
14. May ChatGPT be a tool producing medical information for common inflammatory bowel disease patients' questions? An evidence-controlled analysis (Cited by 1)
Authors: Antonietta Gerarda Gravina, Raffaele Pellegrino, Marina Cipullo, Giovanna Palladino, Giuseppe Imperio, Andrea Ventura, Salvatore Auletta, Paola Ciamarra, Alessandro Federico. World Journal of Gastroenterology (SCIE, CAS), 2024, No. 1, pp. 17-33.
Artificial intelligence is increasingly entering everyday healthcare. Large language model (LLM) systems such as Chat Generative Pre-trained Transformer (ChatGPT) have become potentially accessible to everyone, including patients with inflammatory bowel diseases (IBD). However, significant ethical issues and pitfalls exist in innovative LLM tools. The hype generated by such systems may lead to unweighted patient trust in these systems. Therefore, it is necessary to understand whether LLMs (trendy ones, such as ChatGPT) can produce plausible medical information (MI) for patients. This review examined ChatGPT's potential to provide MI regarding questions commonly addressed by patients with IBD to their gastroenterologists. From the review of the outputs provided by ChatGPT, this tool showed some attractive potential while having significant limitations in updating and detailing information and providing inaccurate information in some cases. Further studies and refinement of ChatGPT, possibly aligning the outputs with the leading medical evidence provided by reliable databases, are needed.
Keywords: Crohn's disease; ulcerative colitis; inflammatory bowel disease; Chat Generative Pre-trained Transformer; large language model; artificial intelligence
15. Personality Trait Detection via Transfer Learning
Authors: Bashar Alshouha, Jesus Serrano-Guerrero, Francisco Chiclana, Francisco P. Romero, Jose A. Olivas. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 1933-1956.
Personality recognition plays a pivotal role when developing user-centric solutions such as recommender systems or decision support systems across various domains, including education, e-commerce, or human resources. Traditional machine learning techniques have been broadly employed for personality trait identification; nevertheless, the development of new technologies based on deep learning has led to new opportunities to improve their performance. This study focuses on the capabilities of pre-trained language models such as BERT, RoBERTa, ALBERT, ELECTRA, ERNIE, or XLNet to deal with the task of personality recognition. These models are able to capture structural features from textual content and comprehend a multitude of language facets and complex features such as hierarchical relationships or long-term dependencies. This makes them suitable for classifying multi-label personality traits from reviews while mitigating computational costs. The approach centers on developing an architecture based on different layers able to capture the semantic context and structural features from texts. Moreover, it fine-tunes the previous models using the MyPersonality dataset, which comprises 9,917 status updates contributed by 250 Facebook users. These status updates are categorized according to the well-known Big Five personality model, setting the stage for a comprehensive exploration of personality traits. To test the proposal, a set of experiments has been performed using different metrics such as the exact match ratio, Hamming loss, zero-one loss, precision, recall, F1-score, and weighted averages. The results reveal that ERNIE is the top-performing model, achieving an exact match ratio of 72.32%, an accuracy rate of 87.17%, and an F1-score of 84.41%. The findings demonstrate that the tested models substantially outperform other state-of-the-art studies, enhancing the accuracy by at least 3% and confirming them as powerful tools for personality recognition. These findings represent substantial advancements in personality recognition, making them appropriate for the development of user-centric applications.
Keywords: personality trait detection; pre-trained language model; Big Five model; transfer learning
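A brief sketch of multi-label Big Five classification with a pre-trained encoder, using the Hugging Face multi-label problem type. The checkpoint, decision threshold, and example status update are placeholders, and the classification head is untrained until fine-tuned on MyPersonality.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

TRAITS = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(TRAITS),
    problem_type="multi_label_classification",   # sigmoid outputs + BCE loss when fine-tuning
)

inputs = tok("Spent the whole weekend planning next month's trip in detail.", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
predicted = [t for t, p in zip(TRAITS, probs) if p > 0.5]     # threshold is an assumption
print({t: round(float(p), 2) for t, p in zip(TRAITS, probs)}, predicted)
```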
16. SHEL: a semantically enhanced hardware-friendly entity linking method
Authors: 亓东林, CHEN Shudong, DU Rong, TONG Da, YU Yong. High Technology Letters (EI, CAS), 2024, No. 1, pp. 13-22.
With the help of pre-trained language models, the accuracy of the entity linking task has made great strides in recent years. However, most models with excellent performance require fine-tuning on a large amount of training data using large pre-trained language models, which sets a hardware threshold for accomplishing this task. Some researchers have achieved competitive results with less training data through ingenious methods, such as utilizing information provided by the named entity recognition model. This paper presents a novel semantic-enhancement-based entity linking approach, named semantically enhanced hardware-friendly entity linking (SHEL), which is designed to be hardware friendly and efficient while maintaining good performance. Specifically, SHEL's semantic enhancement approach consists of three aspects: (1) semantic compression of entity descriptions using a text summarization model; (2) maximizing the capture of mention contexts using asymmetric heuristics; (3) calculating a fixed-size mention representation through pooling operations. This series of semantic enhancement methods effectively improves the model's ability to capture semantic information while taking into account the hardware constraints, and significantly improves the model's convergence speed by more than 50% compared with the strong baseline model proposed in this paper. In terms of performance, SHEL is comparable to the previous method, with superior performance on six well-established datasets, even though SHEL is trained using a smaller pre-trained language model as the encoder.
Keywords: entity linking (EL); pre-trained models; knowledge graph; text summarization; semantic enhancement
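A small sketch of one of the three aspects above: a fixed-size mention representation obtained by pooling token embeddings over an asymmetric context window around the mention. The window sizes, embedding dimension, and mean-pooling choice are assumptions rather than SHEL's exact design.

```python
import torch

def mention_representation(token_embeddings: torch.Tensor, start: int, end: int,
                           left: int = 32, right: int = 96) -> torch.Tensor:
    """Mean-pool the mention span plus an asymmetric left/right context window."""
    lo = max(0, start - left)
    hi = min(token_embeddings.size(0), end + right)
    return token_embeddings[lo:hi].mean(dim=0)    # fixed-size vector regardless of span length

tokens = torch.randn(256, 768)          # encoder outputs for one document (placeholder)
mention_vec = mention_representation(tokens, start=40, end=43)
entity_vecs = torch.randn(1000, 768)    # candidate entity embeddings (placeholder)
scores = torch.nn.functional.cosine_similarity(mention_vec.unsqueeze(0), entity_vecs)
print("best candidate:", int(scores.argmax()))
```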
17. Research status and application of artificial intelligence large models in the oil and gas industry
Authors: LIU He, REN Yili, LI Xin, DENG Yue, WANG Yongtao, CAO Qianwen, DU Jinyang, LIN Zhiwei, WANG Wenjie. Petroleum Exploration and Development (SCIE), 2024, No. 4, pp. 1049-1065.
This article elucidates the concept of large model technology, summarizes the research status of large model technology both domestically and internationally, provides an overview of the application status of large models in vertical industries, outlines the challenges and issues confronted in applying large models in the oil and gas sector, and offers prospects for the application of large models in the oil and gas industry. Existing large models can be briefly divided into three categories: large language models, visual large models, and multimodal large models. The application of large models in the oil and gas industry is still in its infancy. Based on open-source large language models, some oil and gas enterprises have released large language model products using methods like fine-tuning and retrieval augmented generation. Scholars have attempted to develop scenario-specific models for oil and gas operations by using visual/multimodal foundation models. A few researchers have constructed pre-trained foundation models for seismic data processing and interpretation, as well as core analysis. The application of large models in the oil and gas industry faces challenges such as the current data quantity and quality being insufficient to support the training of large models, high research and development costs, and poor algorithm autonomy and controllability. The application of large models should be guided by the needs of the oil and gas business, taking the application of large models as an opportunity to improve data lifecycle management, enhance data governance capabilities, promote the construction of computing power, strengthen the construction of "artificial intelligence + energy" composite teams, and boost the autonomy and controllability of large model technology.
Keywords: foundation model; large language model; visual large model; multimodal large model; large model of oil and gas industry; pre-training; fine-tuning
18. Region-Aware Fashion Contrastive Learning for Unified Attribute Recognition and Composed Retrieval
Authors: WANG Kangping, ZHAO Mingbo. Journal of Donghua University (English Edition) (CAS), 2024, No. 4, pp. 405-415.
Clothing attribute recognition has become an essential technology, which enables users to automatically identify the characteristics of clothes and search for clothing images with similar attributes. However, existing methods cannot recognize newly added attributes and may fail to capture region-level visual features. To address these issues, a region-aware fashion contrastive language-image pre-training (RaF-CLIP) model is proposed. This model aligns cropped and segmented images with category and multiple fine-grained attribute texts, achieving the matching of fashion regions and corresponding texts through contrastive learning. Clothing retrieval finds suitable clothing based on user-specified clothing categories and attributes. To further improve retrieval accuracy, an attribute-guided composed network (AGCN) is introduced as an additional component on RaF-CLIP, specifically designed for composed image retrieval. This task aims to modify the reference image based on textual expressions to retrieve the expected target. By adopting a transformer-based bidirectional attention and gating mechanism, it realizes the fusion and selection of image features and attribute text features. Experimental results show that the proposed model achieves a mean precision of 0.6633 on attribute recognition tasks and a recall@10 (recall@k is defined as the percentage of correct samples appearing in the top k retrieval results) of 39.18 on the composed image retrieval task, satisfying user needs for freely searching for clothing through images and texts.
Keywords: attribute recognition; image retrieval; contrastive language-image pre-training (CLIP); image-text matching; transformer
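An illustrative sketch of gated fusion for composed retrieval (reference image plus attribute text query), loosely following the fusion-and-selection idea in the abstract. The feature dimension and the simple gating layer are assumptions for demonstration, not the AGCN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedComposer(nn.Module):
    """Blend reference-image features with attribute-text features via a learned gate."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, img: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([img, txt], dim=-1)))
        return F.normalize(g * img + (1.0 - g) * txt, dim=-1)

composer = GatedComposer()
ref_image = torch.randn(1, 512)            # CLIP-style image embedding (placeholder)
attr_text = torch.randn(1, 512)            # e.g. "long sleeve, floral print" embedding
gallery = F.normalize(torch.randn(500, 512), dim=-1)
query = composer(ref_image, attr_text)
topk = torch.topk(query @ gallery.t(), k=10).indices   # recall@10-style candidate list
print(topk.squeeze().tolist())
```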
19. Evaluating the role of large language models in inflammatory bowel disease patient information
Authors: Eun Jeong Gong, Chang Seok Bang. World Journal of Gastroenterology (SCIE, CAS), 2024, No. 29, pp. 3538-3540.
This letter evaluates the article by Gravina et al on ChatGPT's potential in providing medical information for inflammatory bowel disease patients. While promising, it highlights the need for advanced techniques like reasoning+action and retrieval-augmented generation to improve accuracy and reliability. Emphasizing that simple question and answer testing is insufficient, it calls for more nuanced evaluation methods to truly gauge large language models' capabilities in clinical applications.
Keywords: Crohn's disease; ulcerative colitis; inflammatory bowel disease; Chat Generative Pre-trained Transformer; large language model; artificial intelligence
20. Short-term displacement prediction for newly established monitoring slopes based on transfer learning
Authors: Yuan Tian, Yang-landuo Deng, Ming-zhi Zhang, Xiao Pang, Rui-ping Ma, Jian-xue Zhang. China Geology (CAS, CSCD), 2024, No. 2, pp. 351-364.
This study makes significant progress in addressing the challenges of short-term slope displacement prediction in the Universal Landslide Monitoring Program, an unprecedented disaster mitigation program in China, where many newly established monitoring slopes lack sufficient historical deformation data, making it difficult to extract deformation patterns and provide effective predictions, which play a crucial role in the early warning and forecasting of landslide hazards. A slope displacement prediction method based on transfer learning is therefore proposed. Initially, the method transfers the deformation patterns learned from slopes with relatively rich deformation data, via a model pre-trained on a multi-slope integrated dataset, to newly established monitoring slopes with limited or even no useful data, thus enabling rapid and efficient predictions for these slopes. Subsequently, as time goes on and monitoring data accumulate, fine-tuning the pre-trained model for individual slopes can further improve prediction accuracy, enabling continuous optimization of prediction results. A case study indicates that, after being trained on a multi-slope integrated dataset, the TCN-Transformer model can efficiently serve as a pre-trained model for displacement prediction at newly established monitoring slopes. The three-day average RMSE is significantly reduced by 34.6% compared to models trained only on individual slope data, and the model also successfully predicts the majority of deformation peaks. The model fine-tuned on accumulated data from the target newly established monitoring slope further reduces the three-day RMSE by 37.2%, demonstrating considerable predictive accuracy. In conclusion, taking advantage of transfer learning, the proposed slope displacement prediction method effectively utilizes the available data, enabling the rapid deployment and continual refinement of displacement predictions on newly established monitoring slopes.
Keywords: landslide; slope displacement prediction; transfer learning; integrated dataset; Transformer; pre-trained model; Universal Landslide Monitoring Program (ULMP); geological hazards survey engineering
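A schematic sketch of the pre-train-on-pooled-slopes, fine-tune-on-target-slope workflow described above. The GRU forecaster, window length, and learning rates are simple stand-ins for the paper's TCN-Transformer setup, not its actual configuration.

```python
import torch
import torch.nn as nn

class DisplacementForecaster(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: [batch, window, 1] displacement history
        _, h = self.rnn(x)
        return self.head(h[-1])                # predict the next-step displacement

def train(model, series, steps, lr):
    """Fit the model to predict the last value of each window from the preceding values."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(series[:, :-1, :]), series[:, -1, :])
        loss.backward()
        opt.step()
    return model

multi_slope_data = torch.randn(64, 31, 1)      # pooled windows from data-rich slopes (toy)
target_slope_data = torch.randn(4, 31, 1)      # scarce windows from the new slope (toy)

model = train(DisplacementForecaster(), multi_slope_data, steps=200, lr=1e-3)   # pre-training
model = train(model, target_slope_data, steps=50, lr=1e-4)                      # fine-tuning
```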