Journal Articles
286 articles found
Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
1
Authors: R. Sujatha, K. Nimala — Computers, Materials & Continua, SCIE EI, 2024, No. 2, pp. 1669-1686 (18 pages)
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder for Representation of Transformer (BERT), Robustly Optimized BERT pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus with tuned hyperparameters. A hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT and XLNet transformer models. The proposed ensemble model with the fine-tuned parameters achieved an F1 score of 0.88.
Keywords: Bidirectional encoder for representation of transformer, conversation, ensemble model, fine-tuning, generalized autoregressive pretraining for language understanding, generative pre-trained transformer, hyperparameter tuning, natural language processing, robustly optimized BERT pretraining approach, sentence classification, transformer models
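A minimal sketch of the soft-voting idea behind an ensemble of pre-trained language models for the four conversational labels, assuming the Hugging Face transformers library; the checkpoint names and the plain probability-averaging rule are illustrative stand-ins (in the paper each member would be fine-tuned on the annotated conversation corpus, and the GPT/XLNet members are omitted here for brevity):

```python
# Hypothetical sketch of probability-averaging over several fine-tuned encoders
# (EPLM-HT-style); the 4 labels follow the abstract, everything else is assumed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["information", "question", "directive", "commission"]
# Placeholder checkpoints; in practice each would already be fine-tuned on the 4-way task.
CHECKPOINTS = ["bert-base-uncased", "roberta-base", "distilbert-base-uncased"]

def ensemble_predict(sentence: str) -> str:
    probs = []
    for name in CHECKPOINTS:
        tok = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=len(LABELS))
        inputs = tok(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        probs.append(torch.softmax(logits, dim=-1))
    avg = torch.stack(probs).mean(dim=0)          # soft-voting ensemble
    return LABELS[int(avg.argmax(dim=-1))]

print(ensemble_predict("Could you send me the report by Friday?"))
```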
A Transformer-Based Coal-Mine Microseismic Localization Method with Conditional Adversarial Augmentation
2
Authors: 丁琳琳, 胡永亮, 李昱达, 王凯璐, 王慧颖 — 《计算机与数字工程》, 2024, No. 1, pp. 1-8, 17 (9 pages)
With the development of artificial intelligence and the wide deployment of coal-mine microseismic monitoring systems, more and more deep-learning models are being applied to locating the sources of coal-mine microseismic events. However, the currently available microseismic data are small in volume and homogeneous, which is insufficient to train large, deep neural networks, while small, shallow networks cannot adequately represent microseismic sources that are influenced by many factors. The result is low localization accuracy and weak robustness, poor performance in real production settings, and a serious obstacle to applying deep learning to microseismic localization. To address these problems, a Transformer-based coal-mine microseismic localization method with conditional adversarial augmentation, CGAN-Transformer, is proposed. First, a CGAN-based network augments the small, homogeneous microseismic dataset into a large and reasonably diverse one. Second, a Transformer encoder layer converts the microseismic waveform data into feature representations, and its attention mechanism further learns deep waveform features and the complex inter-station dependencies, while Gaussian-distributed random variables offset the influence of differing geological conditions on localization accuracy. Finally, a mixture-density output layer yields Gaussian-distribution parameters from which the optimal source location is computed. Experiments on a dataset from Chile and a dataset from a mine in Liaoning confirm the effectiveness of the method: both the epicenter and hypocenter errors are smaller than those of competing methods, with localization errors reduced by 38% and 12% on the two datasets respectively, improving both localization accuracy and model robustness.
Keywords: generative adversarial network, Transformer model, microseismic localization, attention mechanism, mixture density network
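The mixture-density output layer described above can be pictured as a small head that maps pooled encoder features to the weights, means and standard deviations of a Gaussian mixture over the source coordinates. A hedged PyTorch sketch, with layer sizes, component count and the read-out rule as assumptions rather than the paper's exact design:

```python
# Minimal sketch of a mixture-density output head of the kind the abstract describes
# (Gaussian mixture over 3-D source coordinates); all sizes are assumptions.
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    def __init__(self, d_model: int = 128, n_components: int = 5, out_dim: int = 3):
        super().__init__()
        self.n, self.d = n_components, out_dim
        self.pi = nn.Linear(d_model, n_components)                    # mixture weights
        self.mu = nn.Linear(d_model, n_components * out_dim)          # component means
        self.log_sigma = nn.Linear(d_model, n_components * out_dim)   # log std devs

    def forward(self, h):
        pi = torch.softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(-1, self.n, self.d)
        sigma = torch.exp(self.log_sigma(h)).view(-1, self.n, self.d)
        return pi, mu, sigma

# One common (not necessarily the paper's) read-out: take the mean of the
# highest-weight component as the predicted source location.
head = MDNHead()
pi, mu, sigma = head(torch.randn(2, 128))     # e.g. pooled Transformer-encoder features
best = mu[torch.arange(2), pi.argmax(dim=-1)]
print(best.shape)                             # torch.Size([2, 3])
```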
A Feasibility Study of an Improved Transformer Model for the Automatic Generation of Breast Nodule Ultrasound Reports
3
Authors: 王怡, 周鑫仪, 徐黎明, 邓丹, 冉海涛 — 《临床超声医学杂志》, CSCD, 2024, No. 2, pp. 114-119 (6 pages)
Objective: To apply an improved Transformer model to the automatic generation of breast nodule ultrasound reports and to conduct a preliminary feasibility study. Methods: Ultrasound images from 832 patients with breast nodules (1284 nodules in total) were collected to build the BND dataset. An improved Transformer model was introduced to analyze the BND dataset and generate the corresponding text reports, and it was compared with the Ensemble Model, SSD and R-FCN models. The LGK dataset was also introduced to compare the improved Transformer model with the TieNet, Kerp, VTI and RNCM models. BLEU scores were used to evaluate model performance. Results: On the BND dataset, the improved model achieved BLEU-1, BLEU-2, BLEU-3 and BLEU-4 scores of 0.547, 0.474, 0.352 and 0.282, all higher than those of the Ensemble Model, SSD and R-FCN. On the LGK dataset, the improved Transformer model achieved BLEU-1, BLEU-2, BLEU-3 and BLEU-4 scores of 0.579, 0.391, 0.288 and 0.152. Conclusion: The improved Transformer model can rapidly identify breast nodules and autonomously generate standardized reports. It achieved good BLEU scores compared with the Ensemble Model, SSD and R-FCN models, and its BLEU scores on the LGK dataset were also relatively high, indicating good text generalization performance.
Keywords: deep learning, Transformer model, breast nodule, report generation
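BLEU-1 through BLEU-4, the metrics reported above, can be computed for a single generated report with NLTK as below; the reference and candidate report strings are invented placeholders, not data from the study:

```python
# Hedged illustration of BLEU-1..BLEU-4 scoring with NLTK; the texts are placeholders.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["a hypoechoic nodule is seen in the upper outer quadrant of the left breast".split()]
candidate = "a hypoechoic nodule is seen in the left breast upper outer quadrant".split()

smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple([1.0 / n] * n)          # uniform weights over n-gram orders 1..n
    score = sentence_bleu(reference, candidate, weights=weights, smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")
```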
An Adversarial Example Generation Algorithm Based on Transformer and GAN
4
Authors: 刘帅威, 李智, 王国美, 张丽 — 《计算机工程》, CAS CSCD 北大核心, 2024, No. 2, pp. 180-187 (8 pages)
Adversarial attack and defense is an active research direction in computer security. To address the poor visual quality of gradient-based adversarial example generation methods and the low efficiency of optimization-based methods, an adversarial example generation algorithm based on the Transformer and the generative adversarial network (GAN), Trans-GAN, is proposed. First, exploiting the strong visual representation ability of the Transformer, it is used as a reconstruction network that receives a clean image and generates attack noise. Second, the Transformer reconstruction network serves as the generator and is combined with a discriminator based on a deep convolutional network to form the GAN architecture, which improves the realism of the generated images and stabilizes training; an improved attention mechanism, Targeted Self-Attention, is also proposed, which introduces the target label as prior knowledge during training to guide the model toward adversarial perturbations with a specific attack target. Finally, a skip connection applies the adversarial noise to the clean sample to form the adversarial example, which attacks the target classification network. Experimental results show that Trans-GAN achieves attack success rates above 99.9% against two models on the MNIST dataset and of 96.36% and 98.47% against two models on CIFAR10, outperforming current state-of-the-art generative adversarial-example methods. Compared with the fast gradient sign method and projected gradient descent, the adversarial noise generated by Trans-GAN has a smaller perturbation magnitude, and the resulting adversarial examples look more natural and are harder for the human eye to distinguish.
Keywords: deep neural network, adversarial example, adversarial attack, Transformer model, generative adversarial network, attention mechanism
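For context on the gradient-based baselines the abstract compares against (not on Trans-GAN itself), a standard FGSM step looks like the following; the stand-in MNIST classifier and the ε value are assumptions:

```python
# Standard fast gradient sign method (FGSM) baseline; the classifier is an untrained placeholder.
import torch
import torch.nn as nn

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Return x perturbed one step in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()           # single signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()     # stay in the valid pixel range

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in MNIST classifier
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
print(fgsm(model, x, y).shape)                # torch.Size([4, 1, 28, 28])
```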
Automatic Text Summarization Based on the Transformer Model
5
Authors: 刘志敏, 张琨, 朱浩华 — 《计算机与数字工程》, 2024, No. 2, pp. 482-486, 527 (6 pages)
This paper studies automatic text summarization, whose task is to produce a concise summary that expresses the main meaning of a text. Traditional Seq2Seq models have limited ability to capture and store long-range and global features, so the generated summaries often miss important information. The paper therefore proposes a new abstractive summarization model based on the Transformer, RC-Transformer-PGN (RCTP). The model first extends the Transformer with an additional bidirectional-GRU-based encoder to capture sequential context representations and improve the capture of local information, and then introduces a pointer-generator network and a coverage mechanism to mitigate the out-of-vocabulary and repetition problems. Experimental results on the CNN/Daily Mail dataset show that the model is more competitive than the baseline models.
Keywords: abstractive text summarization, Transformer model, pointer-generator network, coverage mechanism
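The pointer-generator and coverage mechanisms mentioned above are usually formulated as in the pointer-generator literature: the final word distribution mixes vocabulary generation with copying via the attention weights, and a coverage penalty discourages repetition. The RCTP-specific details may differ; one standard form is:

```latex
P(w) \;=\; p_{\mathrm{gen}}\, P_{\mathrm{vocab}}(w)
\;+\; (1 - p_{\mathrm{gen}}) \sum_{i\,:\,w_i = w} a_i^{t},
\qquad
c^{t} = \sum_{t'=0}^{t-1} a^{t'},
\qquad
\mathrm{covloss}_t = \sum_i \min\!\left(a_i^{t},\, c_i^{t}\right)
```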
A Multi-Attribute Controllable Text Summarization Model Based on a Pointer-Generator Network and an Extended Transformer
6
Authors: 冼广铭, 李凡龙, 郑兆明 — 《计算机系统应用》, 2024, No. 4, pp. 246-253 (8 pages)
Such models can generate summaries that conform to user preferences. Previous summarization models focus on controlling a single attribute rather than a combination of attributes. When satisfying multiple control attributes, traditional Seq2Seq multi-attribute controllable summarization models fail to integrate all control attributes, fail to reproduce key information from the source text accurately, and cannot handle out-of-vocabulary words. This paper therefore proposes a model based on an extended Transformer and a pointer generator network (PGN). The extended Transformer expands the single-encoder/single-decoder Transformer into a dual-encoder form that extracts two kinds of textual semantic information, together with a single decoder that can fuse guidance-signal features. The pointer-generator network then chooses between copying words from the source text and generating new summary content from the vocabulary, addressing the out-of-vocabulary (OOV) problem common in summarization. In addition, to encode positional information efficiently, the model uses relative position representations in the attention layers to introduce sequence information. The model can control many important attributes of the summary, including length, topic and specificity. Experiments on the public MACSum dataset show that, compared with previous methods, the proposed model better satisfies user-specified attribute requirements while preserving summary quality.
Keywords: deep learning, controllable text summarization, Transformer model, relative position representation, pointer-generator network
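The relative position representations referred to above are commonly implemented in the form introduced by Shaw et al., where learned embeddings indexed by the clipped offset j−i are added to the keys and values inside attention; the paper's exact variant may differ:

```latex
e_{ij} = \frac{x_i W^{Q} \left(x_j W^{K} + a_{ij}^{K}\right)^{\top}}{\sqrt{d_z}},
\qquad
z_i = \sum_{j} \alpha_{ij} \left(x_j W^{V} + a_{ij}^{V}\right),
\qquad
a_{ij}^{K} = w^{K}_{\mathrm{clip}(j-i,\,k)}
```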
AB-Gen: Antibody Library Design with Generative Pre-trained Transformer and Deep Reinforcement Learning (Cited: 1)
7
Authors: Xiaopeng Xu, Tiantian Xu, Juexiao Zhou, Xingyu Liao, Ruochi Zhang, Yu Wang, Lu Zhang, Xin Gao — Genomics, Proteomics & Bioinformatics, SCIE CAS CSCD, 2023, No. 5, pp. 1043-1053 (11 pages)
Antibody leads must fulfill multiple desirable properties to be clinical candidates. Primarily due to the low throughput of the experimental procedure, the need for such multiproperty optimization causes a bottleneck in preclinical antibody discovery and development, because addressing one issue usually causes another. We developed a reinforcement learning (RL) method, named AB-Gen, for antibody library design using a generative pre-trained transformer (GPT) as the policy network of the RL agent. We showed that this model can learn the antibody space of heavy chain complementarity determining region 3 (CDRH3) and generate sequences with similar property distributions. Moreover, when using human epidermal growth factor receptor-2 (HER2) as the target, the agent model of AB-Gen was able to generate novel CDRH3 sequences that fulfill multi-property constraints. In total, 509 generated sequences passed all property filters, and three highly conserved residues were identified. The importance of these residues was further demonstrated by molecular dynamics simulations, confirming that the agent model was capable of grasping important information in this complex optimization task. Overall, the AB-Gen method is able to design novel antibody sequences with a higher success rate than the traditional propose-then-filter approach. It has the potential to be used in practical antibody design, thus empowering the antibody discovery and development process. The source code of AB-Gen is freely available at Zenodo (https://doi.org/10.5281/zenodo.7657016) and BioCode (https://ngdc.cncb.ac.cn/biocode/tools/BT007341).
Keywords: protein design, Transformer, reinforcement learning, generative modeling, multi-objective optimization
A DGA Domain Name Detection Method Based on Transformer and Multi-Feature Fusion (Cited: 3)
8
Authors: 余子丞, 凌捷 — 《计算机工程与科学》, CSCD 北大核心, 2023, No. 8, pp. 1416-1423 (8 pages)
Malicious domain names produced by domain generation algorithms (DGAs) are highly covert, and existing methods detect them with limited accuracy. To address this, a DGA domain name detection method based on the Transformer and multi-feature fusion is proposed. The method uses a Transformer encoder to capture global information about domain name characters, obtains long-range contextual features at different granularities through parallel deep convolutional neural networks, and combines a bidirectional long short-term memory network (BiLSTM) and a self-attention mechanism with a shallow CNN to obtain shallow spatio-temporal features; the long-range contextual features and the shallow spatio-temporal features are then fused for DGA domain detection. Experimental results show that the proposed method outperforms existing malicious-domain detection methods. Compared with CNN, LSTM, L-PCAL and SW-DRN, its accuracy improves by 1.72%, 1.10%, 0.75% and 0.34% in the binary-classification experiment, and by 1.75%, 1.29%, 0.88% and 0.83% in the multi-class experiment.
Keywords: domain generation algorithm, Transformer model, deep convolutional neural network, bidirectional long short-term memory network, self-attention mechanism
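A much-simplified sketch of the feature-fusion idea, assuming PyTorch: character embeddings feed a Transformer encoder (global context), a 1-D convolution (local n-gram features) and a BiLSTM (sequential features), whose pooled outputs are concatenated for the benign/DGA decision. Layer sizes and the reduced branch structure are assumptions, not the paper's exact architecture:

```python
# Simplified multi-feature-fusion DGA detector; all hyperparameters are assumptions.
import torch
import torch.nn as nn

class DGADetector(nn.Module):
    def __init__(self, vocab_size: int = 40, emb: int = 32, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb, padding_idx=0)
        enc_layer = nn.TransformerEncoderLayer(d_model=emb, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=1)   # global character context
        self.conv = nn.Conv1d(emb, hidden, kernel_size=3, padding=1)    # local n-gram features
        self.bilstm = nn.LSTM(emb, hidden // 2, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden * 2 + emb, 2)                        # benign vs. DGA

    def forward(self, ids):                     # ids: (batch, seq_len) character indices
        x = self.embed(ids)
        g = self.encoder(x).mean(dim=1)                                  # global feature
        c = torch.relu(self.conv(x.transpose(1, 2))).max(dim=2).values   # convolutional feature
        s, _ = self.bilstm(x)
        fused = torch.cat([g, c, s.mean(dim=1)], dim=-1)                 # feature fusion
        return self.fc(fused)

model = DGADetector()
print(model(torch.randint(1, 40, (8, 30))).shape)   # torch.Size([8, 2])
```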
Evaluating the role of large language models in inflammatory bowel disease patient information
9
Authors: Eun Jeong Gong, Chang Seok Bang — World Journal of Gastroenterology, SCIE CAS, 2024, No. 29, pp. 3538-3540 (3 pages)
This letter evaluates the article by Gravina et al. on ChatGPT's potential in providing medical information for inflammatory bowel disease patients. While promising, it highlights the need for advanced techniques like reasoning+action and retrieval-augmented generation to improve accuracy and reliability. Emphasizing that simple question and answer testing is insufficient, it calls for more nuanced evaluation methods to truly gauge large language models' capabilities in clinical applications.
Keywords: Crohn's disease, ulcerative colitis, inflammatory bowel disease, chat generative pre-trained transformer, large language model, artificial intelligence
A Transformer-Based Trajectory Generation and Prediction Algorithm for Hypersonic Vehicles
10
Authors: 狄子琦, 王翔宇, 吴双, 周宇 — 《空天防御》, 2023, No. 4, pp. 35-41 (7 pages)
As the strike capability of hypersonic vehicles continues to grow, accurate trajectory prediction has become a research focus for the defending side. Combining the Transformer deep-learning model, this paper proposes a trajectory prediction model based on parameter estimation. First, a control-parameter model governing vehicle maneuvering is constructed from an aerodynamic perspective, and the variation patterns of the control parameters under different maneuver modes are summarized. Second, a Transformer-based model for predicting the trajectory control parameters is built, with a neural-network loss function designed to balance the optimization of the control parameters against the physical trajectory. Finally, the control-parameter model is used to generate simulated trajectories under several maneuver modes, which are fed to the prediction model so that it learns how the control parameters and trajectory data vary; test trajectories are then fed to the trained model to evaluate its performance. The results show that the proposed trajectory prediction model predicts hypersonic vehicle trajectories well under all the maneuver modes considered.
Keywords: hypersonic vehicle, trajectory generation, trajectory prediction, aerodynamics, Transformer model, control parameter optimization
Generative pretrained transformer 4: an innovative approach to facilitate value-based healthcare
11
Authors: Han Lyu, Zhixiang Wang, Jia Li, Jing Sun, Xinghao Wang, Pengling Ren, Linkun Cai, Zhenchang Wang, Max Wintermark — Intelligent Medicine, EI CSCD, 2024, No. 1, pp. 10-15 (6 pages)
Objective: Appropriate medical imaging is important for value-based care. We aim to evaluate the performance of generative pretrained transformer 4 (GPT-4), an innovative natural language processing model, in providing appropriate medical imaging automatically in different clinical scenarios. Methods: Institutional Review Board (IRB) approval was not required due to the use of nonidentifiable data. We used 112 questions from the American College of Radiology (ACR) Radiology-TEACHES Program, an open-sourced question-and-answer program to guide appropriate medical imaging, as prompts, including 69 free-text case vignettes and 43 simplified cases. For the performance evaluation of GPT-4 and GPT-3.5, we considered the recommendations of the ACR guidelines as the gold standard, and three radiologists analyzed the consistency of the responses from the GPT models with those of the ACR. We set a five-score criterion for the evaluation of consistency. A paired t-test was applied to assess the statistical significance of the findings. Results: For free-text case vignettes, the accuracy of GPT-4 was 92.9%, whereas the accuracy of GPT-3.5 was just 78.3%. GPT-4 can provide more appropriate suggestions to reduce the overutilization of medical imaging than GPT-3.5 (t=3.429, P=0.001). For simplified scenarios, the accuracy of GPT-4 and GPT-3.5 was 66.5% and 60.0%, respectively; the difference was not statistically significant (t=1.858, P=0.070). GPT-4 was characterized by longer reaction times (27.1 s on average) and more extensive responses (137.1 words on average) than GPT-3.5. Conclusion: As an advanced tool for improving value-based healthcare in clinics, GPT-4 may guide appropriate medical imaging accurately and efficiently.
Keywords: generative pretrained transformer 4 model, natural language processing, medical imaging, appropriateness
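The paired t-test used above compares the two models' per-case consistency scores; a hedged SciPy illustration on invented placeholder ratings (not the study's data):

```python
# Paired t-test over per-case ratings of GPT-4 vs. GPT-3.5; the scores are placeholders.
from scipy import stats

gpt4_scores = [5, 4, 5, 5, 3, 4, 5, 4]    # hypothetical consistency ratings per case
gpt35_scores = [4, 4, 3, 5, 2, 3, 4, 4]

t_stat, p_value = stats.ttest_rel(gpt4_scores, gpt35_scores)   # paired comparison
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```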
Vision Enhanced Generative Pre-trained Language Model for Multimodal Sentence Summarization
12
Authors: Liqiang Jing, Yiren Li, Junhao Xu, Yongcan Yu, Pei Shen, Xuemeng Song — Machine Intelligence Research, EI CSCD, 2023, No. 2, pp. 289-298 (10 pages)
Multimodal sentence summarization (MMSS) is a new yet challenging task that aims to generate a concise summary of a long sentence and its corresponding image. Although existing methods have gained promising success in MMSS, they overlook the powerful generation ability of generative pre-trained language models (GPLMs), which have been shown to be effective in many text generation tasks. To fill this research gap, we propose using GPLMs to promote the performance of MMSS. Notably, adopting GPLMs to solve MMSS inevitably faces two challenges: 1) What fusion strategy should we use to inject visual information into GPLMs properly? 2) How do we keep the GPLM's generation ability intact to the utmost extent when the visual feature is injected into the GPLM? To address these two challenges, we propose a vision enhanced generative pre-trained language model for MMSS, dubbed Vision-GPLM. In Vision-GPLM, we obtain features of the visual and textual modalities with two separate encoders and utilize a text decoder to produce a summary. In particular, we utilize multi-head attention to fuse the features extracted from the visual and textual modalities to inject the visual feature into the GPLM. Meanwhile, we train Vision-GPLM in two stages: the vision-oriented pre-training stage and the fine-tuning stage. In the vision-oriented pre-training stage, we train the visual encoder by the masked language model task while the other components are frozen, aiming to obtain homogeneous representations of text and image. In the fine-tuning stage, we train all the components of Vision-GPLM by the MMSS task. Extensive experiments on a public MMSS dataset verify the superiority of our model over existing baselines.
Keywords: multimodal sentence summarization (MMSS), generative pre-trained language model (GPLM), natural language generation, deep learning, artificial intelligence
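The multi-head-attention fusion step described above, in which text tokens attend over image patch features before decoding, can be sketched with PyTorch as follows; the dimensions and feature shapes are assumptions:

```python
# Hedged sketch of injecting visual features into text features with multi-head attention.
import torch
import torch.nn as nn

d_model = 256
fusion = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

text_feats = torch.randn(2, 20, d_model)     # e.g. token features from the text encoder
visual_feats = torch.randn(2, 49, d_model)   # e.g. patch features from the visual encoder

# Text tokens attend over image patches; the output is a vision-enhanced text
# representation that a GPLM decoder could consume.
fused, attn_weights = fusion(query=text_feats, key=visual_feats, value=visual_feats)
print(fused.shape)   # torch.Size([2, 20, 256])
```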
A Transformer-Based Apple Leaf Disease Recognition Model with Strong Generalization (Cited: 9)
13
Authors: 徐艳蕾, 孔朔琳, 陈清源, 高志远, 李陈孝 — 《农业工程学报》, EI CAS CSCD 北大核心, 2022, No. 16, pp. 198-206 (9 pages)
Generalization is the key to applying disease-recognition models in multiple scenarios. For apple leaf disease data collected in different environments, this study proposes CaTNet, a strongly generalizing apple leaf disease recognition model that extracts multiple types of features. The model adopts a dual-branch structure: a convolutional neural network branch is designed to extract local features from apple leaf images, and a vision Transformer branch with squeeze-and-expansion capability is built to extract global features; the two kinds of features are then fused so that the Transformer branch can also learn local features and the convolutional branch can also learn global features. Compared with various convolutional neural network and Transformer models, the proposed model generalizes better: trained only on leaf data collected in a laboratory environment, it reaches 80% recognition accuracy on data from natural environments, a substantial improvement over the 72.14% accuracy of the convolutional network EfficientNetV2 and the 52.72% accuracy of the Transformer network PVT. The model effectively improves recognition accuracy on data from different environments and alleviates the high training cost and weak generalization of deep-learning models.
Keywords: image recognition, agriculture, convolutional neural network, apple leaf disease, Transformer model, strong generalization, feature fusion
A Data-to-Text Generation Method Combining the Transformer Model with Deep Neural Networks (Cited: 11)
14
Authors: 许晓泓, 何霆, 王华珍, 陈坚 — 《重庆大学学报(自然科学版)》, EI CAS CSCD 北大核心, 2020, No. 7, pp. 91-100 (10 pages)
Data-to-text generation is a natural language processing approach that generates coherent text from structured data. In recent years, thanks to end-to-end trained deep neural networks, data-to-text generation has shown great potential: it can process large amounts of data and automatically generate coherent text, and is commonly used in news writing, report generation and similar scenarios. However, existing work reasons poorly about concrete values, times and other data details, fails to fully exploit the structural relations among data items to provide sound generation guidance, and tends to train semantics and syntax separately during generation. This paper therefore proposes a data-to-text generation method that combines the Transformer model with deep neural networks, together with a Transformer Text Planning (TTP) algorithm for content planning, which effectively addresses these problems. Experiments on the public Rotowire dataset show that the method outperforms existing data-to-text generation models and can be applied directly to generating coherent text from structured data, giving it practical application value.
Keywords: text generation, Transformer model, content pre-selection, content planning, deep neural network
Perturbation to Noether Symmetries and Adiabatic Invariants for Generalized Birkhoff Systems Based on El-Nabulsi Dynamical Model (Cited: 2)
15
Authors: 宋传静, 张毅 — Transactions of Nanjing University of Aeronautics and Astronautics, EI CSCD, 2015, No. 4, pp. 421-427 (7 pages)
With the action of small perturbation on generalized El-Nabulsi-Birkhoff fractional equations, the perturbation to Noether symmetries and adiabatic invariants are studied under the framework of El-Nabulsi's fractional model. Firstly, based on the invariance of El-Nabulsi-Pfaff action under the infinitesimal transformations of group, the exact invariants are given. Secondly, on the basis of the definition of higher order adiabatic invariants of a dynamical system, the adiabatic invariants of the Noether symmetric perturbation for disturbed generalized El-Nabulsi's fractional Birkhoff system are presented under some conditions, and some special cases are discussed. Finally, an example known as the Hojman-Urrutia problem is given to illustrate the application of the results.
Keywords: perturbation to Noether symmetry, adiabatic invariant, El-Nabulsi dynamical model, generalized Birkhoff system, infinitesimal transformation
Enhancing the resolution of seismic data based on the generalized S-transform (Cited: 3)
16
Authors: Tian Jianhua, Song Wei, Yang Feizhou — Petroleum Science, SCIE CAS CSCD, 2009, No. 2, pp. 153-157 (5 pages)
In this paper, we analyze the seismic signal in the time-frequency domain using the generalized S-transform combined with spectrum modeling. Without assuming that the reflection coefficients are random white noise as in the conventional resolution-enhancement techniques, the wavelet, which changes with time and frequency, was simulated and eliminated. After applying the inverse S-transform to the processed instantaneous spectrum, the signal in the time domain was obtained again with a more balanced spectrum and a broader frequency band. The quality of the seismic data was improved without additional noise.
Keywords: time-frequency domain, generalized S-transform, spectrum modeling, instantaneous spectrum, balanced spectrum
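For orientation, one common parameterization of the generalized S-transform adds window parameters λ and p to the standard S-transform's frequency-dependent Gaussian window (setting λ = p = 1 recovers the standard S-transform); the paper may use a different variant:

```latex
S(\tau, f) = \int_{-\infty}^{\infty} x(t)\,
\frac{\lvert f\rvert^{p}}{\sqrt{2\pi}\,\lambda}\,
\exp\!\left(-\frac{(\tau - t)^{2} f^{2p}}{2\lambda^{2}}\right)
e^{-i 2\pi f t}\, dt ,
\qquad \lambda > 0,\; p > 0
```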
Generalized Thermo Elasticity in an Infinite Nonhomogeneous Solid Having a Spherical Cavity Using DPL Model
17
Authors: Ahmed Elsayed Abouelrega — Applied Mathematics, 2011, No. 5, pp. 625-632 (8 pages)
The induced temperature, displacement, and stress fields in an infinite nonhomogeneous elastic medium having a spherical cavity are obtained in the context of the dual-phase-lag model. The surface of the cavity is stress free and is subjected to a thermal shock. The material is elastic and has an inhomogeneity in the radial direction. The type of nonhomogeneity is such that the elastic constants, thermal conductivity and density are proportional to the nth power of the radial distance. The solutions are obtained analytically employing the Laplace transform technique. The numerical inversion of the transforms is carried out using Fourier series expansions. The stresses, temperature and displacement are computed and presented graphically. A comparison of the results for different theories is presented.
Keywords: generalized thermoelasticity, nonhomogeneous, functionally graded material (FGM), Laplace transform, three-phase-lag model
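The numerical Laplace inversion by Fourier series expansion mentioned above usually refers to a formula of the following standard (Honig-Hirdes/Durbin-type) form, where \bar{f}(s) is the transform, c lies to the right of all singularities, T is half the expansion period and N the number of retained terms; the paper may use a refined variant:

```latex
f(t) \approx \frac{e^{ct}}{T}\left[\tfrac{1}{2}\,\bar{f}(c)
+ \sum_{k=1}^{N} \operatorname{Re}\!\left\{\bar{f}\!\left(c + \frac{i k \pi}{T}\right)
e^{\,i k \pi t / T}\right\}\right],
\qquad 0 < t < 2T
```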
Investigation of Probability Generating Function in an Interdependent M/M/1:(∞;GD) Queueing Model with Controllable Arrival Rates Using Rouche's Theorem
18
Authors: Vishwa Nath Maurya — Open Journal of Optimization, 2012, No. 2, pp. 34-38 (5 pages)
The present paper deals with an M/M/1:(∞;GD) queueing model with interdependent controllable arrival and service rates, wherein customers arrive in the system according to a Poisson distribution with two different arrival rates, slower and faster, as per the controllable arrival policy. Keeping in view the general trend of interdependent arrival and service processes, it is presumed that the random variables of the arrival and service processes follow a bivariate Poisson distribution and that the server provides his services under the general discipline of service rule in an infinitely large waiting space. In this paper, our central attention is to explore the probability generating functions using Rouche's theorem in both cases of slower and faster arrival rates of the queueing model under consideration, which may be helpful for mathematicians and researchers in establishing significant performance measures of the model. Moreover, for the purpose of highlighting the application aspect of our investigated result, very recently Maurya [1] has successfully derived the expected busy periods of the server in both cases of slower and faster arrival rates, which are also presented at the end of this paper.
Keywords: interdependent queueing model, bivariate Poisson process, controllable arrival rates, probability generating function, Laplace transform, Rouche's theorem, performance measures
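For orientation, in the classical single-rate M/M/1 queue (not the paper's interdependent, bivariate-Poisson variant with controllable arrival rates) the steady-state probability generating function of the queue length has the closed form below with ρ = λ/μ < 1; the paper derives analogous functions separately for the slower and faster arrival rates via Rouche's theorem:

```latex
P(z) \;=\; \sum_{n=0}^{\infty} p_n z^{n}
\;=\; \sum_{n=0}^{\infty} (1-\rho)\rho^{n} z^{n}
\;=\; \frac{1-\rho}{1-\rho z},
\qquad \rho = \frac{\lambda}{\mu} < 1,\; \lvert z\rvert \le 1
```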
Offline Pre-trained Multi-agent Decision Transformer
19
Authors: Linghui Meng, Muning Wen, Chenyang Le, Xiyun Li, Dengpeng Xing, Weinan Zhang, Ying Wen, Haifeng Zhang, Jun Wang, Yaodong Yang, Bo Xu — Machine Intelligence Research, EI CSCD, 2023, No. 2, pp. 233-248 (16 pages)
Offline reinforcement learning leverages previously collected offline datasets to learn optimal policies with no need to access the real environment. Such a paradigm is also desirable for multi-agent reinforcement learning (MARL) tasks, given the combinatorially increased interactions among agents and with the environment. However, in MARL, the paradigm of offline pre-training with online fine-tuning has not been studied, nor are datasets or benchmarks for offline MARL research available. In this paper, we facilitate the research by providing large-scale datasets and using them to examine the usage of the decision transformer in the context of MARL. We investigate the generalization of MARL offline pre-training in the following three aspects: 1) between single agents and multiple agents, 2) from offline pre-training to online fine-tuning, and 3) to multiple downstream tasks with few-shot and zero-shot capabilities. We start by introducing the first offline MARL dataset with diverse quality levels based on the StarCraft II environment, and then propose the novel architecture of the multi-agent decision transformer (MADT) for effective offline learning. MADT leverages the transformer's modelling ability for sequence modelling and integrates it seamlessly with both offline and online MARL tasks. A significant benefit of MADT is that it learns generalizable policies that can transfer between different types of agents under different task scenarios. On the StarCraft II offline dataset, MADT outperforms state-of-the-art offline reinforcement learning (RL) baselines, including BCQ and CQL. When applied to online tasks, the pre-trained MADT significantly improves sample efficiency and enjoys strong performance in both few-shot and zero-shot cases. To the best of our knowledge, this is the first work that studies and demonstrates the effectiveness of offline pre-trained models in terms of sample efficiency and generalizability enhancements for MARL.
Keywords: pre-training model, multi-agent reinforcement learning (MARL), decision making, Transformer, offline reinforcement learning
Research on Artificial-Intelligence-Enabled Reform of Continuing Education for General Practitioners
20
Authors: 张璇, 陈琦, 王佳贺 — 《中国继续医学教育》, 2024, No. 14, pp. 18-21 (4 pages)
Continuing education in general practice in China has made progress during its rapid development, but a gap remains compared with the international average. Applying artificial intelligence (AI) brings new opportunities and possibilities to the continuing education of general practitioners. An AI-based education and training model can improve the diagnostic and treatment skills of primary-care general practitioners and offer new answers to existing problems. Through AI, general practitioners can obtain professional knowledge and the latest medical advances more conveniently and improve their expertise and skills. Applying this training model can also make the healthcare system more efficient and improve the uneven distribution of medical resources. Policies and standards should be formulated and a sound continuing-education system established to ensure that continuing education for general practitioners operates and develops effectively. Continuing education for general practitioners also faces challenges, such as ensuring the quality and reliability of educational content and adapting to emerging technologies and needs. With the development of the Internet and artificial intelligence, continuing education for general practitioners in China has new opportunities and room for growth, which can further raise practitioners' professional competence and skill levels and contribute to the advancement of general-practice training.
Keywords: general practice, general practitioner, continuing education, transformation of the medical model, online education, improvement measures