Sentence classification is the process of categorizing a sentence based on the context of the sentence.Sentence categorization requires more semantic highlights than other tasks,such as dependence parsing,which requir...Sentence classification is the process of categorizing a sentence based on the context of the sentence.Sentence categorization requires more semantic highlights than other tasks,such as dependence parsing,which requires more syntactic elements.Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence,recognizing the progress and comparing impacts.An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus.The conversational sentences are classified into four categories:information,question,directive,and commission.These classification label sequences are for analyzing the conversation progress and predicting the pecking order of the conversation.Ensemble of Bidirectional Encoder for Representation of Transformer(BERT),Robustly Optimized BERT pretraining Approach(RoBERTa),Generative Pre-Trained Transformer(GPT),DistilBERT and Generalized Autoregressive Pretraining for Language Understanding(XLNet)models are trained on conversation corpus with hyperparameters.Hyperparameter tuning approach is carried out for better performance on sentence classification.This Ensemble of Pre-trained Language Models with a Hyperparameter Tuning(EPLM-HT)system is trained on an annotated conversation dataset.The proposed approach outperformed compared to the base BERT,GPT,DistilBERT and XLNet transformer models.The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.展开更多
This letter evaluates the article by Gravina et al on ChatGPT’s potential in providing medical information for inflammatory bowel disease patients.While promising,it highlights the need for advanced techniques like r...This letter evaluates the article by Gravina et al on ChatGPT’s potential in providing medical information for inflammatory bowel disease patients.While promising,it highlights the need for advanced techniques like reasoning+action and retrieval-augmented generation to improve accuracy and reliability.Emphasizing that simple question and answer testing is insufficient,it calls for more nuanced evaluation methods to truly gauge large language models’capabilities in clinical applications.展开更多
模型可以生成符合用户偏好的摘要.之前的摘要模型侧重于单独控制某个属性,而不是多个属性的组合.传统的Seq2Seq多属性可控文本摘要模型在满足多个控制属性时,存在无法整合所有控制属性、无法准确再现文本中关键信息和无法处理单词表外...模型可以生成符合用户偏好的摘要.之前的摘要模型侧重于单独控制某个属性,而不是多个属性的组合.传统的Seq2Seq多属性可控文本摘要模型在满足多个控制属性时,存在无法整合所有控制属性、无法准确再现文本中关键信息和无法处理单词表外单词等问题.为此,本文提出了一种基于扩展Transformer和指针生成网络(pointer generator network,PGN)的模型.模型中的扩展Transformer将Transformer单编码器-单解码器的模型形式扩展成具有双重文本语义信息提取的双编码器和单个可融合指导信号特征的解码器形式.然后利用指针生成网络模型选择从源文本中复制单词或利用词汇表生成新的摘要信息,以解决摘要任务中常出现的OOV(out of vocabulary)问题.此外,为高效完成位置信息编码,模型在注意力层中使用相对位置表示来引入文本的序列信息.模型可以用于控制摘要的许多重要属性,包括长度、主题和具体性等.通过在公开数据集MACSum上的实验表明,相较以往方法,本文提出的模型在确保摘要质量的同时,更加符合用户给定的属性要求.展开更多
Antibody leads must fulfill multiple desirable properties to be clinical candidates.Primarily due to the low throughput in the experimental procedure,the need for such multiproperty optimization causes the bottleneck ...Antibody leads must fulfill multiple desirable properties to be clinical candidates.Primarily due to the low throughput in the experimental procedure,the need for such multiproperty optimization causes the bottleneck in preclinical antibody discovery and development,because addressing one issue usually causes another.We developed a reinforcement learning(RL)method,named AB-Gen,for antibody library design using a generative pre-trained transformer(GPT)as the policy network of the RL agent.We showed that this model can learn the antibody space of heavy chain complementarity determining region 3(CDRH3)and generate sequences with similar property distributions.Besides,when using human epidermal growth factor receptor-2(HER2)as the target,the agent model of AB-Gen was able to generate novel CDRH3 sequences that fulfill multi-property constraints.Totally,509 generated sequences were able to pass all property filters,and three highly conserved residues were identified.The importance of these residues was further demonstrated by molecular dynamics simulations,consolidating that the agent model was capable of grasping important information in this complex optimization task.Overall,the ABGen method is able to design novel antibody sequences with an improved success rate than the traditional propose-then-filter approach.It has the potential to be used in practical antibody design,thus empowering the antibody discovery and development process.The source code of AB-Gen is freely available at Zenodo(https://doi.org/10.5281/zenodo.7657016)and BioCode(https://ngdc.cncb.ac.cn/biocode/tools/BT007341).展开更多
Objective Appropriate medical imaging is important for value-based care.We aim to evaluate the performance of generative pretrained transformer 4(GPT-4),an innovative natural language processing model,providing approp...Objective Appropriate medical imaging is important for value-based care.We aim to evaluate the performance of generative pretrained transformer 4(GPT-4),an innovative natural language processing model,providing appropriate medical imaging automatically in different clinical scenarios.Methods Institutional Review Boards(IRB)approval was not required due to the use of nonidentifiable data.Instead,we used 112 questions from the American College of Radiology(ACR)Radiology-TEACHES Program as prompts,which is an open-sourced question and answer program to guide appropriate medical imaging.We included 69 free-text case vignettes and 43 simplified cases.For the performance evaluation of GPT-4 and GPT-3.5,we considered the recommendations of ACR guidelines as the gold standard,and then three radiologists analyzed the consistency of the responses from the GPT models with those of the ACR.We set a five-score criterion for the evaluation of the consistency.A paired t-test was applied to assess the statistical significance of the findings.Results For the performance of the GPT models in free-text case vignettes,the accuracy of GPT-4 was 92.9%,whereas the accuracy of GPT-3.5 was just 78.3%.GPT-4 can provide more appropriate suggestions to reduce the overutilization of medical imaging than GPT-3.5(t=3.429,P=0.001).For the performance of the GPT models in simplified scenarios,the accuracy of GPT-4 and GPT-3.5 was 66.5%and 60.0%,respectively.The differences were not statistically significant(t=1.858,P=0.070).GPT-4 was characterized by longer reaction times(27.1 s in average)and extensive responses(137.1 words on average)than GPT-3.5.Conclusion As an advanced tool for improving value-based healthcare in clinics,GPT-4 may guide appropriate medical imaging accurately and efficiently。展开更多
Multimodal sentence summarization(MMSS)is a new yet challenging task that aims to generate a concise summary of a long sentence and its corresponding image.Although existing methods have gained promising success in MM...Multimodal sentence summarization(MMSS)is a new yet challenging task that aims to generate a concise summary of a long sentence and its corresponding image.Although existing methods have gained promising success in MMSS,they overlook the powerful generation ability of generative pre-trained language models(GPLMs),which have shown to be effective in many text generation tasks.To fill this research gap,we propose to using GPLMs to promote the performance of MMSS.Notably,adopting GPLMs to solve MMSS inevitably faces two challenges:1)What fusion strategy should we use to inject visual information into GPLMs properly?2)How to keep the GPLM′s generation ability intact to the utmost extent when the visual feature is injected into the GPLM.To address these two challenges,we propose a vision enhanced generative pre-trained language model for MMSS,dubbed as Vision-GPLM.In Vision-GPLM,we obtain features of visual and textual modalities with two separate encoders and utilize a text decoder to produce a summary.In particular,we utilize multi-head attention to fuse the features extracted from visual and textual modalities to inject the visual feature into the GPLM.Meanwhile,we train Vision-GPLM in two stages:the vision-oriented pre-training stage and fine-tuning stage.In the vision-oriented pre-training stage,we particularly train the visual encoder by the masked language model task while the other components are frozen,aiming to obtain homogeneous representations of text and image.In the fine-tuning stage,we train all the components of Vision-GPLM by the MMSS task.Extensive experiments on a public MMSS dataset verify the superiority of our model over existing baselines.展开更多
数据到文本的生成是指从结构化数据生成连贯文本的一种自然语言处理方法。近年来,由于端到端训练的深度神经网络的应用,数据到文本生成的方法显示出了巨大潜力。该方法能够处理大量数据自动生成连贯性文本,常用于新闻写作、报告生成等...数据到文本的生成是指从结构化数据生成连贯文本的一种自然语言处理方法。近年来,由于端到端训练的深度神经网络的应用,数据到文本生成的方法显示出了巨大潜力。该方法能够处理大量数据自动生成连贯性文本,常用于新闻写作、报告生成等场景。然而,已有研究中对于数据中具体数值、时间等数据信息的推理存在较大缺陷,无法充分利用数据间的结构信息给出合理的生成指引,并且生成过程容易出现语义与句法分离训练的问题。因此,文中提出一种结合Transformer模型与深度神经网络的数据到文本生成方法,并提出一个用于内容规划的Transformer Text Planning(TTP)算法,有效地解决上述问题。在Rotowire公开数据集上进行方法验证,实验结果表明,文中方法性能优于已有数据到文本生成模型,可直接应用于结构化数据到连贯性文本的生成任务中,具有一定的实际应用价值。展开更多
With the action of small perturbation on generalized El-Nabulsi-Birkhoff fractional equations,the perturbation to Noether symmetries and adiabatic invariants are studied under the framework of El-Nabulsi′s fractional...With the action of small perturbation on generalized El-Nabulsi-Birkhoff fractional equations,the perturbation to Noether symmetries and adiabatic invariants are studied under the framework of El-Nabulsi′s fractional model.Firstly,based on the invariance of El-Nabulsi-Pfaff action under the infinitesimal transformations of group,the exact invariants are given.Secondly,on the basis of the definition of higher order adiabatic invariants of a dynamical system,the adiabatic invariants of the Noether symmetric perturbation for disturbed generalized El-Nabulsi′s fractional Birkhoff system are presented under some conditions,and some special cases are discussed.Finally,an example known as Hojman-Urrutia problem is given to illustrate the application of the results.展开更多
In this paper, we analyze the seismic signal in the time-frequency domain using the generalized S-transform combined with spectrum modeling. Without assuming that the reflection coefficients are random white noise as ...In this paper, we analyze the seismic signal in the time-frequency domain using the generalized S-transform combined with spectrum modeling. Without assuming that the reflection coefficients are random white noise as in the conventional resolution-enhanced techniques, the wavelet which changes with time and frequency was simulated and eliminated. After using the inverse S-transform for the processed instantaneous spectrum, the signal in the time domain was obtained again with a more balanced spectrum and broader frequency band. The quality of seismic data was improved without additional noise.展开更多
The induced temperature, displacement, and stress fields in an infinite nonhomogeneous elastic medium having a spherical cavity are obtained in the context dual-phase-lag model. The surface of the cavity is stress fre...The induced temperature, displacement, and stress fields in an infinite nonhomogeneous elastic medium having a spherical cavity are obtained in the context dual-phase-lag model. The surface of the cavity is stress free and is subjected to a thermal shock. The material is elastic and has an in?homogeneity in the radial direction. The type of non homogeneity is such that the elastic constants, thermal conductivity and density are propor?tional to the nth power of the radial distance. The solutions are obtained analytically employing the Laplace transform technique. The numerical inversion of the transforms is carried out using Fourier series expansions. The stresses, temperature and displacement are computed and presented graphically. A comparison of the results for different theories is presented.展开更多
Present paper deals a M/M/1:(∞;GD) queueing model with interdependent controllable arrival and service rates where- in customers arrive in the system according to poisson distribution with two different arrivals rate...Present paper deals a M/M/1:(∞;GD) queueing model with interdependent controllable arrival and service rates where- in customers arrive in the system according to poisson distribution with two different arrivals rates-slower and faster as per controllable arrival policy. Keeping in view the general trend of interdependent arrival and service processes, it is presumed that random variables of arrival and service processes follow a bivariate poisson distribution and the server provides his services under general discipline of service rule in an infinitely large waiting space. In this paper, our central attention is to explore the probability generating functions using Rouche’s theorem in both cases of slower and faster arrival rates of the queueing model taken into consideration;which may be helpful for mathematicians and researchers for establishing significant performance measures of the model. Moreover, for the purpose of high-lighting the application aspect of our investigated result, very recently Maurya [1] has derived successfully the expected busy periods of the server in both cases of slower and faster arrival rates, which have also been presented by the end of this paper.展开更多
Offline reinforcement learning leverages previously collected offline datasets to learn optimal policies with no necessity to access the real environment.Such a paradigm is also desirable for multi-agent reinforcement...Offline reinforcement learning leverages previously collected offline datasets to learn optimal policies with no necessity to access the real environment.Such a paradigm is also desirable for multi-agent reinforcement learning(MARL)tasks,given the combinatorially increased interactions among agents and with the environment.However,in MARL,the paradigm of offline pre-training with online fine-tuning has not been studied,nor even datasets or benchmarks for offline MARL research are available.In this paper,we facilitate the research by providing large-scale datasets and using them to examine the usage of the decision transformer in the context of MARL.We investigate the generalization of MARL offline pre-training in the following three aspects:1)between single agents and multiple agents,2)from offline pretraining to online fine tuning,and 3)to that of multiple downstream tasks with few-shot and zero-shot capabilities.We start by introducing the first offline MARL dataset with diverse quality levels based on the StarCraftII environment,and then propose the novel architecture of multi-agent decision transformer(MADT)for effective offline learning.MADT leverages the transformer′s modelling ability for sequence modelling and integrates it seamlessly with both offline and online MARL tasks.A significant benefit of MADT is that it learns generalizable policies that can transfer between different types of agents under different task scenarios.On the StarCraft II offline dataset,MADT outperforms the state-of-the-art offline reinforcement learning(RL)baselines,including BCQ and CQL.When applied to online tasks,the pre-trained MADT significantly improves sample efficiency and enjoys strong performance in both few-short and zero-shot cases.To the best of our knowledge,this is the first work that studies and demonstrates the effectiveness of offline pre-trained models in terms of sample efficiency and generalizability enhancements for MARL.展开更多
The pre-training-then-fine-tuning paradigm has been widely used in deep learning.Due to the huge computation cost for pre-training,practitioners usually download pre-trained models from the Internet and fine-tune them...The pre-training-then-fine-tuning paradigm has been widely used in deep learning.Due to the huge computation cost for pre-training,practitioners usually download pre-trained models from the Internet and fine-tune them on downstream datasets,while the downloaded models may suffer backdoor attacks.Different from previous attacks aiming at a target task,we show that a backdoored pre-trained model can behave maliciously in various downstream tasks without foreknowing task information.Attackers can restrict the output representations(the values of output neurons)of trigger-embedded samples to arbitrary predefined values through additional training,namely neuron-level backdoor attack(NeuBA).Since fine-tuning has little effect on model parameters,the fine-tuned model will retain the backdoor functionality and predict a specific label for the samples embedded with the same trigger.To provoke multiple labels in a specific task,attackers can introduce several triggers with predefined contrastive values.In the experiments of both natural language processing(NLP)and computer vision(CV),we show that NeuBA can well control the predictions for trigger-embedded instances with different trigger designs.Our findings sound a red alarm for the wide use of pre-trained models.Finally,we apply several defense methods to NeuBA and find that model pruning is a promising technique to resist NeuBA by omitting backdoored neurons.展开更多
文摘Sentence classification is the process of categorizing a sentence based on the context of the sentence.Sentence categorization requires more semantic highlights than other tasks,such as dependence parsing,which requires more syntactic elements.Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence,recognizing the progress and comparing impacts.An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus.The conversational sentences are classified into four categories:information,question,directive,and commission.These classification label sequences are for analyzing the conversation progress and predicting the pecking order of the conversation.Ensemble of Bidirectional Encoder for Representation of Transformer(BERT),Robustly Optimized BERT pretraining Approach(RoBERTa),Generative Pre-Trained Transformer(GPT),DistilBERT and Generalized Autoregressive Pretraining for Language Understanding(XLNet)models are trained on conversation corpus with hyperparameters.Hyperparameter tuning approach is carried out for better performance on sentence classification.This Ensemble of Pre-trained Language Models with a Hyperparameter Tuning(EPLM-HT)system is trained on an annotated conversation dataset.The proposed approach outperformed compared to the base BERT,GPT,DistilBERT and XLNet transformer models.The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
文摘This letter evaluates the article by Gravina et al on ChatGPT’s potential in providing medical information for inflammatory bowel disease patients.While promising,it highlights the need for advanced techniques like reasoning+action and retrieval-augmented generation to improve accuracy and reliability.Emphasizing that simple question and answer testing is insufficient,it calls for more nuanced evaluation methods to truly gauge large language models’capabilities in clinical applications.
文摘模型可以生成符合用户偏好的摘要.之前的摘要模型侧重于单独控制某个属性,而不是多个属性的组合.传统的Seq2Seq多属性可控文本摘要模型在满足多个控制属性时,存在无法整合所有控制属性、无法准确再现文本中关键信息和无法处理单词表外单词等问题.为此,本文提出了一种基于扩展Transformer和指针生成网络(pointer generator network,PGN)的模型.模型中的扩展Transformer将Transformer单编码器-单解码器的模型形式扩展成具有双重文本语义信息提取的双编码器和单个可融合指导信号特征的解码器形式.然后利用指针生成网络模型选择从源文本中复制单词或利用词汇表生成新的摘要信息,以解决摘要任务中常出现的OOV(out of vocabulary)问题.此外,为高效完成位置信息编码,模型在注意力层中使用相对位置表示来引入文本的序列信息.模型可以用于控制摘要的许多重要属性,包括长度、主题和具体性等.通过在公开数据集MACSum上的实验表明,相较以往方法,本文提出的模型在确保摘要质量的同时,更加符合用户给定的属性要求.
基金supported in part by the Office of Research Administration(ORA),King Abdullah University of Science and Technology(KAUST),Saudi Arabia(Grant Nos.FCC/1/1976-44-01,FCC/1/1976-45-01,REI/1/5234-01-01,and URF/1/4352-01-01)the National Natural Science Foundation of China(Grant No.22273107).
文摘Antibody leads must fulfill multiple desirable properties to be clinical candidates.Primarily due to the low throughput in the experimental procedure,the need for such multiproperty optimization causes the bottleneck in preclinical antibody discovery and development,because addressing one issue usually causes another.We developed a reinforcement learning(RL)method,named AB-Gen,for antibody library design using a generative pre-trained transformer(GPT)as the policy network of the RL agent.We showed that this model can learn the antibody space of heavy chain complementarity determining region 3(CDRH3)and generate sequences with similar property distributions.Besides,when using human epidermal growth factor receptor-2(HER2)as the target,the agent model of AB-Gen was able to generate novel CDRH3 sequences that fulfill multi-property constraints.Totally,509 generated sequences were able to pass all property filters,and three highly conserved residues were identified.The importance of these residues was further demonstrated by molecular dynamics simulations,consolidating that the agent model was capable of grasping important information in this complex optimization task.Overall,the ABGen method is able to design novel antibody sequences with an improved success rate than the traditional propose-then-filter approach.It has the potential to be used in practical antibody design,thus empowering the antibody discovery and development process.The source code of AB-Gen is freely available at Zenodo(https://doi.org/10.5281/zenodo.7657016)and BioCode(https://ngdc.cncb.ac.cn/biocode/tools/BT007341).
基金National Natural Science Foundation of China(Grant Nos.62171297 and 61931013).
文摘Objective Appropriate medical imaging is important for value-based care.We aim to evaluate the performance of generative pretrained transformer 4(GPT-4),an innovative natural language processing model,providing appropriate medical imaging automatically in different clinical scenarios.Methods Institutional Review Boards(IRB)approval was not required due to the use of nonidentifiable data.Instead,we used 112 questions from the American College of Radiology(ACR)Radiology-TEACHES Program as prompts,which is an open-sourced question and answer program to guide appropriate medical imaging.We included 69 free-text case vignettes and 43 simplified cases.For the performance evaluation of GPT-4 and GPT-3.5,we considered the recommendations of ACR guidelines as the gold standard,and then three radiologists analyzed the consistency of the responses from the GPT models with those of the ACR.We set a five-score criterion for the evaluation of the consistency.A paired t-test was applied to assess the statistical significance of the findings.Results For the performance of the GPT models in free-text case vignettes,the accuracy of GPT-4 was 92.9%,whereas the accuracy of GPT-3.5 was just 78.3%.GPT-4 can provide more appropriate suggestions to reduce the overutilization of medical imaging than GPT-3.5(t=3.429,P=0.001).For the performance of the GPT models in simplified scenarios,the accuracy of GPT-4 and GPT-3.5 was 66.5%and 60.0%,respectively.The differences were not statistically significant(t=1.858,P=0.070).GPT-4 was characterized by longer reaction times(27.1 s in average)and extensive responses(137.1 words on average)than GPT-3.5.Conclusion As an advanced tool for improving value-based healthcare in clinics,GPT-4 may guide appropriate medical imaging accurately and efficiently。
文摘Multimodal sentence summarization(MMSS)is a new yet challenging task that aims to generate a concise summary of a long sentence and its corresponding image.Although existing methods have gained promising success in MMSS,they overlook the powerful generation ability of generative pre-trained language models(GPLMs),which have shown to be effective in many text generation tasks.To fill this research gap,we propose to using GPLMs to promote the performance of MMSS.Notably,adopting GPLMs to solve MMSS inevitably faces two challenges:1)What fusion strategy should we use to inject visual information into GPLMs properly?2)How to keep the GPLM′s generation ability intact to the utmost extent when the visual feature is injected into the GPLM.To address these two challenges,we propose a vision enhanced generative pre-trained language model for MMSS,dubbed as Vision-GPLM.In Vision-GPLM,we obtain features of visual and textual modalities with two separate encoders and utilize a text decoder to produce a summary.In particular,we utilize multi-head attention to fuse the features extracted from visual and textual modalities to inject the visual feature into the GPLM.Meanwhile,we train Vision-GPLM in two stages:the vision-oriented pre-training stage and fine-tuning stage.In the vision-oriented pre-training stage,we particularly train the visual encoder by the masked language model task while the other components are frozen,aiming to obtain homogeneous representations of text and image.In the fine-tuning stage,we train all the components of Vision-GPLM by the MMSS task.Extensive experiments on a public MMSS dataset verify the superiority of our model over existing baselines.
文摘数据到文本的生成是指从结构化数据生成连贯文本的一种自然语言处理方法。近年来,由于端到端训练的深度神经网络的应用,数据到文本生成的方法显示出了巨大潜力。该方法能够处理大量数据自动生成连贯性文本,常用于新闻写作、报告生成等场景。然而,已有研究中对于数据中具体数值、时间等数据信息的推理存在较大缺陷,无法充分利用数据间的结构信息给出合理的生成指引,并且生成过程容易出现语义与句法分离训练的问题。因此,文中提出一种结合Transformer模型与深度神经网络的数据到文本生成方法,并提出一个用于内容规划的Transformer Text Planning(TTP)算法,有效地解决上述问题。在Rotowire公开数据集上进行方法验证,实验结果表明,文中方法性能优于已有数据到文本生成模型,可直接应用于结构化数据到连贯性文本的生成任务中,具有一定的实际应用价值。
基金supported by the National Natural Science Foundation of China(Nos.10972151,11272227)the Innovation Program for Scientific Research of Nanjing University of Science and Technology
文摘With the action of small perturbation on generalized El-Nabulsi-Birkhoff fractional equations,the perturbation to Noether symmetries and adiabatic invariants are studied under the framework of El-Nabulsi′s fractional model.Firstly,based on the invariance of El-Nabulsi-Pfaff action under the infinitesimal transformations of group,the exact invariants are given.Secondly,on the basis of the definition of higher order adiabatic invariants of a dynamical system,the adiabatic invariants of the Noether symmetric perturbation for disturbed generalized El-Nabulsi′s fractional Birkhoff system are presented under some conditions,and some special cases are discussed.Finally,an example known as Hojman-Urrutia problem is given to illustrate the application of the results.
基金supported by National 973 Key Basic Research Development Program(No.2007CB209602)National 863 High Technology Research Development Program (No.2007AA067.229)
文摘In this paper, we analyze the seismic signal in the time-frequency domain using the generalized S-transform combined with spectrum modeling. Without assuming that the reflection coefficients are random white noise as in the conventional resolution-enhanced techniques, the wavelet which changes with time and frequency was simulated and eliminated. After using the inverse S-transform for the processed instantaneous spectrum, the signal in the time domain was obtained again with a more balanced spectrum and broader frequency band. The quality of seismic data was improved without additional noise.
文摘The induced temperature, displacement, and stress fields in an infinite nonhomogeneous elastic medium having a spherical cavity are obtained in the context dual-phase-lag model. The surface of the cavity is stress free and is subjected to a thermal shock. The material is elastic and has an in?homogeneity in the radial direction. The type of non homogeneity is such that the elastic constants, thermal conductivity and density are propor?tional to the nth power of the radial distance. The solutions are obtained analytically employing the Laplace transform technique. The numerical inversion of the transforms is carried out using Fourier series expansions. The stresses, temperature and displacement are computed and presented graphically. A comparison of the results for different theories is presented.
文摘Present paper deals a M/M/1:(∞;GD) queueing model with interdependent controllable arrival and service rates where- in customers arrive in the system according to poisson distribution with two different arrivals rates-slower and faster as per controllable arrival policy. Keeping in view the general trend of interdependent arrival and service processes, it is presumed that random variables of arrival and service processes follow a bivariate poisson distribution and the server provides his services under general discipline of service rule in an infinitely large waiting space. In this paper, our central attention is to explore the probability generating functions using Rouche’s theorem in both cases of slower and faster arrival rates of the queueing model taken into consideration;which may be helpful for mathematicians and researchers for establishing significant performance measures of the model. Moreover, for the purpose of high-lighting the application aspect of our investigated result, very recently Maurya [1] has derived successfully the expected busy periods of the server in both cases of slower and faster arrival rates, which have also been presented by the end of this paper.
基金Linghui Meng was supported in part by the Strategic Priority Research Program of the Chinese Academy of Sciences(No.XDA27030300)Haifeng Zhang was supported in part by the National Natural Science Foundation of China(No.62206289).
文摘Offline reinforcement learning leverages previously collected offline datasets to learn optimal policies with no necessity to access the real environment.Such a paradigm is also desirable for multi-agent reinforcement learning(MARL)tasks,given the combinatorially increased interactions among agents and with the environment.However,in MARL,the paradigm of offline pre-training with online fine-tuning has not been studied,nor even datasets or benchmarks for offline MARL research are available.In this paper,we facilitate the research by providing large-scale datasets and using them to examine the usage of the decision transformer in the context of MARL.We investigate the generalization of MARL offline pre-training in the following three aspects:1)between single agents and multiple agents,2)from offline pretraining to online fine tuning,and 3)to that of multiple downstream tasks with few-shot and zero-shot capabilities.We start by introducing the first offline MARL dataset with diverse quality levels based on the StarCraftII environment,and then propose the novel architecture of multi-agent decision transformer(MADT)for effective offline learning.MADT leverages the transformer′s modelling ability for sequence modelling and integrates it seamlessly with both offline and online MARL tasks.A significant benefit of MADT is that it learns generalizable policies that can transfer between different types of agents under different task scenarios.On the StarCraft II offline dataset,MADT outperforms the state-of-the-art offline reinforcement learning(RL)baselines,including BCQ and CQL.When applied to online tasks,the pre-trained MADT significantly improves sample efficiency and enjoys strong performance in both few-short and zero-shot cases.To the best of our knowledge,this is the first work that studies and demonstrates the effectiveness of offline pre-trained models in terms of sample efficiency and generalizability enhancements for MARL.
基金supported by the National Key Research and Development Program of China(No.2020AAA0106500)the National Natural Science Foundation of China(NSFC No.62236004).
文摘The pre-training-then-fine-tuning paradigm has been widely used in deep learning.Due to the huge computation cost for pre-training,practitioners usually download pre-trained models from the Internet and fine-tune them on downstream datasets,while the downloaded models may suffer backdoor attacks.Different from previous attacks aiming at a target task,we show that a backdoored pre-trained model can behave maliciously in various downstream tasks without foreknowing task information.Attackers can restrict the output representations(the values of output neurons)of trigger-embedded samples to arbitrary predefined values through additional training,namely neuron-level backdoor attack(NeuBA).Since fine-tuning has little effect on model parameters,the fine-tuned model will retain the backdoor functionality and predict a specific label for the samples embedded with the same trigger.To provoke multiple labels in a specific task,attackers can introduce several triggers with predefined contrastive values.In the experiments of both natural language processing(NLP)and computer vision(CV),we show that NeuBA can well control the predictions for trigger-embedded instances with different trigger designs.Our findings sound a red alarm for the wide use of pre-trained models.Finally,we apply several defense methods to NeuBA and find that model pruning is a promising technique to resist NeuBA by omitting backdoored neurons.