Abstract: In this study, we investigate a certain triple integral transform and its application to a class of partial differential equations. We discuss various properties of the new transform, including inversion, linearity, existence, scaling, and shifting. We then derive several results involving partial derivatives and establish a multi-convolution theorem. Further, we apply the transform to some classical functions and to many types of partial differential equations, including heat equations, wave equations, Laplace equations, and Poisson equations. Moreover, we present 3-D contour plots of the exact solutions of selected examples for different values of their variables.
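The abstract does not reproduce the transform's definition. As a minimal sketch only, assuming a Laplace-type kernel (a standard member of this class, not necessarily the paper's transform), the operator and the linearity property mentioned above can be written as:

```latex
% Minimal sketch: a triple Laplace-type transform, assumed here purely
% for illustration; the paper's specific kernel may differ.
\[
  \mathcal{T}\{f\}(p,q,s)
    = \int_0^{\infty}\!\int_0^{\infty}\!\int_0^{\infty}
      e^{-(px+qy+sz)}\, f(x,y,z)\, dx\, dy\, dz .
\]
% Linearity, one of the properties discussed above:
\[
  \mathcal{T}\{\alpha f+\beta g\}
    = \alpha\,\mathcal{T}\{f\}+\beta\,\mathcal{T}\{g\},
  \qquad \alpha,\beta\in\mathbb{R}.
\]
```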
Funding: Supported in part by the National Natural Science Foundation of China under Grants 61502162, 61702175, and 61772184; in part by the Fund of the State Key Laboratory of Geo-information Engineering under Grant SKLGIE2016-M-4-2; in part by the Hunan Natural Science Foundation of China under Grant 2018JJ2059; in part by the Key R&D Project of Hunan Province of China under Grant 2018GK2014; in part by the Open Fund of the State Key Laboratory of Integrated Services Networks under Grant ISN17-14; and in part by the Chinese Scholarship Council (CSC) through the College of Computer Science and Electronic Engineering, Hunan University, Changsha 410082, under Grant CSC No. 2018GXZ020784.
Abstract: Transformer models have emerged as dominant networks for various computer vision tasks, as compared to Convolutional Neural Networks (CNNs). Transformers can model long-range dependencies by means of the self-attention mechanism. This study provides a comprehensive survey of recent transformer-based approaches in image and video applications, as well as diffusion models. We begin by discussing existing surveys of vision transformers and comparing them to this work. We then review the main components of a vanilla transformer network, including the self-attention mechanism, feed-forward network, and position encoding. In the main part of this survey, we review recent transformer-based models in three categories: Transformer for downstream tasks, Vision Transformer for Generation, and Vision Transformer for Segmentation. We also provide a comprehensive overview of recent transformer models for video tasks and diffusion models. We compare the performance of various hierarchical transformer networks on multiple tasks over popular benchmark datasets. Finally, we explore future research directions for further improving the field.
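Since the self-attention mechanism is the component the vanilla transformer builds on, a minimal single-head sketch may help fix notation. The shapes and the omission of masking, multiple heads, and positional encoding are simplifications, not the survey's reference implementation:

```python
# Minimal sketch of single-head scaled dot-product self-attention, the
# core component of the vanilla transformer reviewed in this survey.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilized exponent
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])  # (seq_len, seq_len) similarities
    return softmax(scores) @ v               # attention-weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                  # 4 tokens, d_model = 8
w = [rng.normal(size=(8, 8)) for _ in range(3)]
print(self_attention(x, *w).shape)           # (4, 8)
```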
Abstract: To address the lack of datasets of maneuvering aircraft targets, this paper uses kinematic modeling to generate a rich trajectory dataset, providing the data support needed for network training. To overcome the difficulty of building kinematic models for trajectory prediction and the difficulty time-series prediction methods have in extracting spatio-temporal features, an aircraft target trajectory prediction method combining a Transformer encoder with a Long Short-Term Memory (LSTM) network, the Transformer-Encoder-LSTM model, is proposed. The new model simultaneously provides the complementary historical information of the LSTM and the attention-based representations of the Transformer encoder module, improving model capability. A comparative analysis against several classical neural network models shows that, on this dataset, the new method reduces the average displacement error to 0.22, significantly outperforming the 0.35 of the CNN-LSTM-Attention model. Compared with other networks, the algorithm extracts hidden features from complex trajectories, remains robust on complex trajectories involving continuous and high-maneuver turns, and improves prediction accuracy for complex trajectories.
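A hedged sketch of how a Transformer encoder and an LSTM can be combined for trajectory prediction, in the spirit of the model above, follows. The layer sizes, the encoder-then-LSTM fusion order, the omission of positional encoding, and the 3-D position output are assumptions rather than the paper's stated configuration:

```python
# Hedged sketch of a Transformer-Encoder-LSTM trajectory predictor.
# Hyperparameters and architecture details are illustrative assumptions.
import torch
import torch.nn as nn

class TransformerEncoderLSTM(nn.Module):
    def __init__(self, in_dim=3, d_model=64, nhead=4, enc_layers=2, hidden=64):
        super().__init__()
        self.proj = nn.Linear(in_dim, d_model)       # embed (x, y, z) points
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, enc_layers)  # attention view
        self.lstm = nn.LSTM(d_model, hidden, batch_first=True)   # sequential view
        self.head = nn.Linear(hidden, in_dim)        # next-position regression

    def forward(self, traj):                         # traj: (batch, seq, 3)
        h = self.encoder(self.proj(traj))            # attention-based features
        out, _ = self.lstm(h)                        # temporal summary
        return self.head(out[:, -1])                 # predicted next point

model = TransformerEncoderLSTM()
history = torch.randn(8, 20, 3)                      # 8 trajectories, 20 steps
print(model(history).shape)                          # torch.Size([8, 3])
```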