Abstract: Mainstream Transformer networks compute self-attention only within each input patch and neglect the information interaction between different patches, and their single input scale blurs local feature details. To address these problems, this paper proposes ConvFormer, a Transformer-based backbone network for visual tasks. ConvFormer aggregates semantic information across multi-scale patches through the proposed Channel-Shuffle and Multi-Scale attention (CSMS) module and Dynamic Relative Position Coding (DRPC) module, and introduces depthwise convolution into the feed-forward network to strengthen local modeling. Image classification, object detection, and semantic segmentation experiments are conducted on the public datasets ImageNet-1K, COCO 2017, and ADE20K, respectively. Compared with the best comparably sized networks on these tasks, RegNetY-4G, Swin-Tiny, and ResNet50, ConvFormer-Tiny improves accuracy by 0.3%, 1.4%, and 0.5%, respectively.
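The abstract names the building blocks but not their implementation. As a rough illustration only, the sketch below shows a ShuffleNet-style channel shuffle and a feed-forward block that inserts a depthwise convolution between the two pointwise layers, the kind of local-modeling enhancement the abstract describes; the names ConvFFN and channel_shuffle, the 3x3 kernel, and all sizes are assumptions, not the paper's code.

```python
# Minimal sketch, assuming a PVT/LocalViT-style depthwise-conv FFN;
# NOT the authors' ConvFormer implementation.
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    # x: (B, C, H, W); interleave channels across groups (ShuffleNet-style)
    B, C, H, W = x.shape
    x = x.reshape(B, groups, C // groups, H, W)
    return x.transpose(1, 2).reshape(B, C, H, W)

class ConvFFN(nn.Module):
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)                 # pointwise expansion
        self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3,
                                padding=1, groups=hidden_dim)  # depthwise 3x3 conv
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden_dim, dim)                 # pointwise projection

    def forward(self, x, H, W):
        # x: (B, N, C) token sequence, N = H * W patches
        B, N, C = x.shape
        x = self.fc1(x)
        # reshape tokens back to a 2-D feature map for the depthwise conv
        x = x.transpose(1, 2).reshape(B, -1, H, W)
        x = self.dwconv(x)
        x = x.flatten(2).transpose(1, 2)
        return self.fc2(self.act(x))

# usage sketch: 14x14 patches with 96 channels
tokens = torch.randn(2, 14 * 14, 96)
out = ConvFFN(dim=96, hidden_dim=384)(tokens, H=14, W=14)   # (2, 196, 96)
```

The depthwise convolution adds a cheap spatial mixing step inside the FFN, which is one common way to recover local detail that patch-wise self-attention alone misses.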
Abstract: Because the Self-Attention mechanism of the Transformer has excellent representational power, many researchers have proposed image processing models based on Self-Attention and achieved great success. However, conventional Self-Attention-based image classification networks cannot balance global information against computational complexity, which limits the wider application of Self-Attention. This paper proposes an effective and scalable attention module, Local Neighbor Global Self-Attention (LNG-SA), which enables interaction among local, neighbor, and global information at any stage. By repeatedly cascading LNG-SA modules, a new network called LNG-Transformer is designed. The network adopts a hierarchical structure, is highly flexible, and its computational complexity is linear in image resolution. The properties of the LNG-SA module allow LNG-Transformer to exchange local, neighbor, and global information even in the early high-resolution stages, yielding higher efficiency and stronger learning capability. Experimental results show that LNG-Transformer performs well on image classification tasks.
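The abstract does not specify how LNG-SA is built. The toy sketch below only illustrates the general idea of letting window-local tokens attend to a fixed set of pooled global tokens at every stage, at a cost linear in the number of tokens; the class LocalGlobalAttention, the pooling strategy, and all sizes are assumptions and do not reproduce the authors' module (in particular, its notion of neighbor information is omitted).

```python
# Toy sketch of local + global token mixing; NOT the LNG-SA implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalGlobalAttention(nn.Module):
    def __init__(self, dim, window=7, global_grid=7):
        super().__init__()
        self.window = window
        self.global_grid = global_grid
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x, H, W):
        # x: (B, H*W, C); H and W must be divisible by the window size
        B, N, C = x.shape
        w, g = self.window, self.global_grid
        feat = x.transpose(1, 2).reshape(B, C, H, W)
        # local tokens: non-overlapping w x w windows
        win = F.unfold(feat, kernel_size=w, stride=w)                    # (B, C*w*w, L)
        L = win.shape[-1]
        win = win.reshape(B, C, w * w, L).permute(0, 3, 2, 1)            # (B, L, w*w, C)
        # global tokens: the whole map pooled to a fixed g x g grid
        glb = F.adaptive_avg_pool2d(feat, g).flatten(2).transpose(1, 2)  # (B, g*g, C)
        q = self.q(win) * self.scale
        k_loc, v_loc = self.kv(win).chunk(2, dim=-1)
        k_glb, v_glb = self.kv(glb).chunk(2, dim=-1)
        # every window attends to its own tokens plus the shared global tokens
        k = torch.cat([k_loc, k_glb.unsqueeze(1).expand(-1, L, -1, -1)], dim=2)
        v = torch.cat([v_loc, v_glb.unsqueeze(1).expand(-1, L, -1, -1)], dim=2)
        attn = (q @ k.transpose(-2, -1)).softmax(dim=-1)                 # (B, L, w*w, w*w+g*g)
        out = attn @ v                                                   # (B, L, w*w, C)
        # fold the windows back into a token sequence
        out = out.permute(0, 3, 2, 1).reshape(B, C * w * w, L)
        out = F.fold(out, output_size=(H, W), kernel_size=w, stride=w)
        return self.proj(out.flatten(2).transpose(1, 2))

# usage sketch: 28x28 tokens, 7x7 windows, 7x7 global grid
x = torch.randn(2, 28 * 28, 64)
y = LocalGlobalAttention(dim=64)(x, H=28, W=28)   # (2, 784, 64)
```

Because both the window size and the global grid are fixed, the per-token cost is constant and the total cost scales linearly with the number of tokens, which is the kind of scaling behavior the abstract claims for LNG-Transformer.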
Abstract: Wavelet transform is used to analyze the scaling behavior of temperature data (a passive scalar) in Rayleigh-Bénard convection flow from two aspects. First, using the method of extended self-similarity (ESS), the scaling exponent obtained is found to agree well with that obtained from temperature data measured in a wind-tunnel experiment. Second, we propose a newly defined formula, based on the wavelet transform, that determines the scaling exponent ξ(q) of the temperature data. The results demonstrate that ξ(q) can be extracted correctly using the method known as the wavelet transform maximum modulus (WTMM).
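As a concrete illustration of the ESS idea (not the authors' analysis), the sketch below estimates relative scaling exponents by regressing one structure function against a reference-order structure function instead of against the separation itself; the synthetic signal, the reference order q = 3, and the separation range are placeholders.

```python
# Minimal ESS sketch on synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(2 ** 16))   # stand-in "temperature" series

def structure_function(x, q, separations):
    """S_q(r) = < |x(i + r) - x(i)|^q > for each separation r."""
    return np.array([np.mean(np.abs(x[r:] - x[:-r]) ** q) for r in separations])

separations = np.unique(np.logspace(0, 3, 30).astype(int))
q_ref = 3                                           # reference order used by ESS
s_ref = structure_function(signal, q_ref, separations)

for q in (1, 2, 4, 5, 6):
    s_q = structure_function(signal, q, separations)
    # ESS: the slope of log S_q versus log S_ref gives xi(q) / xi(q_ref)
    slope = np.polyfit(np.log(s_ref), np.log(s_q), 1)[0]
    print(f"q = {q}:  xi(q)/xi({q_ref}) ~ {slope:.3f}")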
Abstract: Wavelet transform was used to analyze the scaling law of temperature data (a passive scalar) in Rayleigh-Bénard convection flow from two aspects. The first was to use the method of extended self-similarity, first presented by Benzi et al., to study the scaling exponent of the temperature data. The results show that the inertial range is much wider than the one determined directly from the conventional structure function, and the scaling exponent obtained agrees well with that obtained from temperature data measured in a wind-tunnel experiment. The second was to extend the formula proposed by A. Arneodo et al. for extracting the scaling exponent ζ(q) of velocity data to temperature data, giving a newly defined formula, also based on the wavelet transform, that determines the scaling exponent ξ(q) of the temperature data. The results demonstrate that ξ(q) can be extracted correctly using the method known as WTMM (wavelet transform maximum modulus).
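For orientation only, the sketch below shows the core of the WTMM recipe on synthetic data: compute a continuous wavelet transform with a Mexican-hat wavelet, keep the local modulus maxima at each scale, and read a scaling exponent from the partition function of those maxima. It is not the authors' implementation, it uses a simplified L1 wavelet normalization, and it omits the chaining of maxima lines across scales used in the full WTMM formalism.

```python
# Minimal WTMM-style sketch on synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(1)
signal = np.cumsum(rng.standard_normal(2 ** 14))    # stand-in "temperature" series

def mexican_hat(t):
    return (1.0 - t ** 2) * np.exp(-0.5 * t ** 2)

def cwt_row(x, scale):
    """Wavelet coefficients at one scale (L1-normalised convolution)."""
    t = np.arange(-5 * scale, 5 * scale + 1)
    kernel = mexican_hat(t / scale) / scale
    return np.convolve(x, kernel, mode="same")

scales = np.unique(np.logspace(1, 2.5, 15).astype(int))
orders = (1, 2, 3, 4)
log_z = {q: [] for q in orders}

for a in scales:
    w = np.abs(cwt_row(signal, a))
    # local modulus maxima along position at this scale
    maxima = w[1:-1][(w[1:-1] > w[:-2]) & (w[1:-1] > w[2:])]
    for q in orders:
        log_z[q].append(np.log(np.sum(maxima ** q)))   # partition function Z(q, a)

for q in orders:
    # Z(q, a) ~ a^tau(q): the slope of log Z against log a estimates tau(q)
    tau = np.polyfit(np.log(scales), log_z[q], 1)[0]
    print(f"q = {q}:  tau(q) ~ {tau:.2f}")
```

Restricting the partition function to the modulus maxima is what makes the estimate robust for high and negative orders, which is the practical advantage of WTMM over plain structure functions.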