Journal Articles
409,853 articles found
1. An Image Classification Model Based on Depth-wise Convolution and Vision Transformer
Authors: 张峰, 黄仕鑫, 花强, 董春茹. 《计算机科学》, CSCD, Peking University Core, 2024, No. 2, pp. 196-204 (9 pages)
Abstract: Image classification is a common visual recognition task with broad application scenarios. Traditional approaches use convolutional neural networks, but the limited receptive field of convolution makes it hard to model global relationships in an image, leading to low classification accuracy and difficulty with complex, diverse image data. To model global relationships, some researchers have applied the Transformer to image classification, but to satisfy the Transformer's serialization and parallelization requirements the image must be split into equal-sized, non-overlapping patches, which destroys local information between adjacent patches. Moreover, because the Transformer carries little prior knowledge, such models usually need pre-training on large-scale datasets, at high computational cost. To model local information between adjacent patches while fully exploiting the image's global information, this paper proposes a Depth-wise-convolution-based vision Transformer, the Efficient Pyramid Vision Transformer (EPVT), which extracts local and global information across adjacent image patches at low computational cost. EPVT has three key components: a Local Perceptron Module (LPM), a Spatial Information Fusion module (SIF), and a Convolution Feed-forward Network (CFFN). The LPM captures local correlations in the image; the SIF fuses local information between adjacent patches and exploits long-range dependencies between patches to strengthen feature representation, letting the model learn the semantic information of the output features at different dimensions; the CFFN encodes positional information and reshapes tensors. On the ImageNet-1K classification dataset, the proposed model outperforms existing vision Transformer classifiers of comparable size, reaching 82.6% accuracy and demonstrating competitiveness on large-scale datasets.
Keywords: deep learning; image classification; depth-wise convolution; vision Transformer; attention mechanism
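Entry 1's key primitive, depth-wise convolution, applies one kernel per input channel with no cross-channel mixing, which is what keeps its compute cost low compared with standard convolution. A minimal pure-Python sketch of the operation (shapes and values illustrative only, not the EPVT implementation):

```python
def depthwise_conv2d(x, kernels):
    """x: [C][H][W] input; kernels: [C][k][k], one kernel per channel.
    Valid-mode per-channel correlation, output [C][H-k+1][W-k+1]."""
    C, H, W = len(x), len(x[0]), len(x[0][0])
    k = len(kernels[0])
    out = []
    for c in range(C):  # each channel is filtered only by its own kernel
        ch = []
        for i in range(H - k + 1):
            row = []
            for j in range(W - k + 1):
                s = sum(x[c][i + u][j + v] * kernels[c][u][v]
                        for u in range(k) for v in range(k))
                row.append(s)
            ch.append(row)
        out.append(ch)
    return out
```

In practice this is usually expressed as a grouped convolution with the group count equal to the channel count.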
2. Dual-Path Vision Transformer for Computer-Aided Diagnosis of Acute Ischemic Stroke
Authors: 张桃红, 郭学强, 郑瀚, 罗继昌, 王韬, 焦力群, 唐安莹. 《电子科技大学学报》, EI/CAS/CSCD, Peking University Core, 2024, No. 2, pp. 307-314 (8 pages)
Abstract: Acute ischemic stroke is a brain dysfunction caused by disrupted blood supply to brain tissue; digital subtraction angiography (DSA) is the gold standard for diagnosing cerebrovascular disease. Based on patients' frontal and lateral DSA images, this work grades the treatment outcome of acute ischemic stroke with DPVF, a dual-path image classification model built on the Vision Transformer. To speed up computer-aided diagnosis, the model follows the lightweight design ideas of EdgeViT; to keep it lightweight while retaining high accuracy, a spatial-channel self-attention module is proposed that helps the Transformer capture more comprehensive feature information and improves expressiveness. In addition, for fusing the features of DPVF's two branches, a cross-attention module cross-fuses the two branch outputs so the model extracts richer features and performs better. Experiments show DPVF reaches 98.5% accuracy on the test set, meeting practical requirements.
Keywords: acute ischemic stroke; vision Transformer; dual-branch network; feature fusion
3. Collaborative Classification of Hyperspectral and LiDAR Data with CNN-Transformer and Contrastive Learning
Authors: 吴海滨, 戴诗语, 王爱丽, 岩堀祐之, 于效宇. 《光学精密工程》, EI/CAS/CSCD, Peking University Core, 2024, No. 7, pp. 1087-1100 (14 pages)
Abstract: To address cross-modal information representation and feature alignment in multimodal classification of hyperspectral images (HSI) and LiDAR data, this paper proposes a Contrastive Learning based CNN-Transformer Network (CLCT-Net) for collaborative HSI-LiDAR classification. CLCT-Net obtains features shared across modalities through a common-feature extraction module built from ConvNeXt V2 blocks, solving the problem of semantic alignment between heterogeneous sensor data. A dual-branch HSI encoder containing spatial-channel and spectral-context branches, together with a LiDAR encoder incorporating frequency-domain self-attention, yields richer feature representations. Ensemble contrastive learning is used for classification, further improving the accuracy of multimodal collaborative classification. Experiments on the Houston 2013 and Trento datasets show that, compared with other HSI and LiDAR classification models, the proposed model achieves higher land-cover classification accuracy, 92.01% and 98.90% respectively, realizing deep mining and collaborative extraction of cross-modal data features.
Keywords: hyperspectral image; LiDAR data; Transformer; convolutional neural network; contrastive learning
4. Multisource Remote Sensing Image Classification Based on Transformer and Dynamic 3D Convolution
Authors: 高峰, 孟德森, 解正源, 亓林, 董军宇. 《北京航空航天大学学报》, EI/CAS/CSCD, Peking University Core, 2024, No. 2, pp. 606-614 (9 pages)
Abstract: Multisource remote sensing data are complementary and synergistic. In recent years deep-learning methods have made progress in multisource remote sensing image classification, but key difficulties remain: feature representations across sources are inconsistent and hard to fuse, and neural networks with a static inference paradigm lack adaptability to different land-cover classes. To solve these problems, a multisource remote sensing image classification model based on a cross-modal Transformer and multi-scale dynamic 3D convolution is proposed. To make multisource feature representations more consistent, a Transformer-based fusion module uses its strong attention modeling to mine interactions between hyperspectral and LiDAR features; to make feature extraction more adaptive to different land-cover classes, a multi-scale dynamic 3D convolution module injects multi-scale information of the input features into the modulation of convolution kernels. The method is validated on the Houston and Trento multisource datasets, reaching overall accuracies of 94.60% and 98.21%, at least 0.97% and 0.25% above mainstream methods such as MGA-MFN, verifying that it effectively improves multisource remote sensing classification accuracy.
Keywords: hyperspectral image; LiDAR; Transformer; multisource feature fusion; dynamic convolution
5. 4D Trajectory Prediction Based on a Transformer-GRU Network (Cited by 1)
Authors: 翟文鹏, 宋一峤, 张兆宁. 《重庆交通大学学报(自然科学版)》, CAS/CSCD, Peking University Core, 2024, No. 6, pp. 94-101 (8 pages)
Abstract: 4D trajectory prediction for aircraft, a key technology of trajectory-based operations (TBO), is of great importance. A new trajectory prediction method based on a Transformer-GRU (T-GRU) network is proposed, implementing 4D trajectory prediction with the Adamax optimizer. The Transformer's self-attention mechanism models the input sequence, while the GRU network captures the features of the time-series data; the raw trajectory data are preprocessed with resampling interpolation and median filtering to eliminate the effects of missing data and outliers on prediction; the results are evaluated with error metrics E_E, E_AT, E_CT, and E_A and compared with other common trajectory prediction methods. The results show that, compared with traditional deep-learning methods, the T-GRU-based 4D trajectory prediction model is more accurate and robust.
Keywords: traffic engineering; air traffic management; TBO; 4D trajectory prediction; deep learning
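The GRU half of the T-GRU model above maintains a hidden state through update and reset gates. A single scalar GRU step as a sketch; the parameter names (w_z, u_z, …) are ours for illustration, not the paper's:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def gru_step(x, h, p):
    """One GRU update for scalar input x and state h.
    p maps "w_z","u_z","w_r","u_r","w_h","u_h" to toy weights (biases omitted)."""
    z = sigmoid(p["w_z"] * x + p["u_z"] * h)                 # update gate
    r = sigmoid(p["w_r"] * x + p["u_r"] * h)                 # reset gate
    h_tilde = math.tanh(p["w_h"] * x + p["u_h"] * (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde                         # blend old and new
```

With all weights zero, both gates are 0.5 and the candidate is 0, so the state simply halves each step.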
6. Text Sentiment Analysis Based on TF-IDF and a Multi-Head-Attention Transformer Model (Cited by 2)
Authors: 高佳希, 黄海燕. 《华东理工大学学报(自然科学版)》, CAS/CSCD, Peking University Core, 2024, No. 1, pp. 129-136 (8 pages)
Abstract: Text sentiment analysis aims to analyze, process, summarize, and reason over subjective, emotionally colored text, and is an important task in natural language processing. Since existing computational methods cannot adequately handle text datasets of high complexity and confusability, a sentiment analysis model based on TF-IDF (Term Frequency-Inverse Document Frequency) and a multi-head-attention Transformer is proposed. In the text preprocessing stage, the TF-IDF algorithm performs an initial screening of the words that most influence sentiment polarity, discarding common stop words and domain-specific proper nouns that have little influence on the text's sentiment. A multi-head-attention Transformer encoder then performs feature extraction, capturing the important semantic information within the text and improving the model's semantic analysis and generalization ability. The model achieves 98.17% accuracy on a multi-domain, multi-type review corpus.
Keywords: text sentiment analysis; natural language processing; multi-head attention mechanism; TF-IDF algorithm; Transformer model
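The TF-IDF screening step described above weights each word by its in-document frequency times the log-inverse of its document frequency, so words that appear in every document score zero. A small sketch (raw TF and unsmoothed IDF; the paper's exact variant is not specified):

```python
import math

def tf_idf(docs):
    """docs: list of token lists. Returns one {term: weight} dict per doc."""
    N = len(docs)
    df = {}                       # document frequency of each term
    for d in docs:
        for t in set(d):
            df[t] = df.get(t, 0) + 1
    out = []
    for d in docs:
        w = {}
        for t in set(d):
            tf = d.count(t) / len(d)          # raw term frequency
            w[t] = tf * math.log(N / df[t])   # idf = log(N / df)
        out.append(w)
    return out
```

Terms present in all N documents get idf = log(1) = 0 and are effectively filtered out, which is the stop-word-suppressing behavior the paper relies on.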
7. FMA-DETR: An Encoder-Free Transformer Object Detection Method
Authors: 周全, 倪英豪, 莫玉玮, 康彬, 张索非. 《信号处理》, CSCD, Peking University Core, 2024, No. 6, pp. 1160-1170 (11 pages)
Abstract: DETR was the first vision model to apply the Transformer to object detection. In the DETR architecture, the Transformer encoder re-encodes image features that are already highly encoded, which to some extent duplicates network functionality. Moreover, the encoder's deeply stacked layers and huge parameter count make network optimization difficult and model convergence slow. This paper designs an encoder-free Transformer object detection network. Without a Transformer encoder, the model has fewer parameters, lower computation, and faster convergence than DETR. However, directly removing the encoder weakens the network's expressiveness: the Transformer decoder can no longer attend to the object-bearing features among the vast number of image features, sharply degrading detection performance. To mitigate this, a fusion-feature mixing attention (FMA) mechanism is proposed, which compensates for the lost expressive power through adaptive feature mixing and channel cross-attention; applied in the Transformer decoder, it offsets the performance drop caused by removing the encoder. On the MS-COCO dataset, the resulting model (called FMA-DETR) performs comparably to DETR while converging faster with fewer parameters and less computation. Extensive ablation experiments verify the effectiveness of the proposed method.
Keywords: object detection; Transformer; encoder; DETR; mixing attention
8. Transformer-Based State Recognition for DC/DC Board-Level Verification
Authors: 于海波, 李杰, 胡陈君, 夏俊辉, 张伟. 《集成电路与嵌入式系统》, 2024, No. 5, pp. 94-100 (7 pages)
Abstract: To meet the high-precision, high-reliability demands of aerospace products, independently controllable components, domestic chips, and application adaptability verification are essential, and an FPGA-based board-level comprehensive test platform for domestic DC/DC converters is designed. For real-time monitoring of the working state of DC/DC application boards during long-duration thermal-environment adaptability board-level verification, a Transformer-based intelligent recognition algorithm is proposed. DC/DC output sequences under six states (no load, 3 A load current, 5 A load current, high input voltage, low input voltage, and short circuit) are fed to a Transformer model, whose attention mechanism extracts global attention features of each sequence, and the deep-learning model is trained on them. Experiments show the Transformer model reaches 99.2% recognition accuracy on the six-state dataset, exhibiting good classification and monitoring performance and engineering application value.
Keywords: FPGA; board-level test; state recognition; deep learning; Transformer model
9. DRT Net: A Feature-Enhancement-Oriented Dual-Residual Res-Transformer Model for Pneumonia Recognition
Authors: 周涛, 彭彩月, 杜玉虎, 党培, 刘凤珍, 陆惠玲. 《光学精密工程》, EI/CAS/CSCD, Peking University Core, 2024, No. 5, pp. 714-726 (13 pages)
Abstract: In chest X-ray images, lesion regions are small and complex in shape, with blurred boundaries against normal tissue, so lesion features in pneumonia images are extracted insufficiently. A feature-enhancement-oriented dual-residual Res-Transformer pneumonia recognition model is proposed, with three feature-enhancement strategies to strengthen the model's feature extraction. A group-attention dual-residual module (GADRM) performs efficient feature fusion via a dual-residual structure combined with channel shuffle, channel attention, and spatial attention, enhancing extraction of lesion-region features. At the higher layers of the network, a global-local feature extraction module (GLFEM) combines the strengths of CNNs and Transformers so the network fully extracts global and local image features, obtains global features of the high-level semantic information, and further strengthens semantic feature extraction. A cross-layer dual-attention feature fusion module (CDAFFM) fuses spatial information from the shallow network with channel information from the deep network, enhancing the extracted cross-layer features. Ablation and comparison experiments on the COVID-19 CHEST X-RAY dataset show accuracy, precision, recall, F1, and AUC of 98.41%, 94.42%, 94.20%, 94.26%, and 99.65%, respectively. DRT Net can help radiologists diagnose pneumonia from chest X-rays and has important clinical value.
Keywords: pneumonia recognition; X-ray image; feature enhancement; dual-residual structure; Transformer
10. A Feature-Attention Transformer Module for Identity Recognition from 3D Lip-Motion Sequences
Authors: 骈鑫洋, 王瑜, 张洁. 《计算机工程与应用》, CSCD, Peking University Core, 2024, No. 7, pp. 141-146 (6 pages)
Abstract: Lip motion is an emerging biometric; three-dimensional (3D) lip point-cloud sequences contain the true spatial structure and motion of the lips and have become an important biometric for individual identity recognition. However, the unordered, unstructured nature of 3D point clouds makes spatiotemporal feature extraction very difficult. A deep-learning network model for 3D lip-sequence identity recognition is therefore proposed. The network uses a four-layer improved PointNet++ as its backbone, extracting features hierarchically. To learn more spatiotemporal features carrying identity information, a dynamic lip-feature-attention Transformer module is designed and attached after each PointNet++ layer; it learns the correlations between different feature maps and effectively captures context across frames of the video sequence. Compared with Transformers built on other attention mechanisms, the proposed module has fewer parameters. Experiments on the S3DFM-FP and S3DFM-VP datasets show the proposed network performs well on identity recognition from 3D lip point-cloud sequences, even on the pose-unconstrained S3DFM-VP dataset.
Keywords: speaker recognition; Transformer; PointNet++; 3D lip point cloud
11. Bearing Fault Diagnosis Based on a DRSN Fused with a Transformer Encoder
Authors: 陈松, 陈文华, 张文广. 《自动化与仪表》, 2024, No. 5, pp. 103-108 (6 pages)
Abstract: To address low diagnostic accuracy and weak generalization for bearing faults in complex working conditions, a diagnosis method based on a deep residual shrinkage network (DRSN) fused with a Transformer encoder is proposed. First, the DRSN's soft-thresholding module automatically removes noise from the vibration signal and an attention mechanism enhances the extracted features; then a Transformer encoder further addresses the long-term dependencies in the vibration signal; finally, a Softmax function performs multi-fault-mode recognition. The model is tested on the Case Western Reserve University bearing dataset at different noise levels. Experimental results show the method classifies bearing faults with higher accuracy in strong-noise environments and trains faster.
Keywords: fault diagnosis; bearing; deep residual shrinkage network; Transformer encoder
12. Depth-Guided Vision Transformer With Normalizing Flows for Monocular 3D Object Detection
Authors: Cong Pan, Junran Peng, Zhaoxiang Zhang. 《IEEE/CAA Journal of Automatica Sinica》, SCIE/EI/CSCD, 2024, No. 3, pp. 673-689 (17 pages)
Abstract: Monocular 3D object detection is challenging due to the lack of accurate depth information. Some methods estimate pixel-wise depth maps from off-the-shelf depth estimators and then use them as an additional input to augment the RGB images. Depth-based methods attempt to convert estimated depth maps to pseudo-LiDAR and then use LiDAR-based object detectors, or focus on the perspective of image and depth fusion learning. However, they demonstrate limited performance and efficiency as a result of depth inaccuracy and complex fusion modes with convolutions. Different from these approaches, our proposed depth-guided vision transformer with normalizing flows (NF-DVT) network uses normalizing flows to build priors in depth maps to achieve more accurate depth information. Then we develop a novel Swin-Transformer-based backbone with a fusion module to process RGB image patches and depth map patches with two separate branches and fuse them using cross-attention to exchange information with each other. Furthermore, with the help of pixel-wise relative depth values in depth maps, we develop new relative position embeddings in the cross-attention mechanism to capture more accurate sequence ordering of input tokens. Our method is the first Swin-Transformer-based backbone architecture for monocular 3D object detection. Experimental results on the KITTI and the challenging Waymo Open datasets show the effectiveness of our proposed method and superior performance over previous counterparts.
Keywords: monocular 3D object detection; normalizing flows; Swin Transformer
13. A Comprehensive Survey of Recent Transformers in Image, Video and Diffusion Models
Authors: Dinh Phu Cuong Le, Dong Wang, Viet-Tuan Le. 《Computers, Materials & Continua》, SCIE/EI, 2024, No. 7, pp. 37-60 (24 pages)
Abstract: Transformer models have emerged as dominant networks for various tasks in computer vision compared to Convolutional Neural Networks (CNNs). The transformers demonstrate the ability to model long-range dependencies by utilizing a self-attention mechanism. This study aims to provide a comprehensive survey of recent transformer-based approaches in image and video applications, as well as diffusion models. We begin by discussing existing surveys of vision transformers and comparing them to this work. Then, we review the main components of a vanilla transformer network, including the self-attention mechanism, feed-forward network, position encoding, etc. In the main part of this survey, we review recent transformer-based models in three categories: Transformer for downstream tasks, Vision Transformer for Generation, and Vision Transformer for Segmentation. We also provide a comprehensive overview of recent transformer models for video tasks and diffusion models. We compare the performance of various hierarchical transformer networks for multiple tasks on popular benchmark datasets. Finally, we explore some future research directions to further improve the field.
Keywords: transformer; vision transformer; self-attention; hierarchical transformer; diffusion models
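The vanilla self-attention mechanism this survey reviews computes softmax(QK^T/√d)V over the token sequence. A minimal sketch that, for brevity, takes identity projections so Q = K = V = X; real transformers use learned W_q, W_k, W_v and multiple heads:

```python
import math

def softmax(row):
    m = max(row)                       # subtract max for numerical stability
    e = [math.exp(v - m) for v in row]
    s = sum(e)
    return [v / s for v in e]

def self_attention(X):
    """X: list of token vectors. Returns softmax(X X^T / sqrt(d)) X."""
    d = len(X[0])
    scores = [[sum(q[i] * k[i] for i in range(d)) / math.sqrt(d)
               for k in X] for q in X]          # scaled dot products
    A = [softmax(r) for r in scores]            # attention weights per token
    return [[sum(A[i][j] * X[j][t] for j in range(len(X)))
             for t in range(d)] for i in range(len(A))]
```

When all tokens are identical the attention weights are uniform and each output equals the shared input vector.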
14. Transformer-Based Cloud Detection Method for High-Resolution Remote Sensing Imagery
Authors: Haotang Tan, Song Sun, Tian Cheng, Xiyuan Shu. 《Computers, Materials & Continua》, SCIE/EI, 2024, No. 7, pp. 661-678 (18 pages)
Abstract: Cloud detection from satellite and drone imagery is crucial for applications such as weather forecasting and environmental monitoring. Addressing the limitations of conventional convolutional neural networks, we propose an innovative transformer-based method. This method leverages transformers, which are adept at processing data sequences, to enhance cloud detection accuracy. Additionally, we introduce a Cyclic Refinement Architecture that improves the resolution and quality of feature extraction, thereby aiding in the retention of critical details often lost during cloud detection. Our extensive experimental validation shows that our approach significantly outperforms established models, excelling in high-resolution feature extraction and precise cloud segmentation. By integrating Positional Visual Transformers (PVT) with this architecture, our method advances high-resolution feature delineation and segmentation accuracy. Ultimately, our research offers a novel perspective for surmounting traditional challenges in cloud detection and contributes to the advancement of precise and dependable image analysis across various domains.
Keywords: cloud; transformer; image segmentation; remotely sensed imagery; pyramid vision transformer
15. A Small-Intestine Capsule Endoscopy Image Classification Method Based on the Swin Transformer Network and Adapt-RandAugment Data Augmentation
Authors: 聂瑞, 刘学思, 童飞, 邓远阳, 刘相花, 杨利, 张和华, 段傲文. 《医疗卫生装备》, CAS, 2024, No. 6, pp. 9-16 (8 pages)
Abstract: Objective: To improve the accuracy of small-intestine lesion classification, a capsule endoscopy image classification method based on the Swin Transformer network and an Adapt-RandAugment data augmentation method is proposed. Methods: Adapt-RandAugment is derived from the RandAugment augmentation sub-policies under the principle that augmented capsule endoscopy images must neither lose features nor be distorted. On the public Kvasir-Capsule capsule endoscopy dataset, a Swin Transformer network is trained with Adapt-RandAugment, with the ResNet152 and DenseNet161 CNNs as baselines, to verify the classification performance of the combination. Results: The proposed method achieves a macro average precision (MAC-PRE) of 0.3832, macro average recall (MAC-REC) of 0.3148, and macro F1 score (MAC-F1-S) of 0.2905; micro average precision (MIC-PRE), micro average recall (MIC-REC), and micro F1 score (MIC-F1-S) are all 0.7553; and the Matthews correlation coefficient (MCC) is 0.4523, all better than ResNet152 and DenseNet161. Conclusion: The method classifies small-intestine capsule endoscopy images well, with high recognition accuracy.
Keywords: Swin Transformer network; Adapt-RandAugment; data augmentation; capsule endoscopy; image classification; small-intestine lesions
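RandAugment, which the Adapt-RandAugment method above builds on, samples a fixed number of augmentation ops uniformly at random and applies each at a shared magnitude. A sketch of the policy-sampling step only; the op names are illustrative, and the paper's lesion-preserving constraints are not modeled here:

```python
import random

def rand_augment_policy(ops, n=2, magnitude=9, seed=None):
    """Pick n ops uniformly (with replacement) and pair each with the
    shared magnitude, as in RandAugment's (N, M) parameterization."""
    rng = random.Random(seed)
    return [(rng.choice(ops), magnitude) for _ in range(n)]

# Illustrative op names only; a real pipeline would map each name to an
# image transform parameterized by the magnitude.
policy = rand_augment_policy(["rotate", "contrast", "equalize"], seed=0)
```

Adapt-RandAugment, per the abstract, additionally restricts the op pool so augmented endoscopy images keep their diagnostic features undistorted.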
16. Price Prediction of Power Transformer Materials Based on CEEMD and GRU
Authors: Yan Huang, Yufeng Hu, Liangzheng Wu, Shangyong Wen, Zhengdong Wan. 《Global Energy Interconnection》, EI/CSCD, 2024, No. 2, pp. 217-227 (11 pages)
Abstract: The rapid growth of the Chinese economy has fueled the expansion of power grids. Power transformers are key equipment in power grid projects, and their price changes have a significant impact on cost control. However, the prices of power transformer materials manifest as nonsmooth and nonlinear sequences. Hence, estimating the acquisition costs of power grid projects is difficult, hindering the normal operation of power engineering construction. To more accurately predict the price of power transformer materials, this study proposes a method based on complementary ensemble empirical mode decomposition (CEEMD) and a gated recurrent unit (GRU) network. First, CEEMD decomposed the price series into multiple intrinsic mode functions (IMFs). The IMFs were clustered into several aggregated sequences based on the sample entropy of each IMF. Then, an empirical wavelet transform (EWT) was applied to the aggregated sequences with large sample entropy, and the resulting subsequences were predicted by the GRU model; the GRU model directly predicted the aggregated sequences with small sample entropy. We used authentic historical pricing data for power transformer materials to validate the proposed approach. The empirical findings demonstrated the efficacy of our method across both datasets, with mean absolute percentage errors (MAPEs) of less than 1% and 3%. This approach holds significant reference value for future research on power transformer material price prediction.
Keywords: power transformer material; price prediction; complementary ensemble empirical mode decomposition; gated recurrent unit; empirical wavelet transform
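The MAPE figures quoted above (<1% and <3%) follow the standard definition: the mean of |actual − predicted| / |actual|, expressed as a percentage. As a sketch:

```python
def mape(actual, pred):
    """Mean absolute percentage error over paired series (actual values
    must be nonzero)."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)
```

For example, predictions of 110 and 190 against actuals of 100 and 200 give individual errors of 10% and 5%, hence a MAPE of 7.5%.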
17. TransTM: A Device-Free Method Based on a Time-Streaming Multiscale Transformer for Human Activity Recognition
Authors: Yi Liu, Weiqing Huang, Shang Jiang, Bobai Zhao, Shuai Wang, Siye Wang, Yanfang Zhang. 《Defence Technology(防务技术)》, SCIE/EI/CAS/CSCD, 2024, No. 2, pp. 619-628 (10 pages)
Abstract: RFID-based human activity recognition (HAR) attracts attention due to its convenience, noninvasiveness, and privacy protection. Existing RFID-based HAR methods use modeling, CNN, or LSTM to extract features effectively. Still, they have shortcomings: 1) requiring complex hand-crafted data cleaning processes and 2) only addressing single-person activity recognition based on specific RF signals. To solve these problems, this paper proposes a novel device-free method based on a Time-streaming Multiscale Transformer, called TransTM. This model leverages the Transformer's powerful data fitting capabilities to take raw RFID RSSI data as input without pre-processing. Concretely, we propose a multiscale convolutional hybrid Transformer to capture behavioral features that recognizes single-human activities and human-to-human interactions. Compared with existing CNN- and LSTM-based methods, the Transformer-based method has more data fitting power, generalization, and scalability. Furthermore, using RF signals, our method achieves an excellent classification effect on human behavior-based classification tasks. Experimental results on actual RFID datasets show that this model achieves a high average recognition accuracy (99.1%). The dataset we collected for detecting RFID-based indoor human activities will be published.
Keywords: human activity recognition; RFID; transformer
18. Time and Space Efficient Multi-Model Convolution Vision Transformer for Tomato Disease Detection from Leaf Images with Varied Backgrounds
Authors: Ankita Gangwar, Vijaypal Singh Dhaka, Geeta Rani, Shrey Khandelwal, Ester Zumpano, Eugenio Vocaturo. 《Computers, Materials & Continua》, SCIE/EI, 2024, No. 4, pp. 117-142 (26 pages)
Abstract: A consumption of 46.9 million tons of processed tomatoes was reported in 2022, which is merely 20% of the total consumption. An increase of 3.3% in consumption is predicted from 2024 to 2032. Tomatoes are also rich in iron, potassium, antioxidant lycopene, and vitamins A, C and K, which are important for preventing cancer and maintaining blood pressure and glucose levels. Thus, tomatoes are globally important due to their widespread usage and nutritional value. To face the high demand for tomatoes, it is mandatory to investigate the causes of crop loss and minimize them. Diseases are one of the major causes that adversely affect crop yield and degrade the quality of the tomato fruit. This leads to financial losses and affects the livelihood of farmers. Therefore, automatic disease detection at any stage of the tomato plant is a critical issue. Deep learning models introduced in the literature show promising results, but the models are difficult to implement on handheld devices such as mobile phones due to high computational costs and a large number of parameters. Also, most of the models proposed so far work efficiently only for images with plain backgrounds, where a clear demarcation exists between the background and the leaf region. Moreover, the existing techniques lack in recognizing multiple diseases on the same leaf. To address these concerns, we introduce a customized deep learning-based convolution vision transformer model. The model achieves an accuracy of 93.51% for classifying tomato leaf images with plain as well as complex backgrounds into 13 categories. It requires storage of merely 5.8 MB, which is 98.93%, 98.33%, and 92.64% less than state-of-the-art visual geometry group, vision transformer, and convolution vision transformer models, respectively. Its training time of 44 min is 51.12%, 74.12%, and 57.7% lower than the above-mentioned models. Thus, it can be deployed on Internet of Things (IoT)-enabled devices, drones, or mobile devices to assist farmers in the real-time monitoring of tomato crops. The periodic monitoring promotes timely action to prevent the spread of diseases and reduce crop loss.
Keywords: tomato; disease; transformer; deep learning; mobile devices
19. SMSTracker: A Self-Calibration Multi-Head Self-Attention Transformer for Visual Object Tracking
Authors: Zhongyang Wang, Hu Zhu, Feng Liu. 《Computers, Materials & Continua》, SCIE/EI, 2024, No. 7, pp. 605-623 (19 pages)
Abstract: Visual object tracking plays a crucial role in computer vision. In recent years, researchers have proposed various methods to achieve high-performance object tracking. Among these, methods based on Transformers have become a research hotspot due to their ability to globally model and contextualize information. However, current Transformer-based object tracking methods still face challenges such as low tracking accuracy and the presence of redundant feature information. In this paper, we introduce the self-calibration multi-head self-attention Transformer (SMSTracker) as a solution to these challenges. It employs a hybrid tensor decomposition self-organizing multi-head self-attention transformer mechanism, which not only compresses and accelerates Transformer operations but also significantly reduces redundant data, thereby enhancing the accuracy and efficiency of tracking. Additionally, we introduce a self-calibration attention fusion block to resolve common issues of attention ambiguities and inconsistencies found in traditional tracking methods, ensuring the stability and reliability of tracking performance across various scenarios. By integrating the hybrid tensor decomposition approach with the self-organizing multi-head self-attentive transformer mechanism, SMSTracker enhances the efficiency and accuracy of the tracking process. Experimental results show that SMSTracker achieves competitive performance in visual object tracking, demonstrating its potential to provide more robust and efficient tracking solutions in real-world applications.
Keywords: visual object tracking; tensor decomposition; transformer; self-attention
20. An Enhanced Multiview Transformer for Population Density Estimation Using Cellular Mobility Data in Smart City
Authors: Yu Zhou, Bosong Lin, Siqi Hu, Dandan Yu. 《Computers, Materials & Continua》, SCIE/EI, 2024, No. 4, pp. 161-182 (22 pages)
Abstract: This paper addresses the problem of predicting population density leveraging cellular station data. As wireless communication devices are commonly used, cellular station data has become integral for estimating population figures and studying their movement, thereby implying significant contributions to urban planning. However, existing research grapples with issues pertinent to preprocessing base station data and the modeling of population prediction. To address this, we propose methodologies for preprocessing cellular station data to eliminate any irregular or redundant data. The preprocessing reveals a distinct cyclical characteristic and high-frequency variation in population shift. Further, we devise a multi-view enhancement model grounded on the Transformer (MVformer), targeting the improvement of the accuracy of extended time-series population predictions. Comparative experiments, conducted on the above-mentioned population dataset using four alternate Transformer-based models, indicate that our proposed MVformer model enhances prediction accuracy by approximately 30% for both univariate and multivariate time-series prediction assignments. The performance of this model in tasks pertaining to population prediction exhibits commendable results.
Keywords: population density estimation; smart city; transformer; multiview learning