Journal Articles
8,551 articles found.
1. Application of ConvNeXt and Stacked BiLSTM-Self-Attention to Remaining Useful Life Prediction of Bearings
Authors: 张印文, 王琳霖, 薛文科, 梁文婕. 《机电工程》 (CAS, 北大核心), 2024, Issue 11, pp. 1977-1985, 1994 (10 pages).
In remaining useful life (RUL) prediction for rolling bearings, traditional methods suffer from poor robustness and low accuracy. Recent advances in deep learning offer new ways to address these problems. To further improve prediction accuracy, a rolling-bearing life prediction method based on a ConvNeXt network, a stacked bidirectional long short-term memory network (SBiLSTM), and a self-attention mechanism (Self-Attention) is proposed. First, the continuous wavelet transform (CWT) is used to construct time-frequency images of the vibration signals so as to better capture their time-domain and frequency-domain characteristics. The time-frequency images are then fed into the ConvNeXt network, where convolution, pooling, and layer normalization extract the key features. Finally, the extracted features are passed to the SBiLSTM-Self-Attention module, which further extracts temporal information and assigns feature weights. The method is validated on the PHM2012 challenge dataset, and its root mean square error (RMSE) and mean absolute error (MAE) are analyzed experimentally. The results show that the method achieves an average RMSE of 0.031; compared with three other methods, a convolutional neural network (CNN), a deep residual bidirectional gated recurrent unit (DRN-BiGRU), and a deep convolutional self-attention bidirectional gated recurrent unit (DCNN-Self-Attention-BiGRU), its average RMSE is 79%, 74%, and 55% lower and its MAE is 78%, 73%, and 53% lower, respectively, indicating good performance in rolling-bearing RUL prediction.
Keywords: rolling bearing; remaining useful life prediction; ConvNeXt network; stacked bidirectional LSTM network; self-attention mechanism; deep learning; continuous wavelet transform
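The processing chain described in this entry (CWT time-frequency images, a ConvNeXt feature extractor, a stacked BiLSTM, self-attention, and an RUL regression head) can be sketched roughly as below. The torchvision ConvNeXt-Tiny backbone, the layer sizes, and the mean-pooled scalar output head are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of a CWT-image -> ConvNeXt -> stacked BiLSTM -> self-attention RUL pipeline.
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny

class ConvNeXtBiLSTMSelfAttn(nn.Module):
    def __init__(self, lstm_hidden=128, lstm_layers=2, n_heads=4):
        super().__init__()
        backbone = convnext_tiny(weights=None)            # ConvNeXt feature extractor
        self.cnn = nn.Sequential(backbone.features,       # convolution / pooling / LayerNorm stages
                                 nn.AdaptiveAvgPool2d(1),
                                 nn.Flatten())            # -> (B*T, 768)
        self.bilstm = nn.LSTM(768, lstm_hidden, num_layers=lstm_layers,
                              batch_first=True, bidirectional=True)   # stacked BiLSTM
        self.attn = nn.MultiheadAttention(2 * lstm_hidden, n_heads, batch_first=True)
        self.head = nn.Linear(2 * lstm_hidden, 1)         # scalar RUL per sequence

    def forward(self, x):             # x: (B, T, 3, H, W) sequence of CWT time-frequency images
        B, T = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(B, T, -1)  # per-image ConvNeXt features
        seq, _ = self.bilstm(feats)                       # temporal dependencies
        ctx, _ = self.attn(seq, seq, seq)                 # self-attention re-weights the time steps
        return self.head(ctx.mean(dim=1)).squeeze(-1)     # pooled representation -> predicted RUL

rul = ConvNeXtBiLSTMSelfAttn()(torch.randn(2, 8, 3, 64, 64))  # toy batch: 2 sequences of 8 images
print(rul.shape)  # torch.Size([2])
```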
2. Multi-Head Attention Spatial-Temporal Graph Neural Networks for Traffic Forecasting
Authors: Xiuwei Hu, Enlong Yu, Xiaoyu Zhao. Journal of Computer and Communications, 2024, Issue 3, pp. 52-67 (16 pages).
Accurate traffic prediction is crucial for an intelligent traffic system (ITS). However, the excessive non-linearity and complexity of the spatial-temporal correlation in traffic flow severely limit the prediction accuracy of most existing models, which simply stack temporal and spatial modules and fail to capture spatial-temporal features effectively. To improve the prediction accuracy, a multi-head attention spatial-temporal graph neural network (MSTNet) is proposed in this paper. First, the traffic data is decomposed into unique time spans that conform to positive rules, and valuable traffic node attributes are mined through an adaptive graph structure. Second, temporal and spatial features are captured using a multi-head attention spatial-temporal module. Finally, a multi-step prediction module is used to predict future traffic conditions. Numerical experiments were conducted on an open-source dataset, and the results demonstrate that MSTNet performs well in spatial-temporal feature extraction and achieves better forecasting results than the baseline methods.
Keywords: traffic prediction; intelligent traffic system; multi-head attention; graph neural networks
3. Stacked Attention Networks for Referring Expressions Comprehension
Authors: Yugang Li, Haibo Sun, Zhe Chen, Yudan Ding, Siqi Zhou. Computers, Materials & Continua (SCIE, EI), 2020, Issue 12, pp. 2529-2541 (13 pages).
Referring expressions comprehension is the task of locating the image region described by a natural language expression that refers to the properties of the region or its relationships with other regions. Most previous work handles this problem by selecting the most relevant regions from a set of candidate regions; when the set contains many candidates, these methods are inefficient. Inspired by the recent success of deep learning methods in image captioning, in this paper we propose a framework that understands referring expressions through multiple steps of reasoning. We present a model for referring expressions comprehension that selects the most relevant region directly from the image. The core of our model is a recurrent attention network, which can be seen as an extension of the Memory Network. The proposed model is capable of improving the results through multiple computational hops. We evaluate the proposed model on two referring expression datasets: Visual Genome and Flickr30k Entities. The experimental results demonstrate that the proposed model outperforms previous state-of-the-art methods in both accuracy and efficiency. We also conduct an ablation experiment showing that performance does not keep improving as the number of attention layers increases.
Keywords: stacked attention networks; referring expressions; visual relationship; deep learning
4. Short-Term Electric Load Forecasting Based on CNN-BiGRU-Attention (Cited: 2)
Authors: 任爽, 杨凯, 商继财, 祁继明, 魏翔宇, 蔡永根. 《电气工程学报》 (CSCD, 北大核心), 2024, Issue 1, pp. 344-350 (7 pages).
Electric load data are highly stochastic and affected by complex factors, and traditional single prediction models have low accuracy. Combining the respective strengths of the convolutional neural network (CNN), the bi-directional gated recurrent unit (BiGRU), and the attention mechanism in short-term load forecasting, a hybrid CNN-BiGRU-Attention prediction model is proposed. The method first uses a CNN to perform preliminary feature extraction on historical load and meteorological data, then uses a BiGRU to further mine the temporal correlations among the feature data, and then introduces an attention mechanism that assigns different weights to the BiGRU output states to strengthen the key features, before finally producing the load forecast. Experimental results show that the model achieves a mean absolute percentage error (MAPE) of 0.167%, a root mean square error (RMSE) of 0.057%, and a coefficient of determination (R²) of 0.993; all three indicators are clearly better than those of the other models, showing higher prediction accuracy and stability and verifying the model's advantage in short-term load forecasting.
Keywords: convolutional neural network; bi-directional gated recurrent unit; attention mechanism; short-term electric load forecasting; hybrid prediction model
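A minimal sketch of the CNN, BiGRU, and attention pipeline summarized above, assuming five input features (load plus weather), a one-step horizon, and additive attention over the BiGRU outputs; all sizes are illustrative rather than the configuration used in the paper.

```python
# CNN for local feature extraction, BiGRU for temporal correlation, attention for key-step weighting.
import torch
import torch.nn as nn

class CNNBiGRUAttention(nn.Module):
    def __init__(self, n_features=5, hidden=64, horizon=1):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
                                 nn.ReLU(),
                                 nn.MaxPool1d(2))
        self.bigru = nn.GRU(32, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)      # additive attention scorer over time steps
        self.out = nn.Linear(2 * hidden, horizon)  # load forecast

    def forward(self, x):                                     # x: (B, T, n_features)
        z = self.cnn(x.transpose(1, 2)).transpose(1, 2)       # (B, T/2, 32) local features
        h, _ = self.bigru(z)                                  # (B, T/2, 2*hidden) temporal features
        w = torch.softmax(self.score(torch.tanh(h)), dim=1)   # attention weights on key time steps
        return self.out((w * h).sum(dim=1))                   # weighted summary -> forecast

model = CNNBiGRUAttention()
print(model(torch.randn(16, 24, 5)).shape)  # torch.Size([16, 1])
```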
5. Research on Monthly Domestic Water Demand Prediction Based on CNN-LSTM-Attention
Authors: 陈星, 沈紫菡, 许钦, 蔡晶. 《三峡大学学报(自然科学版)》 (CAS, 北大核心), 2024, Issue 5, pp. 1-6 (6 pages).
Water demand prediction is an important part of water resources allocation and provides important guidance for the rational development and use of water resources and for sustainable social development. Taking Shaanxi Province as the study area and combining big-data analysis, this paper proposes a monthly domestic water demand prediction model based on CNN-LSTM-Attention. First, a convolutional neural network (CNN) extracts the dynamic variation features of the data; then a long short-term memory (LSTM) network learns from the extracted features; finally, an attention mechanism assigns different weights to the LSTM hidden states, the monthly domestic water demand is predicted, and the predictions are compared with the observed data. The results show that the CNN-LSTM-Attention model achieves a mean relative error of 2.54% and a coefficient of determination (R²) of 0.95, meeting the required prediction accuracy and achieving higher accuracy than the LSTM model. This further demonstrates the soundness of the model's predictions, which can guide water resources planning in Shaanxi Province.
Keywords: monthly scale; water demand prediction; convolutional neural network; long short-term memory network; attention mechanism; factor screening
6. SOC Estimation of Lithium Batteries Based on SABO-GRU-Attention
Authors: 薛家祥, 王凌云. 《电源技术》 (CAS, 北大核心), 2024, Issue 11, pp. 2169-2173 (5 pages).
A state-of-charge (SOC) estimation method for lithium batteries based on SABO-GRU-Attention (subtraction-average-based optimizer, gated recurrent unit, attention) is proposed. The subtraction-average-based optimizer adaptively updates the hyperparameters of the GRU network, and an SE (squeeze-and-excitation) attention mechanism adaptively assigns weights to each channel to improve learning efficiency. The University of Maryland battery dataset is preprocessed, voltage and current parameters are used as inputs for lithium-battery charge-discharge simulation experiments, and a lithium-battery SOC test platform is built for charge-discharge experiments on energy-storage lithium batteries. The results show that the proposed SOC estimation network clearly outperforms LSTM, GRU, and PSO-GRU models, with high estimation accuracy and practical value.
Keywords: SOC estimation; SABO algorithm; GRU neural network; attention mechanism
7. A Metaphor Recognition and Sentiment Classification Model Fusing RoBERTa, GCN, and Attention
Authors: 杨春霞, 韩煜, 桂强, 陈启岗. 《小型微型计算机系统》 (CSCD, 北大核心), 2024, Issue 3, pp. 576-583 (8 pages).
In joint research on metaphor recognition and metaphor sentiment classification, existing multi-task learning models do not extract the contextual semantic information and syntactic structure information of metaphor corpora accurately enough, and they cannot capture coarse-grained and fine-grained information at the same time. For the first problem, the traditional RoBERTa model is improved by introducing contextual information into its self-attention mechanism so as to extract the important metaphorical semantic features in the context, and a graph convolutional network over the syntactic dependency tree is used to extract the syntactic structure information of metaphorical sentences. For the second problem, a two-level attention mechanism focuses on the word-level and sentence-level features that contribute to metaphor recognition and sentiment classification. Comparative experiments on six datasets across the two tasks show that the model outperforms the baseline models on all of them.
Keywords: metaphor recognition; sentiment classification; multi-task learning; RoBERTa; graph convolutional network; attention mechanism
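The syntactic branch described above, a graph convolution over the dependency tree applied to contextual token embeddings and feeding separate metaphor and sentiment heads, can be illustrated with the sketch below. The random tensors stand in for RoBERTa output and for a dependency adjacency matrix; the single GCN layer, the dimensions, and the mean-pooled sentence head are assumptions for illustration.

```python
# One GCN layer over a dependency adjacency matrix with two task heads (multi-task setup).
import torch
import torch.nn as nn

class DependencyGCNMultiTask(nn.Module):
    def __init__(self, dim=768, hidden=256, n_sentiments=3):
        super().__init__()
        self.gcn = nn.Linear(dim, hidden)                      # GCN layer: A_norm @ X @ W
        self.metaphor_head = nn.Linear(hidden, 2)              # per-token metaphor / literal
        self.sentiment_head = nn.Linear(hidden, n_sentiments)  # sentence-level sentiment

    def forward(self, tokens, adj):                            # tokens: (B, T, dim), adj: (B, T, T)
        adj = adj + torch.eye(adj.size(-1), device=adj.device)    # add self-loops
        deg = adj.sum(-1, keepdim=True).clamp(min=1)               # row degrees for normalization
        h = torch.relu(self.gcn(adj / deg @ tokens))               # propagate along dependency edges
        return self.metaphor_head(h), self.sentiment_head(h.mean(dim=1))

B, T = 2, 12
tokens = torch.randn(B, T, 768)               # stand-in for RoBERTa contextual embeddings
adj = torch.randint(0, 2, (B, T, T)).float()  # stand-in dependency-tree adjacency
tok_logits, sent_logits = DependencyGCNMultiTask()(tokens, adj)
print(tok_logits.shape, sent_logits.shape)    # torch.Size([2, 12, 2]) torch.Size([2, 3])
```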
8. A Short-Term Electric Load Forecasting Model Based on IGWO-Attention-GRU
Authors: 徐利美, 贺卫华, 李远, 朱燕芳, 续欣莹. 《信息技术》, 2024, Issue 12, pp. 101-108 (8 pages).
To improve the accuracy of short-term electric load forecasting, and given the strong volatility and high complexity of load series, a short-term load forecasting model based on an improved grey wolf optimizer (IGWO) tuning an Attention-GRU network is proposed that also takes meteorological factors and day types into account. First, the Attention-GRU network is constructed; second, the grey wolf optimizer (GWO) is improved and IGWO is used to search for the hyperparameters of the Attention-GRU network; finally, experiments with the IGWO-Attention-GRU model are carried out on an electric load dataset and compared with several forecasting models. The experimental results show that the IGWO-Attention-GRU model achieves the lowest MAPE, RMSE, and MAE among all the compared models, verifying its superiority.
Keywords: short-term electric load forecasting; GRU network; attention mechanism; improved grey wolf optimizer; hyperparameter optimization
9. Multi-Feature Fusion-Guided Multiscale Bidirectional Attention Networks for Logistics Pallet Segmentation (Cited: 1)
Authors: Weiwei Cai, Yaping Song, Huan Duan, Zhenwei Xia, Zhanguo Wei. Computer Modeling in Engineering & Sciences (SCIE, EI), 2022, Issue 6, pp. 1539-1555 (17 pages).
In the smart logistics industry, unmanned forklifts that intelligently identify logistics pallets can improve work efficiency in warehousing and transportation and are better than traditional manual forklifts driven by humans. Therefore, they play a critical role in smart warehousing, and semantic segmentation is an effective method to realize the intelligent identification of logistics pallets. However, most current recognition algorithms are ineffective due to the diverse types of pallets, their complex shapes, frequent blockades in production environments, and changing lighting conditions. This paper proposes a novel multi-feature fusion-guided multiscale bidirectional attention (MFMBA) neural network for logistics pallet segmentation. To better predict the foreground category (the pallet) and the background category (the cargo) of a pallet image, our approach extracts three types of features (grayscale, texture, and Hue, Saturation, Value features) and fuses them. The multiscale architecture deals with the problem that the size and shape of the pallet may appear different in the image in the actual, complex environment, which usually makes feature extraction difficult. Our study proposes a multiscale architecture that can extract additional semantic features. Also, since a traditional attention mechanism only assigns attention weights from a single direction, we designed a bidirectional attention mechanism that assigns cross-attention weights to each feature from two directions, horizontally and vertically, significantly improving segmentation. Finally, comparative experimental results show that the precision of the proposed algorithm is 0.53%–8.77% better than that of the other methods we compared.
Keywords: logistics pallet segmentation; image segmentation; multi-feature fusion; multiscale network; bidirectional attention mechanism; HSV; neural networks; deep learning
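One plausible reading of the bidirectional attention described above is a module that scores the fused feature map along the horizontal and vertical axes separately and applies both sets of weights to the input. The sketch below follows that reading; the 1x1-convolution scorers and the channel count are assumptions, not the paper's exact design.

```python
# Row-wise and column-wise attention weights applied to a feature map from both directions.
import torch
import torch.nn as nn

class BidirectionalAttention(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.h_score = nn.Conv2d(channels, 1, kernel_size=1)  # scores for horizontal positions
        self.v_score = nn.Conv2d(channels, 1, kernel_size=1)  # scores for vertical positions

    def forward(self, x):                                                      # x: (B, C, H, W)
        h_w = torch.softmax(self.h_score(x).mean(dim=2, keepdim=True), dim=3)  # (B, 1, 1, W)
        v_w = torch.softmax(self.v_score(x).mean(dim=3, keepdim=True), dim=2)  # (B, 1, H, 1)
        return x * h_w * v_w                        # attention weights applied from both axes

feat = torch.randn(2, 64, 32, 32)                   # e.g. fused grayscale/texture/HSV features
print(BidirectionalAttention()(feat).shape)         # torch.Size([2, 64, 32, 32])
```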
10. Combing Type-Aware Attention and Graph Convolutional Networks for Event Detection (Cited: 1)
Authors: Kun Ding, Lu Xu, Ming Liu, Xiaoxiong Zhang, Liu Liu, Daojian Zeng, Yuting Liu, Chen Jin. Computers, Materials & Continua (SCIE, EI), 2023, Issue 1, pp. 641-654 (14 pages).
Event detection (ED) is aimed at detecting event occurrences and categorizing them. This task has previously been solved via recognition and classification of event triggers (ETs), which are defined as the phrase or word most clearly expressing event occurrence. Thus, current approaches require both annotated triggers and event types in training data. Nevertheless, triggers are non-essential in ED, and it is time-wasting for annotators to identify the "most clearly" word in a sentence, particularly in longer sentences. To decrease manual effort, we evaluate event detection without triggers. We propose a novel framework that combines Type-aware Attention and Graph Convolutional Networks (TA-GCN) for event detection. Specifically, the task is formulated as a multi-label classification problem. We first encode the input sentence using a novel type-aware neural network with attention mechanisms. Then, a Graph Convolutional Networks (GCN)-based multi-label classification model is exploited for event detection. Experimental results demonstrate the effectiveness of the proposed framework.
Keywords: event detection; information extraction; type-aware attention; graph convolutional networks
11. DHSEGATs: Distance and Hop-Wise Structures Encoding Enhanced Graph Attention Networks (Cited: 1)
Authors: HUANG Zhiguo. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2023, Issue 2, pp. 350-359 (10 pages).
Numerous works prove that existing neighbor-averaging graph neural networks (GNNs) cannot efficiently capture structure features, and many works show that injecting structure, distance, position, or spatial features can significantly improve the performance of GNNs; however, injecting high-level structure and distance into GNNs is an intuitive but untouched idea. This work sheds light on this issue and proposes a scheme to enhance graph attention networks (GATs) by encoding distance and hop-wise structure statistics. Firstly, the hop-wise structure and distributional distance information are extracted based on several hop-wise ego-nets of every target node. Secondly, the derived structure information, distance information, and intrinsic features are encoded into the same vector space and then added together to get the initial embedding vectors. Thirdly, the derived embedding vectors are fed into GATs, such as GAT and the adaptive graph diffusion network (AGDN), to get the soft labels. Fourthly, the soft labels are fed into correct and smooth (C&S) to conduct label propagation and get the final predictions. Experiments show that the distance and hop-wise structures encoding enhanced graph attention networks (DHSEGATs) achieve a competitive result.
Keywords: graph attention network (GAT); graph structure information; label propagation
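A sketch of the encoding step described above: hop-wise structure statistics are computed from each target node's ego-net, projected into the same vector space as the intrinsic features, and added together to form the initial embeddings that a GAT would consume. Counting only the nodes first reached at hops 1 and 2 is a simplifying assumption, as are the dimensions.

```python
# Hop-wise structure counts + intrinsic features -> shared embedding space (input to a GAT).
import torch
import torch.nn as nn

def hop_wise_counts(adj: torch.Tensor, max_hop: int = 2) -> torch.Tensor:
    """Per-node count of nodes first reached at each hop, shape (N, max_hop)."""
    n = adj.size(0)
    reached = torch.eye(n, dtype=torch.bool)
    frontier = torch.eye(n)
    counts = []
    for _ in range(max_hop):
        newly = ((frontier @ adj) > 0) & ~reached      # nodes newly reached at this hop
        counts.append(newly.sum(dim=1, keepdim=True).float())
        reached |= newly
        frontier = newly.float()
    return torch.cat(counts, dim=1)

class StructureEnhancedEmbedding(nn.Module):
    def __init__(self, in_dim, hidden, max_hop=2):
        super().__init__()
        self.max_hop = max_hop
        self.feat_proj = nn.Linear(in_dim, hidden)     # intrinsic node features
        self.struct_proj = nn.Linear(max_hop, hidden)  # hop-wise structure statistics

    def forward(self, x, adj):
        return self.feat_proj(x) + self.struct_proj(hop_wise_counts(adj, self.max_hop))

adj = (torch.rand(6, 6) > 0.6).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)  # symmetric adjacency, no self-loops
emb = StructureEnhancedEmbedding(in_dim=8, hidden=16)(torch.randn(6, 8), adj)
print(emb.shape)  # torch.Size([6, 16]) -- ready to feed a GAT layer
```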
12. Research on the Application of Attention U-Net to Image-Based Radar Signal Sorting
Authors: 郭立民, 张鹤韬, 莫禹涵, 于飒宁, 胡懿真. 《舰船电子对抗》, 2024, Issue 3, pp. 78-83, 95 (7 pages).
To address the challenge that the complex electromagnetic environment of the naval battlefield poses to radar signal sorting, a new sorting method is proposed that combines an improved U-Net with an attention mechanism. First, pulse description words are converted into image sequences suitable for deep learning. By optimizing the U-Net architecture and incorporating the attention mechanism, the model's ability to identify and extract key pulse features is effectively improved, enabling pixel-level classification. With this method, the system can accurately search for and categorize all radar pulses. Experiments show that in the complex electromagnetic environment of the naval battlefield the method significantly improves the accuracy of radar signal sorting, providing an efficient solution under strong interference. These results confirm the superiority and practicality of Attention U-Net for intelligent radar signal sorting.
Keywords: radar signal sorting; U-Net network; attention mechanism; pulse description word
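The attention mechanism grafted onto the U-Net here is commonly realized as an additive attention gate on the skip connections, as in the original Attention U-Net; a minimal version is sketched below. The channel counts are illustrative, and the paper's exact gating design may differ.

```python
# Attention gate: the decoder (gating) signal re-weights encoder skip features before concatenation.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, skip, gate):             # skip: (B, skip_ch, H, W), gate: (B, gate_ch, H, W)
        a = torch.relu(self.w_skip(skip) + self.w_gate(gate))   # additive attention
        return skip * self.psi(a)              # suppress irrelevant encoder activations

skip = torch.randn(1, 64, 32, 32)              # encoder features on the skip connection
gate = torch.randn(1, 128, 32, 32)             # decoder features upsampled to the same resolution
print(AttentionGate(64, 128, 32)(skip, gate).shape)  # torch.Size([1, 64, 32, 32])
```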
13. HDAM: Heuristic Difference Attention Module for Convolutional Neural Networks (Cited: 2)
Authors: Yu Xue, Ziming Yuan. Journal on Internet of Things, 2022, Issue 1, pp. 57-67 (11 pages).
The attention mechanism is one of the most important forms of prior knowledge for enhancing convolutional neural networks. Most attention mechanisms are bound to the convolutional layer and use local or global contextual information to recalibrate the input; this is a popular attention strategy design method. Global contextual information helps the network to consider the overall distribution, while local contextual information is more general. The contextual information makes the network pay attention to the mean or maximum value of a particular receptive field. Different from most attention mechanisms, this article proposes a novel attention mechanism with the heuristic difference attention module (HDAM). HDAM's input recalibration is based on the difference between the local and global contextual information instead of the mean and maximum values. At the same time, to give different layers more suitable local receptive field sizes and to increase the flexibility of the local receptive field design, we use a genetic algorithm to heuristically produce local receptive fields. First, HDAM extracts the mean value of the global and local receptive fields as the corresponding contextual information. Then the difference between the global and local contextual information is calculated. Finally, HDAM uses this difference to recalibrate the input. In addition, we use the heuristic ability of the genetic algorithm to search for the local receptive field size of each layer. Our experiments on CIFAR-10 and CIFAR-100 show that HDAM can use fewer parameters than other attention mechanisms to achieve higher accuracy. We implement HDAM with the Python library PyTorch, and the code and models will be publicly available.
Keywords: attention mechanism; convolutional neural network; genetic algorithm
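The recalibration rule described in the abstract, weighting the input by the difference between local and global mean contextual information with a heuristically chosen local receptive field, can be sketched directly. The fixed 7x7 local window below stands in for the size the genetic algorithm would search for, and the sigmoid gate is an assumption about how the difference is mapped to weights.

```python
# HDAM-style recalibration: gate = f(local mean - global mean), output = input * gate.
import torch
import torch.nn as nn

class HeuristicDifferenceAttention(nn.Module):
    def __init__(self, local_kernel=7):
        super().__init__()
        self.local_pool = nn.AvgPool2d(local_kernel, stride=1, padding=local_kernel // 2)

    def forward(self, x):                               # x: (B, C, H, W)
        local_ctx = self.local_pool(x)                  # mean of the local receptive field
        global_ctx = x.mean(dim=(2, 3), keepdim=True)   # mean of the global receptive field
        gate = torch.sigmoid(local_ctx - global_ctx)    # the difference drives the recalibration
        return x * gate

x = torch.randn(2, 16, 32, 32)
print(HeuristicDifferenceAttention(local_kernel=7)(x).shape)  # torch.Size([2, 16, 32, 32])
```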
14. Multi-Factor Prediction of Water Inflow at Coal Mining Faces Based on an SSA-CG-Attention Model (Cited: 1)
Authors: 丁莹莹, 尹尚先, 连会青, 刘伟, 李启兴, 祁荣荣, 卜昌森, 夏向学, 李书乾. 《煤田地质与勘探》 (EI, CAS, CSCD, 北大核心), 2024, Issue 4, pp. 111-119 (9 pages).
Predicting water inflow at mine working faces plays an important role in ensuring mine safety, optimizing resource allocation, and improving working efficiency. To improve the accuracy and stability of the predictions, borehole water level and microseismic energy data, which are strongly correlated with water inflow, are selected as multi-factor feature variables, and an SSA-CG-Attention multi-factor prediction model for working-face water inflow is proposed. On top of a gated recurrent unit (GRU) that extracts temporal features, the model fuses a convolutional neural network (CNN) into a new network structure that extracts effective non-linear local features of the data, and adds an attention mechanism that focuses on the relevant input elements during prediction to improve accuracy. Finally, the sparrow search algorithm (SSA) is used to optimize the model parameters and avoid local optima. The predictions of the proposed model are compared with those of traditional single-factor BP, LSTM, and GRU models and of multi-factor MLP, SLP, SVR, LSTM, GRU, SSA-LSTM, and SSA-GRU models. The results show that the SSA algorithm converges quickly with the fewest iterations and avoids local optima, and that the SSA-CG-Attention model achieves an overall absolute error (E_MA) of 5.24 m³/h, a root mean square error (E_RMS) of 7.25 m³/h, a mean absolute percentage error (E_MAP) of 6%, and a summed indicator variance of 8.90. Its accuracy is higher than that of the other prediction models, and the multi-factor models give more stable predictions than the single-factor ones. The results provide a new approach to predicting water inflow at mine working faces and offer guidance for inflow prediction and control, with both theoretical and practical value.
Keywords: water inflow prediction; convolutional neural network; gated recurrent unit; attention mechanism; multi-factor prediction; microseismic energy
15. Self-Attention Transfer Networks for Speech Emotion Recognition (Cited: 4)
Authors: Ziping Zhao, Keru Wang, Zhongtian Bao, Zixing Zhang, Nicholas Cummins, Shihuang Sun, Haishuai Wang, Jianhua Tao, Björn W. Schuller. Virtual Reality & Intelligent Hardware, 2021, Issue 1, pp. 43-54 (12 pages).
Background: A crucial element of human-machine interaction, the automatic detection of emotional states from human speech has long been regarded as a challenging task for machine learning models. One vital challenge in speech emotion recognition (SER) is learning robust and discriminative representations from speech. Although machine learning methods have been widely applied in SER research, the inadequate amount of available annotated data has become a bottleneck impeding the extended application of such techniques (e.g., deep neural networks). To address this issue, we present a deep learning method that combines knowledge transfer and self-attention for SER tasks. Herein, we apply the log-Mel spectrogram with deltas and delta-deltas as inputs. Moreover, given that emotions are time dependent, we apply temporal convolutional neural networks to model the variations in emotions. We further introduce an attention transfer mechanism, which is based on a self-attention algorithm, to learn long-term dependencies. The self-attention transfer network (SATN) in our proposed approach takes advantage of attention transfer to learn attention from speech recognition, followed by transferring this knowledge into SER. An evaluation built on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset demonstrates the effectiveness of the proposed model.
Keywords: speech emotion recognition; attention transfer; self-attention; temporal convolutional neural networks (TCNs)
16. Workout Action Recognition in Video Streams Using an Attention Driven Residual DC-GRU Network (Cited: 1)
Authors: Arnab Dey, Samit Biswas, Dac-Nhuong Le. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 3067-3087 (21 pages).
Regular exercise is a crucial aspect of daily life, as it enables individuals to stay physically active, lowers the likelihood of developing illnesses, and enhances life expectancy. The recognition of workout actions in video streams holds significant importance in computer vision research, as it aims to enhance exercise adherence, enable instant recognition, advance fitness tracking technologies, and optimize fitness routines. However, existing action datasets often lack diversity and specificity for workout actions, hindering the development of accurate recognition models. To address this gap, the Workout Action Video dataset (WAVd) has been introduced as a significant contribution. WAVd comprises a diverse collection of labeled workout action videos, meticulously curated to encompass various exercises performed by numerous individuals in different settings. This research proposes an innovative framework based on the Attention driven Residual Deep Convolutional-Gated Recurrent Unit (ResDC-GRU) network for workout action recognition in video streams. Unlike image-based action recognition, videos contain spatio-temporal information, making the task more complex and challenging. While substantial progress has been made in this area, challenges persist in detecting subtle and complex actions, handling occlusions, and managing the computational demands of deep learning approaches. The proposed ResDC-GRU Attention model demonstrated exceptional classification performance with 95.81% accuracy in classifying workout action videos and also outperformed various state-of-the-art models. The method also yielded 81.6%, 97.2%, 95.6%, and 93.2% accuracy on established benchmark datasets, namely HMDB51, Youtube Actions, UCF50, and UCF101, respectively, showcasing its superiority and robustness in action recognition. The findings suggest practical implications in real-world scenarios where precise video action recognition is paramount, addressing the persisting challenges in the field. The WAVd dataset serves as a catalyst for the development of more robust and effective fitness tracking systems and ultimately promotes healthier lifestyles through improved exercise monitoring and analysis.
Keywords: workout action recognition; video stream action recognition; residual network; GRU; attention
17. Research on a Network Intrusion Detection Method for Industrial Control Systems Based on CNN-LSTM-Attention
Authors: 李笛, 杨东, 王文庆, 邓楠轶, 刘鹏飞, 崔逸群, 刘超飞, 朱博迪. 《热力发电》 (CAS, CSCD, 北大核心), 2024, Issue 5, pp. 115-121 (7 pages).
With the increase in network attacks of all kinds, the security of industrial control systems in energy and power infrastructure has become a growing concern. Considering the characteristics of power systems, a CNN-LSTM-Attention network intrusion detection algorithm is proposed that combines a convolutional neural network (CNN), a long short-term memory (LSTM) neural network, and an attention mechanism. Operating-state data of the pulverizing system of a 600 MW coal-fired unit under network attack in three typical operating conditions were constructed and collected in a laboratory simulation environment to train and evaluate the model. The results show that the proposed detection model outperforms CNN and LSTM models: its accuracy, precision, recall, and other metrics are the best among the compared methods, and its overall evaluation is superior to other intrusion detection approaches. The algorithm model is both novel and practical.
Keywords: industrial control system; network intrusion detection; CNN; LSTM neural network; attention mechanism
18. Attention-Relation Network for Mobile Phone Screen Defect Classification via a Few Samples (Cited: 1)
Authors: Jiao Mao, Guoliang Xu, Lijun He, Jiangtao Luo. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 4, pp. 1113-1120 (8 pages).
How to use a few defect samples to complete defect classification is a key challenge in the production of mobile phone screens. An attention-relation network for mobile phone screen defect classification is proposed in this paper. The architecture of the attention-relation network contains two modules: a feature extraction module and a feature metric module. Different from other few-shot models, an attention mechanism is applied to metric learning in our model to measure the distance between features, so as to pay attention to the correlation between features and suppress unwanted information. Besides, we combine dilated convolution and skip connections to extract more feature information for follow-up processing. We validate the attention-relation network on the mobile phone screen defect dataset. The experimental results show that the classification accuracy of the attention-relation network is 0.9486 under the 5-way 1-shot training strategy and 0.9039 under the 5-way 5-shot setting. It achieves excellent classification performance on mobile phone screen defects and clearly outperforms the compared methods.
Keywords: mobile phone screen defects; few samples; relation network; attention mechanism; dilated convolution
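A rough sketch of the idea above, under several assumptions: a dilated convolution with a skip connection embeds support and query images, each query-support pair is concatenated, a channel-attention gate re-weights the paired features, and a small scorer outputs the relation score. The layer sizes, the pairing scheme, and the 5-way setting are illustrative, not the paper's architecture.

```python
# Few-shot attention-relation sketch: embed, pair, attend over channels, score the relation.
import torch
import torch.nn as nn

class Embed(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.conv = nn.Conv2d(3, ch, 3, padding=2, dilation=2)  # dilated convolution branch
        self.skip = nn.Conv2d(3, ch, 1)                         # skip connection branch
        self.pool = nn.AdaptiveAvgPool2d(4)

    def forward(self, x):
        return self.pool(torch.relu(self.conv(x) + self.skip(x)))  # (B, ch, 4, 4)

class AttentionRelation(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.embed = Embed(ch)
        self.attn = nn.Sequential(nn.Linear(2 * ch, 2 * ch), nn.Sigmoid())   # channel attention
        self.score = nn.Sequential(nn.Flatten(), nn.Linear(2 * ch * 16, 1))  # relation scorer

    def forward(self, support, query):         # support: (N, 3, H, W), query: (Q, 3, H, W)
        s, q = self.embed(support), self.embed(query)
        pairs = torch.cat([q.unsqueeze(1).expand(-1, s.size(0), -1, -1, -1),
                           s.unsqueeze(0).expand(q.size(0), -1, -1, -1, -1)], dim=2)
        pair_feats = pairs.flatten(0, 1)                  # (Q*N, 2*ch, 4, 4)
        gate = self.attn(pair_feats.mean(dim=(2, 3)))     # attend to informative channels
        scores = self.score(pair_feats * gate[:, :, None, None])
        return scores.view(q.size(0), s.size(0))          # relation of each query to each class

support = torch.randn(5, 3, 32, 32)                # 5-way, one sample per class
query = torch.randn(3, 3, 32, 32)
print(AttentionRelation()(support, query).shape)   # torch.Size([3, 5])
```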
19. A Joint Entity-Relation Extraction Model Combining MacBERT and Talking-Heads Attention (Cited: 1)
Authors: 王春亮, 姚洁仪, 李昭. 《现代电子技术》 (北大核心), 2024, Issue 5, pp. 127-131 (5 pages).
Existing models for relation extraction from medical texts show insufficient semantic understanding during training, which can make relation extraction less effective. This paper proposes a joint entity-relation extraction model that combines MacBERT and Talking-Heads Attention. The model first uses the MacBERT language model to obtain dynamic character-level representations; as an improved BERT model, MacBERT reduces the gap between pre-training and fine-tuning and thus generalizes better. These dynamic representations are then fed into a bidirectional gated recurrent unit (BiGRU) to extract contextual features; BiGRU is an improved recurrent neural network (RNN) with better long-term dependency capture. After the contextual features are obtained, Talking-Heads Attention is used to capture global features; it is a self-attention mechanism that captures relationships between different positions in the text, improving the accuracy of relation extraction. Experimental results show that, compared with the joint entity-relation extraction model GRTE, this model improves the F1 score by 1%, precision by 0.4%, and recall by 1.5%.
Keywords: MacBERT; BiGRU; relation extraction; medical text; Talking-Heads Attention; deep learning; global features; neural network
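Talking-Heads Attention, named above, differs from standard multi-head attention in that the attention logits are linearly mixed across the heads both before and after the softmax. A minimal sketch follows; the model width, head count, and the idea of applying it to BiGRU outputs over MacBERT embeddings are illustrative assumptions.

```python
# Talking-heads attention: per-head logits are mixed across heads before and after the softmax.
import torch
import torch.nn as nn

class TalkingHeadsAttention(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.heads, self.d = heads, dim // heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.pre_mix = nn.Linear(heads, heads, bias=False)   # mix logits across heads (pre-softmax)
        self.post_mix = nn.Linear(heads, heads, bias=False)  # mix weights across heads (post-softmax)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                                    # x: (B, T, dim)
        B, T, _ = x.shape
        q, k, v = self.qkv(x).view(B, T, 3, self.heads, self.d).unbind(dim=2)
        logits = torch.einsum('bthd,bshd->bhts', q, k) / self.d ** 0.5        # (B, heads, T, T)
        logits = self.pre_mix(logits.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        weights = torch.softmax(logits, dim=-1)
        weights = self.post_mix(weights.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        ctx = torch.einsum('bhts,bshd->bthd', weights, v).reshape(B, T, -1)
        return self.out(ctx)

x = torch.randn(2, 10, 256)                # e.g. BiGRU outputs over MacBERT token embeddings
print(TalkingHeadsAttention()(x).shape)    # torch.Size([2, 10, 256])
```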
20. Topic-Aware Abstractive Summarization Based on Heterogeneous Graph Attention Networks for Chinese Complaint Reports
Authors: Yan Li, Xiaoguang Zhang, Tianyu Gong, Qi Dong, Hailong Zhu, Tianqiang Zhang, Yanji Jiang. Computers, Materials & Continua (SCIE, EI), 2023, Issue 9, pp. 3691-3705 (15 pages).
Automatic text summarization (ATS) plays a significant role in Natural Language Processing (NLP). Abstractive summarization produces summaries by identifying and compressing the most important information in a document. However, only a few comprehensively evaluated abstractive summarization models work well for specific types of reports, owing to their unstructured, colloquial text. In particular, Chinese complaint reports, generated by urban complainers and collected by government employees, describe existing resident problems in daily life, and the reported problems require a speedy response. Therefore, automatic summarization tasks for these reports have been developed. However, as with traditional summarization models, the generated summaries still have problems with informativeness and conciseness. To address these issues and generate suitably informative and less redundant summaries, a topic-based abstractive summarization method is proposed to obtain global and local features. Additionally, a heterogeneous graph of the original document is constructed using word-level and topic-level features. Experiments and analyses on public review datasets (Yelp and Amazon) and our constructed dataset (Chinese complaint reports) show that the proposed framework effectively improves the performance of the abstractive summarization model for Chinese complaint reports.
Keywords: text summarization; topic; Chinese complaint report; heterogeneous graph attention network