Journal Articles
7 articles found
1. Geological information prediction for shield machine using an enhanced multi-head self-attention convolution neural network with two-stage feature extraction (Cited by 3)
Authors: Chengjin Qin, Guoqiang Huang, Honggan Yu, Ruihong Wu, Jianfeng Tao, Chengliang Liu. Geoscience Frontiers (SCIE, CAS, CSCD), 2023, No. 2, pp. 86-104 (19 pages)
Due to the closed working environment of shield machines, the construction personnel cannot observe the construction geological environment, which seriously restricts the safety and efficiency of the tunneling process. In this study, we present an enhanced multi-head self-attention convolution neural network (EMSACNN) with two-stage feature extraction for geological condition prediction of shield machines. Firstly, we select 30 important parameters according to a statistical analysis method and the working principle of the shield machine. Then, we delete the non-working sample data and combine 10 consecutive records as the input of the model. Thereafter, to deeply mine and extract essential and relevant features, we build a novel model tailored to the particularity of the geological type recognition task, in which an enhanced multi-head self-attention block is utilized as the first feature extractor to fully capture the correlation of geological information across adjacent working faces of the tunnel, and a two-dimensional CNN (2dCNN) is utilized as the second feature extractor. The performance and superiority of the proposed EMSACNN are verified by actual data collected by the shield machine used in the construction of a double-track tunnel in Guangzhou, China. The results show that EMSACNN achieves at least 96% accuracy on the test sets of the two tunnels, and all evaluation indicators of EMSACNN are much better than those of classical AI models and of a model that uses only the second-stage feature extractor. Therefore, the proposed EMSACNN achieves high accuracy and strong generalization for geological information prediction of shield machines, which is of great guiding significance to engineering practice.
Keywords: Geological information prediction; Shield machine; Enhanced multi-head self-attention; CNN
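A minimal PyTorch sketch of the two-stage idea described in the abstract: multi-head self-attention over a window of consecutive operating records, followed by a 2-D CNN and a classifier. The layer sizes, number of heads, and class count below are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class TwoStageExtractor(nn.Module):
    """Sketch: attention across a window of records, then a 2-D CNN."""
    def __init__(self, n_params=30, n_heads=5, n_classes=4):
        super().__init__()
        # Stage 1: correlation between adjacent records in the window
        self.attn = nn.MultiheadAttention(embed_dim=n_params,
                                          num_heads=n_heads, batch_first=True)
        # Stage 2: 2-D convolution over the (window x parameters) feature map
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1)),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                        # x: (batch, window, n_params)
        attended, _ = self.attn(x, x, x)
        feat = self.cnn(attended.unsqueeze(1))   # add a channel dimension
        return self.fc(feat.flatten(1))

# 8 windows of 10 consecutive records with 30 operating parameters each
logits = TwoStageExtractor()(torch.randn(8, 10, 30))
```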
2. Detecting APT-Exploited Processes through Semantic Fusion and Interaction Prediction
Authors: Bin Luo, Liangguo Chen, Shuhua Ruan, Yonggang Luo. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 1731-1754 (24 pages)
Considering the stealthiness and persistence of Advanced Persistent Threats (APTs), recent studies leverage system audit logs to construct system entity interaction provenance graphs that unveil threats in a host. Rule-based provenance graph APT detection approaches require elaborate rules and cannot detect unknown attacks, while existing learning-based approaches are limited by the lack of available APT attack samples or generally perform only graph-level anomaly detection, which requires substantial manual effort to locate attack entities. This paper proposes an APT-exploited process detection approach called ThreatSniffer, which constructs a benign provenance graph from attack-free audit logs, fits normal system entity interactions, and then detects APT-exploited processes by predicting the rationality of entity interactions. Firstly, ThreatSniffer characterizes system entities in terms of their file paths, interaction sequences, and the distribution of interaction-type counts, and uses the multi-head self-attention mechanism to fuse these semantics. Then, based on the insight that APT-exploited processes interact with system entities they should not invoke, ThreatSniffer performs negative sampling on the benign provenance graph to generate non-existent edges, thus characterizing irrational entity interactions without requiring APT attack samples. Finally, it employs a heterogeneous graph neural network as the interaction prediction model to aggregate the contextual information of entity interactions and locate processes exploited by attackers, thereby achieving fine-grained APT detection. Evaluation results demonstrate that anomaly-based detection enables ThreatSniffer to identify all attack activities. Compared to the node-level APT detection method APT-KGL, ThreatSniffer achieves a 6.1% precision improvement owing to its comprehensive understanding of entity semantics.
Keywords: advanced persistent threat; provenance graph; multi-head self-attention; graph neural network
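A small Python sketch of the negative-sampling step described above: drawing (process, object) pairs that never occur in the benign provenance graph, so they can serve as negative edges for the interaction-prediction model. The helper name and the toy edges are hypothetical, not from the paper.

```python
import random

def sample_negative_edges(edges, processes, objects, k, max_tries=10000):
    """Return up to k (process, object) pairs absent from the benign graph.

    These "irrational" interactions act as negative training edges,
    avoiding the need for real APT attack samples.
    """
    existing, negatives = set(edges), set()
    for _ in range(max_tries):
        if len(negatives) >= k:
            break
        cand = (random.choice(processes), random.choice(objects))
        if cand not in existing:
            negatives.add(cand)
    return list(negatives)

# Toy benign interactions; real graphs come from attack-free audit logs.
benign_edges = [("bash", "/etc/passwd"), ("sshd", "/var/log/auth.log")]
print(sample_negative_edges(benign_edges,
                            processes=["bash", "sshd"],
                            objects=["/etc/passwd", "/var/log/auth.log", "/etc/shadow"],
                            k=2))
```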
3. A New Industrial Intrusion Detection Method Based on CNN-BiLSTM
Authors: Jun Wang, Changfu Si, Zhen Wang, Qiang Fu. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 4297-4318 (22 pages)
Nowadays, with the rapid development of industrial Internet technology, advanced industrial control systems (ICS) have improved industrial production efficiency, but at the same time they face more and more cyber-attacks. To ensure the security of industrial networks, intrusion detection systems have been widely used in industrial control systems, and deep neural networks have long been an effective method for identifying cyber attacks. Current intrusion detection methods still suffer from low accuracy and a high false alarm rate, so it is important to build a more efficient intrusion detection model. This paper proposes a hybrid deep learning intrusion detection method based on convolutional neural networks and bidirectional long short-term memory neural networks (CNN-BiLSTM). To address the issue of imbalanced data within the dataset and improve the model's detection capabilities, the Synthetic Minority Over-sampling Technique-Edited Nearest Neighbors (SMOTE-ENN) algorithm is applied in the preprocessing phase. This algorithm generates synthetic instances for the minority class while mitigating the impact of noise in the majority class, creating a more equitable class distribution and thereby enhancing the model's ability to identify patterns in both minority and majority classes. In the experimental phase, the detection performance of the method is verified on two datasets. Experimental results show that the accuracy rate on the CICIDS-2017 dataset reaches 97.7%. On the natural gas pipeline dataset collected by Lan Turnipseed at Mississippi State University in the United States, the accuracy rate reaches 85.5%.
Keywords: intrusion detection; convolutional neural network; bidirectional long short-term memory neural network; multi-head self-attention mechanism
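As an illustration of the SMOTE-ENN preprocessing step, the sketch below rebalances a synthetic imbalanced dataset with imblearn's SMOTEENN. The data and parameters are stand-ins, not the CICIDS-2017 or gas-pipeline data used in the paper, and the paper's own preprocessing pipeline may differ.

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.combine import SMOTEENN

# Synthetic stand-in for imbalanced intrusion-detection records:
# 95% "normal" traffic, 5% "attack" traffic.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)

# SMOTE oversamples the minority class; ENN then removes noisy samples.
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X, y)
print("before:", Counter(y), "after:", Counter(y_res))
```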
4. Entity Relation Classification Based on Multi-head Attention and Bi-LSTM (Cited by 11)
Authors: Liu Feng, Gao Sai, Yu Bihui, Guo Fangda. 《计算机系统应用》 (Computer Systems & Applications), 2019, No. 6, pp. 118-124 (7 pages)
Relation classification is an important task in natural language processing, providing technical support for knowledge graph construction, question answering systems, and information retrieval. Compared with traditional relation classification methods, models based on neural networks and attention mechanisms achieve better performance across a variety of relation classification tasks. However, most previous models adopt a single-layer attention mechanism, so their feature representations are relatively limited. Building on existing work, this paper introduces a multi-head attention mechanism so that the model can capture information about the sentence from multiple representation subspaces and improve its feature expression capability. In addition to the word embeddings and position embeddings used as network input, dependency-syntax features and relative core-predicate dependency features are further introduced, where the dependency-syntax features include the dependency relation of the current word and the position of its parent node, allowing the model to obtain more syntactic information about the text. Experimental results on the SemEval-2010 Task 8 dataset show that the proposed method further improves performance compared with previous deep learning models.
Keywords: relation classification; Bi-LSTM; syntactic features; self-attention; multi-head attention
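A minimal PyTorch sketch of the architecture outlined above: a Bi-LSTM encoder whose hidden states are re-weighted by multi-head self-attention before relation classification. The extra dependency-syntax and predicate-dependency inputs are omitted, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMAttnClassifier(nn.Module):
    """Sketch: Bi-LSTM encoding, multi-head attention, mean-pool, classify."""
    def __init__(self, vocab=10000, emb=128, hidden=64, heads=4, n_rel=10):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_rel)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        h, _ = self.lstm(self.emb(tokens))      # (batch, seq_len, 2*hidden)
        a, _ = self.attn(h, h, h)               # several attention subspaces
        return self.out(a.mean(dim=1))          # pool over the sentence

logits = BiLSTMAttnClassifier()(torch.randint(0, 10000, (4, 30)))
```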
5. Neural Machine Translation for the Electrical Engineering Domain with Fused Low-Level Information (Cited by 1)
Authors: Chen Yuan, Chen Hong. 《河南科技大学学报(自然科学版)》 (Journal of Henan University of Science and Technology, Natural Science Edition; CAS, Peking University Core), 2023, No. 6, pp. 42-48, M0004-M0005 (9 pages)
To address the loss of low-level information and the differing output bias of stacked internal units in Transformer, the current mainstream neural machine translation model, this paper improves its structure and proposes a neural machine translation model that fuses low-level information. Several network structures are used to extract low-level features from the source language, and residual connections pass the low-level information upward through the model. Experimental results show that, after fusing the low-level information, the translation model improves the bilingual evaluation understudy (BLEU) score in the electrical engineering domain by up to 2.47 percentage points.
Keywords: neural machine translation; electrical engineering; low-level information; multi-head self-attention
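A rough PyTorch sketch of the fusion idea, assuming a convolutional branch as the low-level feature extractor and a residual connection that adds its output to the stacked Transformer encoder output; the actual network structures and fusion points used in the paper may differ.

```python
import torch
import torch.nn as nn

class BottomFusedEncoder(nn.Module):
    """Sketch: pass low-level source features upward via a residual path."""
    def __init__(self, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        # Low-level feature extractor over the source embeddings (assumed 1-D conv)
        self.bottom = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, src_emb):                       # (batch, seq, d_model)
        low = self.bottom(src_emb.transpose(1, 2)).transpose(1, 2)
        return self.encoder(src_emb) + low            # residual fusion

out = BottomFusedEncoder()(torch.randn(2, 12, 256))
```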
6. A Chinese Text Classification Method Incorporating Multi-Head Self-Attention (Cited by 7)
Authors: Xiong Xuan, Yan Peimin. 《电子测量技术》 (Electronic Measurement Technology), 2020, No. 10, pp. 125-130 (6 pages)
In Chinese text classification tasks, deep neural network methods offer automatic feature extraction and strong feature representation, but their models lack interpretability. This paper proposes a Text-CNN + Multi-Head Attention model, introducing a multi-head self-attention mechanism to compensate for Text-CNN's limited interpretability. The Text-CNN network is first used to efficiently extract local features of the text; a multi-head self-attention mechanism is then introduced to make full use of Text-CNN's parallel computing capability while emphasizing the capture of global information in the text sequence; finally, feature extraction of the text is completed in both time and space. Experimental results show that, compared with other models, the proposed model improves accuracy by 1%-2% while maintaining computational speed.
Keywords: Chinese text classification; Text-CNN; multi-head self-attention
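A minimal PyTorch sketch of a Text-CNN + multi-head self-attention classifier in the spirit of the abstract: attention supplies global context over the token sequence, then parallel convolution branches extract local n-gram features. Vocabulary size, kernel widths, and filter counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TextCNNAttn(nn.Module):
    """Sketch: multi-head self-attention for global context + Text-CNN branches."""
    def __init__(self, vocab=20000, emb=128, heads=4, n_filters=64, n_classes=10):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.attn = nn.MultiheadAttention(emb, heads, batch_first=True)
        # Parallel convolutions over n-grams of width 2, 3, and 4
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb, n_filters, k, padding=k // 2) for k in (2, 3, 4)])
        self.fc = nn.Linear(3 * n_filters, n_classes)

    def forward(self, tokens):                        # (batch, seq_len)
        e = self.emb(tokens)
        x, _ = self.attn(e, e, e)                     # global context
        x = x.transpose(1, 2)                         # (batch, emb, seq_len)
        feats = [c(x).max(dim=2).values for c in self.convs]  # local n-grams
        return self.fc(torch.cat(feats, dim=1))

logits = TextCNNAttn()(torch.randint(0, 20000, (4, 50)))
```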
7. An Innovative Approach Utilizing Binary-View Transformer for Speech Recognition Task (Cited by 3)
Authors: Muhammad Babar Kamal, Arfat Ahmad Khan, Faizan Ahmed Khan, Malik Muhammad Ali Shahid, Chitapong Wechtaisong, Muhammad Daud Kamal, Muhammad Junaid Ali, Peerapong Uthansakul. Computers, Materials & Continua (SCIE, EI), 2022, No. 9, pp. 5547-5562 (16 pages)
Deep learning advancements have greatly improved the performance of speech recognition systems, and most recent systems are based on the Recurrent Neural Network (RNN). Overall, the RNN works well with small sequence data but suffers from the gradient vanishing problem on long sequences. Transformer networks have neutralized this issue and have shown state-of-the-art results on sequential and speech-related data. Generally, in speech recognition, the input audio is converted into an image using a Mel-spectrogram to illustrate frequencies and intensities, and the image is classified by the machine learning mechanism to generate a classification transcript. However, the audio frequency in the image has low resolution, causing inaccurate predictions. This paper presents a novel end-to-end binary-view transformer-based architecture for speech recognition to cope with the frequency resolution problem. Firstly, the input audio signal is transformed into a 2D image using a Mel-spectrogram. Secondly, modified universal transformers utilize multi-head attention to derive contextual information and different speech-related features. Moreover, a feedforward neural network is deployed for classification. The proposed system has generated robust results on Google's speech command dataset with an accuracy of 95.16% and minimal loss. The binary-view transformer eradicates the eventuality of the over-fitting problem by deploying a multi-view mechanism to diversify the input data, and multi-head attention captures multiple contexts from the data's feature map.
Keywords: convolutional neural network; multi-head attention; multi-view; RNN; self-attention; speech recognition; transformer
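A short sketch of the Mel-spectrogram front end the abstract describes, using librosa on a synthetic tone. The sampling rate, number of Mel bands, and stand-in waveform are illustrative assumptions; the transformer itself is not shown.

```python
import numpy as np
import librosa

# Stand-in audio: a 1-second 440 Hz tone instead of a real speech command.
sr = 16000
waveform = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)

# Convert the waveform into a log-Mel-spectrogram "image" of shape
# (n_mels, frames), the 2-D input format described in the abstract.
mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)
print(log_mel.shape)
```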