Journal Articles
37 articles found (20 shown per page; page 1 of 2)
Clothing Parsing Based on Multi-Scale Fusion and Improved Self-Attention Mechanism
1
Authors: 陈诺, 王绍宇, 陆然, 李文萱, 覃志东, 石秀金. Journal of Donghua University (English Edition), CAS, 2023, No. 6, pp. 661-666 (6 pages)
Due to the lack of long-range association and spatial location information, fine details and accurate boundaries of complex clothing images cannot always be obtained with existing deep learning-based methods. This paper presents a convolutional structure with multi-scale fusion to optimize the clothing feature extraction step and a self-attention module to capture long-range association information. The structure enables the self-attention mechanism to participate directly in the information exchange process through the down-scaling projection operation of the multi-scale framework. In addition, the improved self-attention module introduces the extraction of 2-dimensional relative position information to make up for its limited ability to extract spatial position features from clothing images. Experimental results on the colorful fashion parsing dataset (CFPD) show that the proposed network structure achieves 53.68% mean intersection over union (mIoU) and performs better on the clothing parsing task.
Keywords: clothing parsing, convolutional neural network, multi-scale fusion, self-attention mechanism, vision Transformer
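The paper's own code is not reproduced here. As a rough, hedged sketch of the idea of letting self-attention operate on a down-scaled projection of a convolutional feature map (module names, sizes, and the residual fusion below are illustrative assumptions, not the authors' implementation, and the 2D relative-position term is omitted):

```python
import torch
import torch.nn as nn

class DownscaledSelfAttention(nn.Module):
    """Sketch: project a CNN feature map to a lower resolution, run multi-head
    self-attention over the spatial tokens, and upsample back.
    All sizes are illustrative assumptions."""
    def __init__(self, channels=256, heads=8, scale=2):
        super().__init__()
        # Strided convolution acts as the down-scaling projection.
        self.down = nn.Conv2d(channels, channels, kernel_size=scale, stride=scale)
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=heads,
                                          batch_first=True)
        self.up = nn.Upsample(scale_factor=scale, mode="bilinear",
                              align_corners=False)

    def forward(self, x):                      # x: (B, C, H, W)
        y = self.down(x)                       # (B, C, H/s, W/s)
        b, c, h, w = y.shape
        tokens = y.flatten(2).transpose(1, 2)  # (B, h*w, C) spatial tokens
        attended, _ = self.attn(tokens, tokens, tokens)
        y = attended.transpose(1, 2).reshape(b, c, h, w)
        return x + self.up(y)                  # residual fusion at the input scale

feat = torch.randn(1, 256, 32, 32)
print(DownscaledSelfAttention()(feat).shape)   # torch.Size([1, 256, 32, 32])
```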
Hierarchical multihead self-attention for time-series-based fault diagnosis
2
Authors: Chengtian Wang, Hongbo Shi, Bing Song, Yang Tao. Chinese Journal of Chemical Engineering, SCIE EI CAS CSCD, 2024, No. 6, pp. 104-117 (14 pages)
Fault diagnosis is important for maintaining the safety and effectiveness of chemical processes. Considering the multivariate, nonlinear, and dynamic characteristics of chemical processes, many time-series-based data-driven fault diagnosis methods have been developed in recent years. However, the existing methods struggle with long-term dependency and are difficult to train because of their sequential training procedure. To overcome these problems, a novel time-series fault diagnosis method based on hierarchical multihead self-attention (HMSAN) is proposed for chemical processes. First, a sliding window strategy is adopted to construct the normalized time-series dataset. Second, the HMSAN is developed to extract time-relevant features from the time-series process data; it improves the basic self-attention model in both width and depth. With the multihead structure, the HMSAN can attend to different aspects of the complicated chemical process and obtain global dynamic features. However, the multiple heads in parallel lead to redundant information, which does not improve the diagnosis performance. With the hierarchical structure, the redundant information is reduced and deep local time-related features are further extracted. In addition, a novel many-to-one training strategy is introduced for the HMSAN to simplify the training procedure and capture long-term dependency. Finally, the effectiveness of the proposed method is demonstrated on two chemical cases. The experimental results show that the proposed method performs well on time-series industrial data and outperforms state-of-the-art approaches.
Keywords: self-attention mechanism, deep learning, chemical process, time series, fault diagnosis
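The sliding-window preprocessing step described above can be sketched as follows; the window length, stride, and z-score scaling are assumptions for illustration, not values taken from the paper:

```python
import numpy as np

def sliding_windows(data, window=20, stride=1):
    """Normalize multivariate process data and cut it into overlapping windows.

    data: (T, n_vars) array of process measurements.
    Returns an array of shape (n_windows, window, n_vars).
    Window length and stride are illustrative assumptions.
    """
    mean, std = data.mean(axis=0), data.std(axis=0) + 1e-8
    normalized = (data - mean) / std
    starts = range(0, len(normalized) - window + 1, stride)
    return np.stack([normalized[s:s + window] for s in starts])

raw = np.random.randn(1000, 52)          # placeholder: 52 process variables
windows = sliding_windows(raw)
print(windows.shape)                     # (981, 20, 52)
```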
Keyphrase Generation Based on Self-Attention Mechanism
3
Authors: Kehua Yang, Yaodong Wang, Wei Zhang, Jiqing Yao, Yuquan Le. Computers, Materials & Continua, SCIE EI, 2019, No. 8, pp. 569-581 (13 pages)
Keyphrases provide summarized and valuable information that helps us not only understand text semantics but also organize and retrieve text content effectively. The task of generating them automatically has received considerable attention in recent decades, and previous studies offer many workable solutions. One method divides the content to be summarized into multiple blocks of text and then ranks and selects the most important content; its disadvantage is that it cannot identify keyphrases that do not appear in the text, let alone capture the real semantic meaning hidden in the text. Another approach uses recurrent neural networks to generate keyphrases from the semantic aspects of the text, but their inherently sequential nature precludes parallelization within training examples and limits long-distance context dependencies. Previous works have demonstrated the benefits of the self-attention mechanism, which can learn global text dependency features and can be parallelized. Inspired by these observations, we propose a keyphrase generation model based entirely on the self-attention mechanism. It is an encoder-decoder model that effectively makes up for the above disadvantages. In addition, we consider the semantic similarity between keyphrases and add a semantic similarity processing module to the model. Empirical analysis on five datasets shows that the proposed model achieves competitive performance compared with baseline methods.
Keywords: keyphrase generation, self-attention mechanism, encoder-decoder framework
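As a hedged sketch of the kind of self-attention encoder-decoder skeleton the abstract describes (vocabulary size, dimensions, and the teacher-forcing call below are assumptions, not the authors' model; positional encodings and the semantic similarity module are omitted):

```python
import torch
import torch.nn as nn

class KeyphraseSeq2Seq(nn.Module):
    """Sketch of a self-attention encoder-decoder; all sizes are illustrative."""
    def __init__(self, vocab=30000, d_model=512, heads=8, layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)   # positional encoding omitted
        self.transformer = nn.Transformer(d_model=d_model, nhead=heads,
                                          num_encoder_layers=layers,
                                          num_decoder_layers=layers,
                                          batch_first=True)
        self.out = nn.Linear(d_model, vocab)

    def forward(self, src_ids, tgt_ids):
        # Causal mask so the decoder only attends to earlier target tokens.
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        h = self.transformer(self.embed(src_ids), self.embed(tgt_ids),
                             tgt_mask=tgt_mask)
        return self.out(h)                 # (B, tgt_len, vocab) logits

model = KeyphraseSeq2Seq()
src = torch.randint(0, 30000, (2, 120))    # document token ids
tgt = torch.randint(0, 30000, (2, 6))      # keyphrase token ids (teacher forcing)
print(model(src, tgt).shape)               # torch.Size([2, 6, 30000])
```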
Using Recurrent Neural Network Structure and Multi-Head Attention with Convolution for Fraudulent Phone Text Recognition
4
Authors: Junjie Zhou, Hongkui Xu, Zifeng Zhang, Jiangkun Lu, Wentao Guo, Zhenye Li. Computer Systems Science & Engineering, SCIE EI, 2023, No. 8, pp. 2277-2297 (21 pages)
Fraud cases have become a social risk, and people's property security is seriously threatened. Recent studies have developed many promising algorithms for recognizing offensive text on social media and for sentiment analysis, and these algorithms are also suitable for fraudulent phone text recognition. Compared with those tasks, however, the semantics of fraudulent wording are more complex and harder to distinguish. Most text classification research extracts text features with Recurrent Neural Networks (RNN), RNN variants, Convolutional Neural Networks (CNN), or hybrid neural networks, but a single network or a simple combination of networks cannot capture rich characteristic knowledge of fraudulent phone texts. Therefore, a new model is proposed in this paper. In fraudulent phone text, the knowledge the model can learn includes the sequence structure of sentences, the correlation between words, contextual semantic correlation, and the features of keywords in sentences. The new model combines a bidirectional Long Short-Term Memory network (BiLSTM) or a bidirectional Gated Recurrent Unit (BiGRU) with a multi-head attention module with convolution, and a normalization layer is added after the output of the final hidden layer. BiLSTM or BiGRU builds the encoding and decoding layers, while the multi-head attention mechanism module with convolution (MHAC) enhances the model's ability to learn global interaction information and multi-granularity local interaction information in fraudulent sentences. A fraudulent phone text dataset was produced by the authors, and the THUCNews dataset and the fraudulent phone text dataset are used in the experiments. Experimental results show that, compared with the baseline models, the proposed model (LMHACL) achieves the best Accuracy, Precision, Recall, and F1 score on both datasets, and all performance indexes on the fraudulent phone text dataset are above 0.94.
Keywords: BiLSTM, BiGRU, multi-head attention mechanism, CNN
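A hedged sketch of the general pattern described above, a BiGRU encoder followed by multi-head attention and a convolutional block (layer sizes, kernel width, pooling, and the classifier head are illustrative assumptions, not the published LMHACL architecture):

```python
import torch
import torch.nn as nn

class BiGRUAttnConv(nn.Module):
    """Sketch: BiGRU encoding -> multi-head self-attention -> 1D convolution.
    All hyperparameters are illustrative assumptions."""
    def __init__(self, vocab=20000, emb=128, hidden=128, heads=4, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bigru = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.conv = nn.Conv1d(2 * hidden, 2 * hidden, kernel_size=3, padding=1)
        self.norm = nn.LayerNorm(2 * hidden)
        self.fc = nn.Linear(2 * hidden, classes)

    def forward(self, ids):                        # ids: (B, L) token ids
        h, _ = self.bigru(self.embed(ids))         # (B, L, 2*hidden)
        a, _ = self.attn(h, h, h)                  # global interaction information
        c = self.conv(a.transpose(1, 2)).transpose(1, 2)  # local interaction features
        pooled = self.norm(c).mean(dim=1)          # normalize, then pool over tokens
        return self.fc(pooled)                     # class logits

print(BiGRUAttnConv()(torch.randint(0, 20000, (4, 50))).shape)  # torch.Size([4, 2])
```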
Discharge Summaries Based Sentiment Detection Using Multi-Head Attention and CNN-BiGRU
5
Author: Samer Abdulateef Waheeb. Computer Systems Science & Engineering, SCIE EI, 2023, No. 7, pp. 981-998 (18 pages)
Automatic extraction of patient health information from the unstructured data in discharge summaries remains challenging. Discharge summary documents cover various aspects of the patient's health condition that can be used to examine the quality of treatment and thereby improve decision-making in the medical field. Researchers primarily mine semantic text features using sentiment dictionaries and feature engineering, but choosing and designing features requires a great deal of manual effort. The proposed approach is an unsupervised deep learning model that learns a set of clusters embedded in the latent space. A composite model including Active Learning (AL), a Convolutional Neural Network (CNN), BiGRU, and multi-head attention, called ACBMA in this research, is designed to measure the quality of treatment through sentiment detection on discharge summary text. The CNN extracts local features of text vectors, and a BiGRU network then extracts the text's global features, addressing both the inability of a single CNN to capture global semantic information and the gradient vanishing problem of traditional Recurrent Neural Networks (RNN). Experiments demonstrate the effectiveness of the suggested method: ACBMA achieves results comparable to state-of-the-art methods in sentiment detection and outperforms them on accuracy benchmarks. Several algorithmic studies further confirm that ACBMA is more precise for sentiment analysis of discharge summaries.
Keywords: sentiment analysis, lexicon, discharge summaries, active learning, multi-head attention mechanism
A New Industrial Intrusion Detection Method Based on CNN-BiLSTM
6
Authors: Jun Wang, Changfu Si, Zhen Wang, Qiang Fu. Computers, Materials & Continua, SCIE EI, 2024, No. 6, pp. 4297-4318 (22 pages)
With the rapid development of industrial Internet technology, advanced industrial control systems (ICS) have improved industrial production efficiency, but cyber-attacks targeting industrial control systems are also increasing. To ensure the security of industrial networks, intrusion detection systems have been widely deployed in industrial control systems, and deep neural networks have proven effective for identifying cyber attacks. Current intrusion detection methods, however, still suffer from low accuracy and a high false alarm rate, so it is important to build a more efficient intrusion detection model. This paper proposes a hybrid deep learning intrusion detection method based on convolutional neural networks and bidirectional long short-term memory networks (CNN-BiLSTM). To address imbalanced data within the dataset and improve the model's detection capabilities, the Synthetic Minority Over-sampling Technique-Edited Nearest Neighbors (SMOTE-ENN) algorithm is applied in the preprocessing phase. This algorithm generates synthetic instances for the minority class while mitigating the impact of noise in the majority class, creating a more equitable class distribution and enhancing the model's ability to identify patterns in both minority and majority classes. In the experimental phase, the detection performance of the method is verified on two datasets. Experimental results show that the accuracy reaches 97.7% on the CICIDS-2017 dataset and 85.5% on the natural gas pipeline dataset collected by Lan Turnipseed at Mississippi State University in the United States.
Keywords: intrusion detection, convolutional neural network, bidirectional long short-term memory neural network, multi-head self-attention mechanism
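The SMOTE-ENN resampling step mentioned above is available in the imbalanced-learn library; a small, hedged example of applying it before training is shown below (the synthetic feature matrix is only a placeholder for the actual flow features, which in practice would come from CICIDS-2017 or the gas-pipeline dataset):

```python
import numpy as np
from imblearn.combine import SMOTEENN   # pip install imbalanced-learn

# Placeholder features and labels standing in for preprocessed flow records.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40))
y = np.r_[np.zeros(1900, dtype=int), np.ones(100, dtype=int)]  # imbalanced classes

# SMOTE over-samples the minority class, then ENN removes noisy majority samples.
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X, y)
print(np.bincount(y), "->", np.bincount(y_res))
```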
Geological information prediction for shield machine using an enhanced multi-head self-attention convolution neural network with two-stage feature extraction (cited 3 times)
7
Authors: Chengjin Qin, Guoqiang Huang, Honggan Yu, Ruihong Wu, Jianfeng Tao, Chengliang Liu. Geoscience Frontiers, SCIE CAS CSCD, 2023, No. 2, pp. 86-104 (19 pages)
Because of the closed working environment of shield machines, construction personnel cannot observe the geological environment during construction, which seriously restricts the safety and efficiency of the tunneling process. In this study, we present an enhanced multi-head self-attention convolution neural network (EMSACNN) with two-stage feature extraction for geological condition prediction of shield machines. First, we select 30 important parameters according to statistical analysis and the working principle of the shield machine. Then, we delete the non-working sample data and combine 10 consecutive records as the input of the model. Thereafter, to deeply mine essential and relevant features, we build a novel model tailored to the particularity of the geological type recognition task, in which an enhanced multi-head self-attention block serves as the first feature extractor to fully capture the correlation of geological information across adjacent tunnel working faces, and a two-dimensional CNN (2dCNN) serves as the second feature extractor. The performance and superiority of the proposed EMSACNN are verified on actual data collected by the shield machine used in the construction of a double-track tunnel in Guangzhou, China. The results show that EMSACNN achieves at least 96% accuracy on the test sets of the two tunnels, and all of its evaluation indicators are much better than those of classical AI models and of a model that uses only the second-stage feature extractor. Therefore, the proposed EMSACNN achieves high accuracy and strong generalization for geological information prediction of shield machines, which is of great guiding significance for engineering practice.
Keywords: geological information prediction, shield machine, enhanced multi-head self-attention, CNN
Detecting APT-Exploited Processes through Semantic Fusion and Interaction Prediction
8
Authors: Bin Luo, Liangguo Chen, Shuhua Ruan, Yonggang Luo. Computers, Materials & Continua, SCIE EI, 2024, No. 2, pp. 1731-1754 (24 pages)
Considering the stealthiness and persistence of Advanced Persistent Threats (APTs), recent studies leverage system audit logs to construct system entity interaction provenance graphs that unveil threats in a host. Rule-based provenance graph APT detection approaches require elaborate rules and cannot detect unknown attacks, while existing learning-based approaches are limited by the lack of available APT attack samples or generally only perform graph-level anomaly detection, which requires extensive manual effort to locate attack entities. This paper proposes an APT-exploited process detection approach called ThreatSniffer, which constructs a benign provenance graph from attack-free audit logs, fits normal system entity interactions, and then detects APT-exploited processes by predicting the rationality of entity interactions. First, ThreatSniffer characterizes system entities in terms of their file paths, interaction sequences, and the distribution of interaction types, and uses the multi-head self-attention mechanism to fuse these semantics. Then, based on the insight that APT-exploited processes interact with system entities they should not invoke, ThreatSniffer performs negative sampling on the benign provenance graph to generate non-existent edges, thus characterizing irrational entity interactions without requiring APT attack samples. Finally, it employs a heterogeneous graph neural network as the interaction prediction model to aggregate the contextual information of entity interactions and locate processes exploited by attackers, thereby achieving fine-grained APT detection. Evaluation results demonstrate that anomaly-based detection enables ThreatSniffer to identify all attack activities. Compared with the node-level APT detection method APT-KGL, ThreatSniffer achieves a 6.1% precision improvement because of its comprehensive understanding of entity semantics.
Keywords: advanced persistent threat, provenance graph, multi-head self-attention, graph neural network
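The negative-sampling idea above, generating "irrational" interactions as edges that do not exist in the benign provenance graph, can be illustrated with a toy sketch (the tiny graph and the uniform sampling policy are assumptions for demonstration only, not the paper's procedure):

```python
import random

# Toy benign provenance graph: (process, object) interaction edges.
benign_edges = {("bash", "/etc/hosts"), ("sshd", "/var/log/auth.log"),
                ("nginx", "/var/www/index.html")}
processes = sorted({p for p, _ in benign_edges})
objects = sorted({o for _, o in benign_edges})

def sample_negative_edges(k, seed=0):
    """Draw k (process, object) pairs that never appear in the benign graph;
    these stand in for irrational interactions, so no APT samples are needed."""
    rng = random.Random(seed)
    negatives = set()
    while len(negatives) < k:       # assumes k <= number of possible non-edges
        edge = (rng.choice(processes), rng.choice(objects))
        if edge not in benign_edges:
            negatives.add(edge)
    return list(negatives)

print(sample_negative_edges(3))
```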
Multi-scale persistent spatiotemporal transformer for long-term urban traffic flow prediction
9
Authors: Jia-Jun Zhong, Yong Ma, Xin-Zheng Niu, Philippe Fournier-Viger, Bing Wang, Zu-kuan Wei. Journal of Electronic Science and Technology, EI CAS CSCD, 2024, No. 1, pp. 53-69 (17 pages)
Long-term urban traffic flow prediction is an important task in the field of intelligent transportation, as it can help optimize traffic management and improve travel efficiency. To improve prediction accuracy, a crucial issue is how to model spatiotemporal dependency in urban traffic data. In recent years, many studies have adopted spatiotemporal neural networks to extract key information from traffic data. However, most models ignore the semantic spatial similarity between long-distance areas when mining spatial dependency. They also ignore the impact of predicted time steps on the next unpredicted time step when making long-term predictions. Moreover, these models lack a comprehensive data embedding process to represent complex spatiotemporal dependency. This paper proposes a multi-scale persistent spatiotemporal transformer (MSPSTT) model to perform accurate long-term traffic flow prediction in cities. MSPSTT adopts an encoder-decoder structure and incorporates temporal, periodic, and spatial features to fully embed urban traffic data and address these issues. The model consists of a spatiotemporal encoder and a spatiotemporal decoder, which rely on temporal, geospatial, and semantic-space multi-head attention modules to dynamically extract temporal, geospatial, and semantic characteristics. The spatiotemporal decoder combines the context information provided by the encoder, integrates the predicted time-step information, and is iteratively updated to learn the correlation between different time steps over a broader time range, improving the model's accuracy for long-term prediction. Experiments on four public transportation datasets demonstrate that MSPSTT outperforms existing models by up to 9.5% on three common metrics.
Keywords: graph neural network, multi-head attention mechanism, spatiotemporal dependency, traffic flow prediction
NFHP-RN: A Method of Few-Shot Network Attack Detection Based on the Network Flow Holographic Picture-ResNet
10
Authors: Tao Yi, Xingshu Chen, Mingdong Yang, Qindong Li, Yi Zhu. Computer Modeling in Engineering & Sciences, SCIE EI, 2024, No. 7, pp. 929-955 (27 pages)
Due to the rapid evolution of Advanced Persistent Threat (APT) attacks, the emergence of new and rare attack samples, even ones never seen before, makes it challenging for traditional rule-based detection methods to extract universal rules for effective detection. With progress in techniques such as transfer learning and meta-learning, few-shot network attack detection has advanced. However, challenges remain: time-sequence flow features cannot adapt to the fixed-length input requirement of deep learning, it is difficult to capture rich information from the original flow when samples are insufficient, and high-level abstract representation is hard to obtain. To address these challenges, a few-shot network attack detection method based on NFHP (Network Flow Holographic Picture)-RN (ResNet) is proposed. Specifically, leveraging inherent properties of images such as translation invariance, rotation invariance, scale invariance, and illumination invariance, network attack traffic features and contextual relationships are intuitively represented in the NFHP. In addition, an improved RN (ResNet) model is employed for high-level abstract feature extraction, ensuring that the extracted high-level abstract features preserve the detailed characteristics of the original traffic behavior regardless of changes in background traffic. Finally, a meta-learning model based on the self-attention mechanism is constructed, detecting novel APT few-shot network attacks through empirical generalization of the high-level abstract feature representations of known-class network attack behaviors. Experimental results demonstrate that the proposed method can learn high-level abstract features of network attacks across different traffic detail granularities. Compared with state-of-the-art methods, it achieves favorable accuracy, precision, recall, and F1 scores for the identification of unknown-class network attacks through cross-validation on multiple datasets.
Keywords: APT attacks, spatial pyramid pooling, NFHP (network flow holographic picture), ResNet, self-attention mechanism, meta-learning
Intelligent Fault Diagnosis Method of Rolling Bearings Based on Transfer Residual Swin Transformer with Shifted Windows
11
Authors: Haomiao Wang, Jinxi Wang, Qingmei Sui, Faye Zhang, Yibin Li, Mingshun Jiang, Phanasindh Paitekul. Structural Durability & Health Monitoring, EI, 2024, No. 2, pp. 91-110 (20 pages)
Due to their robust learning and expression ability for complex features, deep learning (DL) models play a vital role in bearing fault diagnosis. However, since labeled samples are scarce in fault diagnosis, the depth of DL models used for fault diagnosis is generally shallower than that of DL models in other fields, which limits diagnostic performance. To solve this problem, a novel transfer residual Swin Transformer (RST) is proposed for rolling bearings in this paper. The RST has 24 residual self-attention layers, which use a hierarchical design and shifted-window-based residual self-attention. Combined with transfer learning techniques, the transfer RST model uses pre-trained parameters from ImageNet. A new end-to-end fault diagnosis method based on the deep transfer RST is proposed. First, the wavelet transform converts the vibration signal into a wavelet time-frequency diagram, so the signal's time-frequency domain representation can be captured simultaneously. Second, the wavelet time-frequency diagram is fed to the RST model to obtain the fault type. Finally, the method is verified on public and self-built datasets. Experimental results show the superior performance of the method compared with a shallow neural network.
Keywords: rolling bearing, fault diagnosis, Transformer, self-attention mechanism
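The first step described above, turning the vibration signal into a wavelet time-frequency image, can be sketched with PyWavelets; the wavelet choice, scale range, sampling rate, and the synthetic signal below are assumptions, not the paper's settings:

```python
import numpy as np
import pywt   # pip install PyWavelets

fs = 12_000                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.2, 1 / fs)
# Placeholder vibration signal: an amplitude-modulated tone plus noise.
signal = np.sin(2 * np.pi * 1600 * t) * (1 + 0.5 * np.sin(2 * np.pi * 30 * t))
signal += 0.2 * np.random.randn(t.size)

scales = np.arange(1, 129)                    # 128 scales -> 128-row scalogram
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                    # time-frequency image fed to the network
print(scalogram.shape)                        # (128, 2400)
```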
An Affective EEG Analysis Method Without Feature Engineering
12
Authors: Jian Zhang, Chunying Fang, Yanghao Wu, Mingjie Chang. Journal of Electronic Research and Application, 2024, No. 1, pp. 36-45 (10 pages)
Emotional electroencephalography (EEG) signals are a primary means of recording emotional brain activity. Currently, the most effective methods for analyzing emotional EEG signals involve feature engineering and neural networks. However, neural networks possess a strong ability for automatic feature extraction: is it possible to discard feature engineering and directly employ neural networks for end-to-end recognition? Based on the characteristics of EEG signals, this paper proposes an end-to-end feature extraction and classification method using a dynamic self-attention network (DySAT). The study reveals significant differences in brain activity patterns associated with different emotions across experimenters and time periods, and the experimental results provide insights into the reasons behind these differences.
Keywords: dynamic graph classification, self-attention mechanism, dynamic self-attention network, SEED dataset
Entity Relation Classification Based on Multi-Head Attention and Bi-LSTM (cited 11 times)
13
Authors: 刘峰, 高赛, 于碧辉, 郭放达. 《计算机系统应用》 (Computer Systems & Applications), 2019, No. 6, pp. 118-124 (7 pages)
Relation classification is an important task in natural language processing that supports knowledge graph construction, question answering, and information retrieval. Compared with traditional relation classification methods, models based on neural networks and attention mechanisms achieve better performance on a variety of relation classification tasks. However, most previous models adopt a single-layer attention mechanism, so their feature representations are relatively limited. Building on existing work, this paper introduces a multi-head attention mechanism so that the model can capture sentence information at multiple levels from different representation subspaces, improving its representational capacity. In addition to the word vectors and position vectors used as network input, dependency syntactic features and relative core-predicate dependency features are further introduced; the dependency syntactic features include the dependency relation of the current word and the position of its head node, allowing the model to capture more syntactic information from the text. Experimental results on the SemEval-2010 Task 8 dataset show that the method further improves performance compared with previous deep learning models.
Keywords: relation classification, Bi-LSTM, syntactic features, self-attention, multi-head attention
Fiber communication receiver models based on the multi-head attention mechanism
14
Authors: 臧裕斌, 于振明, 徐坤, 陈明华, 杨四刚, 陈宏伟. Chinese Optics Letters, SCIE EI CAS CSCD, 2023, No. 3, pp. 29-34 (6 pages)
In this paper, an artificial-intelligence-based fiber communication receiver model is put forward. With the multi-head attention mechanism it contains, this model can extract crucial patterns and map the transmitted signals into the bit stream. Once appropriately trained, it can restore information from signals whose transmission distances range from 0 to 100 km, signal-to-noise ratios range from 0 to 20 dB, modulation formats range from OOK to PAM4, and symbol rates range from 10 to 40 GBaud. The validity of the model is numerically demonstrated in MATLAB and PyTorch scenarios and compared with traditional communication receivers.
Keywords: fiber receiver model, neural networks, multi-head attention mechanism
Neural Machine Translation for the Electrical Engineering Domain Incorporating Low-Level Information (cited 1 time)
15
Authors: 陈媛, 陈红. 《河南科技大学学报(自然科学版)》 (Journal of Henan University of Science and Technology: Natural Science), CAS, Peking University Core, 2023, No. 6, pp. 42-48, M0004-M0005 (9 pages)
To address the loss of low-level information and the differing output deviations across stacked layers caused by the stacked internal structure of the mainstream neural machine translation model Transformer, this paper improves its structure and proposes a neural machine translation model that fuses low-level information. Multiple network structures are used to extract low-level features from the source language, and residual connections pass the low-level information upward. Experimental results show that, after fusing low-level information, the translation model improves the bilingual evaluation understudy (BLEU) score in the electrical engineering domain by up to 2.47 percentage points.
Keywords: neural machine translation, electrical engineering, low-level information, multi-head self-attention
circ2CBA: prediction of circRNA-RBP binding sites combining deep learning and attention mechanism (cited 1 time)
16
Authors: Yajing Guo, Xiujuan Lei, Lian Liu, Yi Pan. Frontiers of Computer Science, SCIE EI CSCD, 2023, No. 5, pp. 217-225 (9 pages)
Circular RNAs (circRNAs) are RNAs with a closed circular structure that are involved in many biological processes through key interactions with RNA-binding proteins (RBPs). Existing methods for predicting these interactions have limitations in feature learning. In view of this, we propose a method named circ2CBA, which uses only the sequence information of circRNAs to predict circRNA-RBP binding sites. We constructed a dataset comprising eight sub-datasets. First, circ2CBA encodes circRNA sequences with the one-hot method. Next, a two-layer convolutional neural network (CNN) initially extracts the features. After the CNN, circ2CBA uses a bidirectional long short-term memory network (BiLSTM) layer and the self-attention mechanism to learn the features. The AUC value of circ2CBA reaches 0.8987. A comparison of circ2CBA with three other methods on our dataset and an ablation experiment confirm that circ2CBA is an effective method for predicting binding sites between circRNAs and RBPs.
Keywords: circRNAs, RBPs, CNN, BiLSTM, self-attention mechanism
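The one-hot encoding step that circ2CBA applies to circRNA sequences can be sketched as follows; the fixed sequence length and zero-padding scheme are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

BASES = "ACGU"   # RNA alphabet

def one_hot_encode(seq, length=101):
    """Encode an RNA fragment as a (length, 4) one-hot matrix.
    Sequences are truncated or zero-padded to `length` (illustrative choice)."""
    seq = seq.upper()[:length]
    mat = np.zeros((length, len(BASES)), dtype=np.float32)
    for i, base in enumerate(seq):
        if base in BASES:                 # unknown bases stay all-zero
            mat[i, BASES.index(base)] = 1.0
    return mat

x = one_hot_encode("AUGGCUACGUUAGC")
print(x.shape)        # (101, 4), ready to feed into the CNN layers
```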
3D Object Detection with Attention: Shell-Based Modeling
17
Authors: Xiaorui Zhang, Ziquan Zhao, Wei Sun, Qi Cui. Computer Systems Science & Engineering, SCIE EI, 2023, No. 7, pp. 537-550 (14 pages)
LIDAR point cloud-based 3D object detection aims to sense the surrounding environment by anchoring objects with bounding boxes (BBox). However, in the three-dimensional space of autonomous driving scenes, previous object detection methods, because they pre-process the original LIDAR point cloud into voxels or pillars, lose the coordinate information of the original point cloud, detect slowly, and produce inaccurate bounding-box positioning. To address these issues, this study proposes a new two-stage network structure that extracts point cloud features directly with PointNet++, which effectively preserves the original point cloud coordinate information. To improve detection accuracy, a shell-based modeling method is proposed: it roughly determines which spherical shell the coordinates belong to, and the results are then refined toward the ground truth, narrowing the localization range and improving detection accuracy. To improve the recall of 3D object detection with bounding boxes, this paper designs a self-attention module with a skip connection structure. Some features are highlighted by weighting them along the feature dimensions; after training, the feature weights that favor object detection become larger, so the extracted features are better adapted to the object detection task. Extensive comparison experiments and ablation experiments on the KITTI dataset verify the effectiveness of the proposed method in improving recall and precision.
Keywords: 3D object detection, autonomous driving, point cloud, shell-based modeling, self-attention mechanism
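The coarse step of shell-based modeling, deciding which spherical shell a coordinate falls into before refinement, can be illustrated with a simple radial binning (the shell radii and centre below are assumptions for demonstration, not the paper's parameterization):

```python
import numpy as np

def assign_shells(points, center, shell_edges):
    """Coarsely bin 3D points into concentric spherical shells around `center`.

    points: (N, 3) LIDAR coordinates; shell_edges: increasing radii in metres.
    Returns shell indices in [0, len(shell_edges)], where the last bin means
    'beyond the outermost shell'. Radii here are illustrative assumptions.
    """
    radii = np.linalg.norm(points - center, axis=1)
    return np.digitize(radii, shell_edges)

pts = np.random.uniform(-40, 40, size=(5, 3))        # placeholder point cloud
center = np.zeros(3)                                  # assumed object anchor
print(assign_shells(pts, center, shell_edges=np.array([1.0, 2.0, 4.0, 8.0])))
```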
Research on Multi-Modal Time Series Data Prediction Method Based on Dual-Stage Attention Mechanism
18
Authors: Xinyu Liu, Yulong Meng, Fangwei Liu, Lingyu Chen, Xinfeng Zhang, Junyu Lin, Husheng Gou. 《国际计算机前沿大会会议论文集》, EI, 2023, No. 1, pp. 127-144 (18 pages)
Production data in the industrial field are characterized by multimodality, high dimensionality, and large differences in correlation between attributes. Existing data prediction methods cannot effectively capture time-series and modal features, which leads to prediction hysteresis and poor prediction stability. To address these problems, this paper proposes a time-series and modal feature enhancement method based on a dual-stage self-attention mechanism (DATT) and a time-series prediction method based on a gated feedforward recurrent unit (GFRU). On this basis, the DATT-GFRU neural network, with a gated feedforward recurrent neural network and a dual-stage self-attention mechanism, is designed and implemented. Experiments show that the prediction performance of the neural network model based on DATT is significantly improved. Compared with traditional prediction models, the DATT-GFRU neural network has a smaller average prediction error, stable prediction performance, and strong generalization ability on three datasets with different numbers of attributes and different training sample sizes.
Keywords: multi-modal time series data, recurrent neural network, self-attention mechanism
A Chinese Text Classification Method Incorporating Multi-Head Self-Attention (cited 7 times)
19
Authors: 熊漩, 严佩敏. 《电子测量技术》 (Electronic Measurement Technology), 2020, No. 10, pp. 125-130 (6 pages)
In Chinese text classification, deep neural network methods have the advantages of automatic feature extraction and strong representational capacity, but their interpretability is limited. This paper proposes a Text-CNN + Multi-Head Attention model, introducing a multi-head self-attention mechanism to overcome the limited interpretability of Text-CNN. First, a Text-CNN network efficiently extracts local features of the text; then the multi-head self-attention mechanism exploits the parallelism of Text-CNN and emphasizes capturing global information of the text sequence; finally, feature extraction of the text is completed in both the temporal and spatial dimensions. Experimental results show that, compared with other models, the proposed model improves accuracy by 1% to 2% while maintaining computation speed.
Keywords: Chinese text classification, Text-CNN, multi-head self-attention
An Innovative Approach Utilizing Binary-View Transformer for Speech Recognition Task (cited 3 times)
20
Authors: Muhammad Babar Kamal, Arfat Ahmad Khan, Faizan Ahmed Khan, Malik Muhammad Ali Shahid, Chitapong Wechtaisong, Muhammad Daud Kamal, Muhammad Junaid Ali, Peerapong Uthansakul. Computers, Materials & Continua, SCIE EI, 2022, No. 9, pp. 5547-5562 (16 pages)
Deep learning advancements have greatly improved the performance of speech recognition systems, and most recent systems are based on the Recurrent Neural Network (RNN). Overall, the RNN works fine with small sequences but suffers from the gradient vanishing problem on long sequences. Transformer networks have neutralized this issue and have shown state-of-the-art results on sequential and speech-related data. Generally, in speech recognition, the input audio is converted into an image using a Mel-spectrogram to illustrate frequencies and intensities, and the image is classified by the machine learning mechanism to generate a classification transcript. However, the audio frequency in the image has low resolution, causing inaccurate predictions. This paper presents a novel end-to-end binary-view transformer-based architecture for speech recognition to cope with the frequency resolution problem. First, the input audio signal is transformed into a 2D image using a Mel-spectrogram. Second, modified universal transformers utilize multi-head attention to derive contextual information and different speech-related features. Moreover, a feedforward neural network is deployed for classification. The proposed system produces robust results on Google's Speech Commands dataset with an accuracy of 95.16% and minimal loss. The binary-view transformer reduces the risk of over-fitting by deploying a multi-view mechanism to diversify the input data, and multi-head attention captures multiple contexts from the data's feature map.
Keywords: convolutional neural network, multi-head attention, multi-view, RNN, self-attention, speech recognition, Transformer
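The first stage of the pipeline above, turning input audio into a Mel-spectrogram image, can be sketched with librosa; the sample rate, FFT size, hop length, and mel-band count below are assumptions, not values from the paper:

```python
import numpy as np
import librosa   # pip install librosa

sr = 16_000                                   # assumed sample rate for speech commands
audio = np.random.randn(sr)                   # 1 s placeholder waveform

mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=400,
                                     hop_length=160, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)   # image-like input for the transformer
print(log_mel.shape)                             # (64, 101)
```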