Journal Articles
4 articles found
1. RepBoTNet-CESA: An Alzheimer’s Disease Computer Aided Diagnosis Method Using Structural Reparameterization BoTNet and Cubic Embedding Self Attention
Authors: Xiabin Zhang, Zhongyi Hu, Lei Xiao, Hui Huang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2879-2905 (27 pages)
Various deep learning models have been proposed for the accurate assisted diagnosis of early-stage Alzheimer’s disease (AD). Most studies predominantly employ Convolutional Neural Networks (CNNs), which focus solely on local features, thus encountering difficulties in handling global features. In contrast to natural images, Structural Magnetic Resonance Imaging (sMRI) images exhibit a higher number of channel dimensions. However, during the Position Embedding stage of Multi Head Self Attention (MHSA), the coded information related to the channel dimension is disregarded. To tackle these issues, we propose the RepBoTNet-CESA network, an advanced AD-aided diagnostic model that is capable of learning local and global features simultaneously. It combines the advantages of CNN networks in capturing local information and Transformer networks in integrating global information, reducing computational costs while achieving excellent classification performance. Moreover, it uses the Cubic Embedding Self Attention (CESA) proposed in this paper to incorporate the channel code information, enhancing the classification performance within the Transformer structure. Finally, the RepBoTNet-CESA performs well in various AD-aided diagnosis tasks, with an accuracy of 96.58%, precision of 97.26%, and recall of 96.23% in the AD/NC task; an accuracy of 92.75%, precision of 92.84%, and recall of 93.18% in the EMCI/NC task; and an accuracy of 80.97%, precision of 83.86%, and recall of 80.91% in the AD/EMCI/LMCI/NC task. This demonstrates that RepBoTNet-CESA delivers outstanding outcomes in various AD-aided diagnostic tasks. Furthermore, our study has shown that MHSA exhibits superior performance compared to conventional attention mechanisms in enhancing ResNet performance. Besides, the deeper RepBoTNet-CESA network fails to make further progress in AD-aided diagnostic tasks.
Keywords: Alzheimer; CNN; structural reparameterization; multi-head self-attention; computer-aided diagnosis
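The abstract describes CESA only at a high level, so the sketch below is purely illustrative of the underlying idea: a BoTNet-style self-attention block whose two spatial position tables (height and width) are extended with a third, channel-wise table, making the position code "cubic" rather than planar. The class and parameter names are hypothetical, not the authors' code.

```python
import torch
import torch.nn as nn

class CubicEmbeddingSelfAttention(nn.Module):
    """Illustrative sketch only: MHSA on a flattened H*W feature map, with
    height + width position tables extended by a channel-wise table."""

    def __init__(self, dim: int, heads: int = 4, h: int = 14, w: int = 14):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.dh, self.h, self.w = heads, dim // heads, h, w
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.row = nn.Parameter(torch.randn(h, dim) * 0.02)    # height axis
        self.col = nn.Parameter(torch.randn(w, dim) * 0.02)    # width axis
        self.chan = nn.Parameter(torch.randn(dim) * 0.02)      # channel axis

    def forward(self, x: torch.Tensor) -> torch.Tensor:        # x: (B, H*W, dim)
        B, N, D = x.shape
        assert N == self.h * self.w
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        split = lambda t: t.view(B, N, self.heads, self.dh).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)                 # (B, heads, N, dh)
        # "cubic" position code: height + width + channel terms, one per token
        pos = (self.row[:, None, :] + self.col[None, :, :] + self.chan).reshape(N, D)
        p = pos.view(N, self.heads, self.dh).permute(1, 0, 2)  # (heads, N, dh)
        logits = (q @ k.transpose(-2, -1)                      # content-content
                  + q @ p.transpose(-2, -1).unsqueeze(0)       # content-position
                  ) * self.dh ** -0.5
        return (logits.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, D)

# e.g. CubicEmbeddingSelfAttention(64)(torch.randn(2, 14 * 14, 64))
```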
2. Short-term and long-term memory self-attention network for segmentation of tumours in 3D medical images
Authors: Mingwei Wen, Quan Zhou, Bo Tao, Pavel Shcherbakov, Yang Xu, Xuming Zhang. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, Issue 4, pp. 1524-1537 (14 pages)
Tumour segmentation in medical images (especially 3D tumour segmentation) is highly challenging due to the possible similarity between tumours and adjacent tissues, occurrence of multiple tumours and variable tumour shapes and sizes. The popular deep learning-based segmentation algorithms generally rely on the convolutional neural network (CNN) and Transformer. The former cannot extract the global image features effectively while the latter lacks the inductive bias and involves the complicated computation for 3D volume data. The existing hybrid CNN-Transformer network can only provide the limited performance improvement or even poorer segmentation performance than the pure CNN. To address these issues, a short-term and long-term memory self-attention network is proposed. Firstly, a distinctive self-attention block uses the Transformer to explore the correlation among the region features at different levels extracted by the CNN. Then, the memory structure filters and combines the above information to exclude the similar regions and detect the multiple tumours. Finally, the multi-layer reconstruction blocks will predict the tumour boundaries. Experimental results demonstrate that our method outperforms other methods in terms of subjective visual and quantitative evaluation. Compared with the most competitive method, the proposed method provides Dice (82.4% vs. 76.6%) and 95% Hausdorff distance (HD95) (10.66 vs. 11.54 mm) on KiTS19, as well as Dice (80.2% vs. 78.4%) and HD95 (9.632 vs. 12.17 mm) on LiTS.
Keywords: 3D medical images; convolutional neural network; self-attention network; Transformer; tumour segmentation
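For reference, the Dice score quoted in the abstract measures volume overlap between predicted and ground-truth masks, while HD95 is the 95th percentile of boundary surface distances. A minimal Dice computation for binary 3D masks might look like the following (an illustrative helper, not the authors' evaluation code):

```python
import torch

def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice overlap for binary masks of shape (D, H, W): 2|A∩B| / (|A| + |B|).
    A perfect segmentation scores 1.0; no overlap scores ~0."""
    pred, target = pred.float().flatten(), target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# e.g. dice_score(model_mask, gt_mask) -> tensor(0.8240) would correspond
# to the 82.4% Dice reported on KiTS19
```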
3. Ext-ICAS: A Novel Self-Normalized Extractive Intra Cosine Attention Similarity Summarization
Authors: P. Sharmila, C. Deisy, S. Parthasarathy. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 4, pp. 377-393 (17 pages)
With the continuous growth of online news articles, there arises the necessity for an efficient abstractive summarization technique for the problem of information overloading. Abstractive summarization is highly complex and requires a deeper understanding and proper reasoning to come up with its own summary outline. The abstractive summarization task is framed as seq2seq modeling. Existing seq2seq methods perform better on short sequences; however, for long sequences, the performance degrades due to high computation, and hence a two-phase self-normalized deep neural document summarization model consisting of improvised extractive cosine normalization and seq2seq abstractive phases has been proposed in this paper. The novelty is to parallelize the sequence computation training by incorporating a feed-forward, self-normalized neural network in the extractive phase using Intra Cosine Attention Similarity (Ext-ICAS) with sentence dependency position. Also, it does not require any normalization technique explicitly. Our proposed abstractive Bidirectional Long Short-Term Memory (Bi-LSTM) encoder sequence model performs better than the Bidirectional Gated Recurrent Unit (Bi-GRU) encoder, with minimum training loss and fast convergence. The proposed model was evaluated on the Cable News Network (CNN)/Daily Mail dataset; an average ROUGE score of 0.435 was achieved, and the computational cost of training in the extractive phase was reduced by 59% in terms of the average number of similarity computations.
Keywords: abstractive summarization; natural language processing; sequence-to-sequence learning (seq2seq); self-normalization; intra (self) attention
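The abstract's claim that no explicit normalization is needed follows from cosine similarity being bounded in [-1, 1]. A toy sketch of cosine-based intra-attention sentence scoring for the extractive phase could look like this (hypothetical helper names; the paper's actual Ext-ICAS also incorporates sentence dependency position, which is omitted here):

```python
import numpy as np

def cosine_intra_attention_scores(sent_vecs: np.ndarray) -> np.ndarray:
    """Score each sentence by its mean cosine similarity to the others.
    Cosine is already bounded in [-1, 1], so no softmax or other explicit
    normalization step is required."""
    norms = np.linalg.norm(sent_vecs, axis=1, keepdims=True)
    unit = sent_vecs / np.clip(norms, 1e-9, None)
    sim = unit @ unit.T                 # pairwise cosine similarity matrix
    np.fill_diagonal(sim, 0.0)          # ignore self-similarity
    return sim.mean(axis=1)             # one salience score per sentence

def extract_top_k(sentences: list, scores: np.ndarray, k: int = 3) -> list:
    """Keep the k highest-scoring sentences in their original order."""
    keep = sorted(np.argsort(scores)[-k:])
    return [sentences[i] for i in keep]
```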
4. Enhanced image captioning based on an improved Transformer_decoder
Authors: 林椹尠, 屈嘉欣, 罗亮. Computer and Modernization (计算机与现代化), 2023, Issue 1, pp. 7-12 (6 pages)
The Transformer decoder (Transformer_decoder) model has been widely applied to image captioning tasks, where the self-attention mechanism (Self Attention) captures fine-grained features to achieve deeper image understanding. This paper improves the Self Attention mechanism in two ways: a Vision-Boosted Attention (VBA) mechanism and a Relative-Position Attention (RPA) mechanism. The VBA mechanism adds a VBA layer to the Transformer_decoder, introducing visual features into the Self Attention model as auxiliary information to guide the decoder towards generating descriptions that better match the image content. The RPA mechanism builds on Self Attention by introducing trainable relative-position parameters, adding word-to-word relative position relations to the input sequence. Experiments on COCO2014 show that both VBA and RPA bring some improvement to the image captioning task, and that a decoder combining the two attention mechanisms produces better semantic descriptions.
Keywords: image captioning; Transformer model; Self Attention mechanism; relative-position attention mechanism; vision-boosted attention mechanism
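The RPA idea described above, trainable relative-position parameters added on top of Self Attention, resembles Shaw-style relative position encodings. A single-head sketch under that assumption (illustrative only, not the authors' code):

```python
import torch
import torch.nn as nn

class RelativePositionAttention(nn.Module):
    """Single-head sketch of the RPA idea: attention logits receive a
    trainable bias indexed by the clipped relative distance between words."""

    def __init__(self, dim: int, max_dist: int = 16):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        # one learnable parameter per relative offset in [-max_dist, max_dist]
        self.rel_bias = nn.Parameter(torch.zeros(2 * max_dist + 1))
        self.max_dist, self.scale = max_dist, dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:        # x: (B, N, dim)
        B, N, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        idx = torch.arange(N, device=x.device)
        rel = (idx[None, :] - idx[:, None]).clamp(-self.max_dist, self.max_dist) + self.max_dist
        logits = q @ k.transpose(-2, -1) * self.scale + self.rel_bias[rel]
        return logits.softmax(dim=-1) @ v
```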