Funding: the Key Project of Zhejiang Provincial Natural Science Foundation under Grants LD21F020001 and Z20F020022; the National Natural Science Foundation of China under Grants 62072340 and 62076185; and the Major Project of Wenzhou Natural Science Foundation under Grants 2021HZSY0071 and ZS2022001.
Abstract: Various deep learning models have been proposed for the accurate assisted diagnosis of early-stage Alzheimer's disease (AD). Most studies employ Convolutional Neural Networks (CNNs), which focus solely on local features and therefore struggle to capture global features. Unlike natural images, structural Magnetic Resonance Imaging (sMRI) images have a higher number of channel dimensions; however, during the position-embedding stage of Multi-Head Self-Attention (MHSA), the encoded information related to the channel dimension is disregarded. To tackle these issues, we propose the RepBoTNet-CESA network, an AD-aided diagnostic model capable of learning local and global features simultaneously. It combines the strength of CNNs in capturing local information with that of Transformers in integrating global information, reducing computational cost while achieving excellent classification performance. Moreover, it uses the Cubic Embedding Self-Attention (CESA) proposed in this paper to incorporate the channel encoding information, enhancing classification performance within the Transformer structure. RepBoTNet-CESA performs well across AD-aided diagnosis tasks, with an accuracy of 96.58%, precision of 97.26%, and recall of 96.23% on the AD/NC task; an accuracy of 92.75%, precision of 92.84%, and recall of 93.18% on the EMCI/NC task; and an accuracy of 80.97%, precision of 83.86%, and recall of 80.91% on the AD/EMCI/LMCI/NC task. Furthermore, our study shows that MHSA outperforms conventional attention mechanisms in enhancing ResNet performance, while a deeper RepBoTNet-CESA network yields no further gains on AD-aided diagnostic tasks.
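As a rough illustration of the CESA idea, the sketch below is a minimal PyTorch assumption, not the authors' implementation (the class and parameter names, such as CubicEmbeddingSelfAttention, are hypothetical). It adds a learnable per-channel embedding alongside the usual spatial position embedding before multi-head self-attention, so that channel-wise coding information also enters the attention computation:

```python
# Hedged sketch only: a per-channel ("cubic") embedding is injected next to
# the spatial position embedding, under the assumption that this is the
# general mechanism the abstract describes for CESA.
import torch
import torch.nn as nn

class CubicEmbeddingSelfAttention(nn.Module):
    def __init__(self, channels: int, height: int, width: int, num_heads: int = 4):
        super().__init__()
        assert channels % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = channels // num_heads
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1, bias=False)
        # Standard position embedding encodes spatial location only...
        self.pos_embed = nn.Parameter(torch.zeros(1, 1, height, width))
        # ...so a separate per-channel embedding restores the channel
        # information that plain MHSA position embedding disregards.
        self.chan_embed = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x = x + self.pos_embed + self.chan_embed  # inject spatial + channel codes
        q, k, v = self.qkv(x).chunk(3, dim=1)

        def split(t):
            # (b, c, h, w) -> (b, heads, tokens, head_dim)
            return t.reshape(b, self.num_heads, self.head_dim, h * w).transpose(-1, -2)

        q, k, v = split(q), split(k), split(v)
        attn = (q @ k.transpose(-1, -2)) / self.head_dim ** 0.5
        out = attn.softmax(dim=-1) @ v
        return out.transpose(-1, -2).reshape(b, c, h, w)

if __name__ == "__main__":
    layer = CubicEmbeddingSelfAttention(channels=64, height=14, width=14)
    print(layer(torch.randn(2, 64, 14, 14)).shape)  # torch.Size([2, 64, 14, 14])
```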
Funding: supported by the National Key Research and Development Program of China under Grant No. 2018YFE0206900, the National Natural Science Foundation of China under Grant No. 61871440, and the CAAI-Huawei MindSpore Open Fund.
Abstract: Tumour segmentation in medical images (especially 3D tumour segmentation) is highly challenging due to the possible similarity between tumours and adjacent tissues, the occurrence of multiple tumours, and variable tumour shapes and sizes. Popular deep learning-based segmentation algorithms generally rely on the convolutional neural network (CNN) or the Transformer. The former cannot extract global image features effectively, while the latter lacks inductive bias and involves complicated computation for 3D volume data. Existing hybrid CNN-Transformer networks provide only limited performance improvement, or even poorer segmentation performance than a pure CNN. To address these issues, a short-term and long-term memory self-attention network is proposed. First, a distinctive self-attention block uses the Transformer to explore the correlation among the region features at different levels extracted by the CNN. Then, the memory structure filters and combines this information to exclude similar regions and detect multiple tumours. Finally, multi-layer reconstruction blocks predict the tumour boundaries. Experimental results demonstrate that our method outperforms other methods in both subjective visual and quantitative evaluation. Compared with the most competitive method, the proposed method achieves a Dice of 82.4% vs. 76.6% and a 95% Hausdorff distance (HD95) of 10.66 vs. 11.54 mm on KiTS19, as well as a Dice of 80.2% vs. 78.4% and an HD95 of 9.632 vs. 12.17 mm on LiTS.
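To make the hybrid design concrete, here is a hedged PyTorch sketch of the general pattern the abstract describes: CNN features from two levels are flattened into tokens, a Transformer block correlates the regions, and a simple sigmoid gate stands in for the memory structure's filtering. The module names are illustrative, and a 2D toy input replaces the 3D volumes for brevity; this is not the paper's architecture.

```python
# Hedged sketch: multi-level CNN features -> Transformer self-attention ->
# gated filtering, as a stand-in for the short/long-term memory structure.
import torch
import torch.nn as nn

class HybridCNNTransformerBlock(nn.Module):
    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        # Lightweight CNN stages producing two feature levels
        # (a real 3D segmentation network would use Conv3d).
        self.stage1 = nn.Sequential(nn.Conv2d(1, dim, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU())
        self.attn = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                               batch_first=True)
        # Gate playing the role of the memory structure: suppress tokens
        # from regions that look similar to tumours but are not.
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.stage1(x)               # lower-level region features
        f2 = self.stage2(f1)              # higher-level region features
        tokens = torch.cat([f1.flatten(2), f2.flatten(2)], dim=2).transpose(1, 2)
        fused = self.attn(tokens)         # correlate regions across levels
        return fused * self.gate(fused)   # keep informative tokens only

if __name__ == "__main__":
    block = HybridCNNTransformerBlock()
    out = block(torch.randn(2, 1, 32, 32))
    print(out.shape)  # (2, 16*16 + 8*8, 64) = (2, 320, 64)
```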
Abstract: With the continuous growth of online news articles, an efficient abstractive summarization technique is needed to address information overload. Abstractive summarization is highly complex and requires deeper understanding and proper reasoning to produce its own summary outline. The abstractive summarization task is framed as seq2seq modeling. Existing seq2seq methods perform well on short sequences; for long sequences, performance degrades due to the high computational cost. Hence, a two-phase self-normalized deep neural document summarization model is proposed in this paper, consisting of an improvised extractive cosine-normalization phase and a seq2seq abstractive phase. The novelty lies in parallelizing the sequence-computation training by incorporating a feed-forward, self-normalized neural network in the extractive phase using Intra Cosine Attention Similarity (Ext-ICAS) with sentence dependency position; no explicit normalization technique is required. The proposed abstractive Bidirectional Long Short-Term Memory (Bi-LSTM) encoder sequence model performs better than a Bidirectional Gated Recurrent Unit (Bi-GRU) encoder, with lower training loss and faster convergence. The model was evaluated on the Cable News Network (CNN)/Daily Mail dataset, achieving an average ROUGE score of 0.435; the computational training in the extractive phase was also reduced by 59% in the average number of similarity computations.
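The extractive scoring step can be illustrated with a small NumPy sketch of the intra-cosine-similarity idea: each sentence is scored by its average cosine similarity to the other sentences, and since cosine similarity is scale-invariant, no explicit normalization layer is needed. The function name and the random vectors standing in for learned sentence embeddings are assumptions, not the paper's code:

```python
# Hedged sketch of cosine-based extractive sentence scoring.
import numpy as np

def ext_icas_scores(sentence_vecs: np.ndarray) -> np.ndarray:
    """sentence_vecs: (num_sentences, dim) sentence embeddings."""
    # Normalise rows so dot products become cosine similarities;
    # this makes the scores scale-invariant by construction.
    unit = sentence_vecs / np.linalg.norm(sentence_vecs, axis=1, keepdims=True)
    sim = unit @ unit.T            # pairwise cosine similarity matrix
    np.fill_diagonal(sim, 0.0)     # ignore self-similarity
    return sim.mean(axis=1)        # salience score per sentence

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(5, 16))         # five toy sentence embeddings
    scores = ext_icas_scores(vecs)
    top = np.argsort(scores)[::-1][:2]      # extract the two most central sentences
    print(scores.round(3), "-> extract sentences", top)
```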